Abstract
Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how well they truly serve their users. A promising way to address possible undesired biases is to give users control by letting them configure a system, thereby aligning its performance with their own preferences. However, as the present paper argues, this form of control over an algorithmic system demands an algorithmic literacy that also entails a certain way of making oneself knowable: users must interrogate their own dispositions and examine how these can be formalized so that they can be translated into the algorithmic system. This may, however, extend already existing practices through which people are monitored and probed, and it means that exerting such control requires users to direct a computational mode of thinking at themselves.
Original language | English |
---|---|
Pages (from-to) | 195-205 |
Number of pages | 11 |
Journal | AI and Society |
Volume | 39 |
Issue number | 1 |
DOIs | |
Publication status | Published - Feb 2024 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © The Author(s) 2022.
Funding
Open Access funding enabled and organized by Projekt DEAL. The author discloses receipt of the following financial support for the research, authorship, and/or publication of this article: This research has been conducted within the project “Deciding about, by, and together with algorithmic decision-making systems”, funded by the Volkswagen Foundation.
Funders | Funder number |
---|---|
Volkswagen Foundation | |
Keywords
- Algorithm bias
- Algorithmic decision-making
- Algorithmic literacy
- Surveillance