Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice

Saar Alon-Barkat, Madalina Busuioc

Research output: Contribution to Journal › Article › Academic › peer review

Abstract

Artificial intelligence algorithms are increasingly adopted as decisional aids by public bodies, with the promise of overcoming the biases of human decision-makers. At the same time, they may introduce new biases into the human–algorithm interaction. Drawing on the psychology and public administration literatures, we investigate two key biases: overreliance on algorithmic advice even in the face of “warning signals” from other sources (automation bias), and selective adoption of algorithmic advice when it corresponds to stereotypes (selective adherence). We assess these via three experimental studies conducted in the Netherlands. In study 1 (N = 605), we test automation bias by comparing participants’ adherence to an algorithmic prediction with their adherence to an equivalent human-expert prediction. We do not find evidence of automation bias. In study 2 (N = 904), we replicate these findings and additionally test selective adherence. We find a stronger propensity to adhere when the advice is aligned with group stereotypes, with no significant differences between algorithmic and human-expert advice. In study 3 (N = 1,345), we replicate our design with a sample of civil servants. This study was conducted shortly after a major scandal involving public authorities’ reliance on an algorithm with discriminatory outcomes (the “childcare benefits scandal”). The scandal itself illustrates our theory and the patterns diagnosed empirically in our experiments. Yet while study 3 supports our prior findings on automation bias, we do not find patterns of selective adherence there. We suggest this is driven by bureaucrats’ heightened awareness of discrimination and algorithmic biases in the aftermath of the scandal. We discuss the implications of our findings for public sector decision making in the age of automation. Overall, our study speaks to the potential negative effects of automating the administrative state for already vulnerable and disadvantaged citizens.
Original language: English
Pages (from-to): 153–169
Number of pages: 17
Journal: Journal of Public Administration Research and Theory
Volume: 33
Issue number: 1
Early online date: 8 Feb 2022
DOIs:
Publication status: Published - Jan 2023

Funding

This article is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 716439).

Funders: European Research Council; Horizon 2020 (funder number 716439)
