Abstract
Although artificial intelligence is blamed for many societal challenges, it also has underexplored potential in political contexts online. We rely on six preregistered experiments in three countries (N = 6,728) to test the expectation that AI and AI-assisted humans would be perceived more favorably than humans (a) across various content moderation, generation, and recommendation scenarios and (b) when exposing individuals to counter-attitudinal political information. Contrary to the preregistered hypotheses, participants see human agents as more just than AI across the scenarios tested, with the exception of news recommendations. At the same time, participants are not more open to counter-attitudinal information attributed to AI rather than a human or an AI-assisted human. These findings, which—with minor variations—emerged across countries, scenarios, and issues, suggest that human intervention is preferred online and that people reject dissimilar information regardless of its source. We discuss the theoretical and practical implications of these findings.
Lay Summary
In the era of unprecedented political divides and misinformation, artificial intelligence (AI) and algorithms are often seen as the culprits. In contrast to these dominant narratives, we argued that AI might be seen as less biased than a human in online political contexts. We relied on six preregistered experiments in three countries (the United States, Spain, Poland) to test whether internet users perceive AI and AI-assisted humans more favorably than humans alone: (a) across various distinct online scenarios, and (b) when exposing people to opposing political information on a range of contentious issues. Contrary to our expectations, human agents were consistently perceived more favorably than AI except when recommending news. These findings suggest that people prefer human intervention in most online political contexts.
| Original language | English |
|---|---|
| Pages (from-to) | 223-243 |
| Number of pages | 21 |
| Journal | Journal of Computer-Mediated Communication |
| Volume | 26 |
| Issue number | 4 |
| Early online date | 14 Jun 2021 |
| DOIs | |
| Publication status | Published - Jul 2021 |
Bibliographical note
Publisher Copyright: © 2021 The Author(s). Published by Oxford University Press on behalf of International Communication Association.
Funding
| Funders | Funder number |
|---|---|
| Horizon 2020 Framework Programme | 756301 |
Keywords
- AI
- Algorithms
- Artificial Intelligence
- Bias
- Biased information processing
- Content moderation
- Counter-attitudinal views
- News
- News recommendations
- Online moderation
- Perceived justice
- Polarization
- Social media