Abstract
In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
Original language | English |
---|---|
Article number | e2210666120 |
Pages (from-to) | 1-12 |
Number of pages | 12 |
Journal | Proceedings of the National Academy of Sciences of the United States of America |
Volume | 120 |
Issue number | 7 |
Early online date | 7 Feb 2023 |
DOIs | |
Publication status | Published - 14 Feb 2023 |
Bibliographical note
Funding Information: ACKNOWLEDGMENTS. The study was funded by a grant from the Volkswagen Foundation to R.H., S.L., and S.M.H. (Initiative “Artificial Intelligence and the Society of the Future”). S.L. was supported by a Research Award from the Humboldt Foundation in Germany while this research was conducted. S.L. also acknowledges financial support from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO). The authors thank the Ipsos Observer team for their help with data collection. We are also grateful to Spela Vrtovec for research assistance, to Deb Ain for editing the manuscript, and to our colleagues at the Center for Adaptive Rationality for their feedback and productive discussions.
Publisher Copyright:
Copyright © 2023 the Author(s).
Funding
Funders | Funder number |
---|---|
Alexander von Humboldt-Stiftung | |
European Research Council | 101020961 |
Volkswagen Foundation | |
Keywords
- conjoint experiment
- content moderation
- harmful content
- moral dilemma
- online speech