Abstract
Disagreements are common in online societal deliberation and may be crucial for effective collaboration, for instance in helping users understand opposing viewpoints. Although automated methods for recognizing disagreement exist, a deeper understanding of the factors that influence disagreement is currently missing. We investigate the hypothesis that differences in personal values influence disagreement in online discussions. Using Large Language Models (LLMs) to estimate both profiles of personal values and disagreement, we conduct a large-scale experiment involving 11.4M user comments. We find that the dissimilarity of value profiles correlates with disagreement only in specific cases, and that incorporating self-reported value profiles makes these results less conclusive.
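The abstract describes correlating the dissimilarity of value profiles with disagreement scores. The sketch below illustrates one plausible way such an analysis could be set up, assuming value profiles are numeric vectors (e.g., LLM-estimated scores over a fixed set of personal values) and disagreement is a per-comment-pair score. The cosine-distance and Spearman-correlation choices are illustrative assumptions, not necessarily the paper's actual pipeline.

```python
# Illustrative sketch (assumed setup, not the paper's pipeline):
# correlate value-profile dissimilarity with disagreement across user pairs.
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr


def value_dissimilarity_vs_disagreement(value_profiles, pairs):
    """Hypothetical analysis step.

    value_profiles: dict mapping user id -> vector of value scores
        (e.g., LLM-estimated scores over a set of personal values)
    pairs: iterable of (user_a, user_b, disagreement), where disagreement
        is an LLM-estimated score for a comment-reply pair
    Returns the Spearman rank correlation between profile dissimilarity
    and disagreement, plus its p-value.
    """
    dissimilarities, disagreements = [], []
    for user_a, user_b, disagreement in pairs:
        profile_a = np.asarray(value_profiles[user_a], dtype=float)
        profile_b = np.asarray(value_profiles[user_b], dtype=float)
        # Cosine distance = 1 - cosine similarity of the two value profiles
        dissimilarities.append(cosine(profile_a, profile_b))
        disagreements.append(disagreement)
    rho, p_value = spearmanr(dissimilarities, disagreements)
    return rho, p_value
```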
| Original language | English |
| --- | --- |
| Title of host publication | HHAI 2024: Hybrid Human AI Systems for the Social Good |
| Subtitle of host publication | Proceedings of the Third International Conference on Hybrid Human-Artificial Intelligence |
| Editors | Fabian Lorig, Jason Tucker, Adam Dahlgren Lindstrom, Frank Dignum, Pradeep Murukannaiah, Andreas Theodorou, Pinar Yolum |
| Publisher | IOS Press BV |
| Pages | 481-484 |
| Number of pages | 4 |
| ISBN (Electronic) | 9781643685229 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI 2024, Hybrid, Malmo, Sweden; Duration: 10 Jun 2024 → 14 Jun 2024 |
Publication series
| Name | Frontiers in Artificial Intelligence and Applications |
| --- | --- |
| Volume | 386 |
| ISSN (Print) | 0922-6389 |
| ISSN (Electronic) | 1879-8314 |
Conference
| Conference | 3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI 2024 |
| --- | --- |
| Country/Territory | Sweden |
| City | Hybrid, Malmo |
| Period | 10/06/24 → 14/06/24 |
Bibliographical note
Publisher Copyright: © 2024 The Authors.
Keywords
- hybrid intelligence
- natural language processing
- perspectives
- values