Abstract
The climate crisis is a salient issue in online discussions, and hypocrisy accusations are a central rhetorical element in these debates. For large-scale text analysis, however, hypocrisy accusation detection is an understudied task, most often treated as a smaller subtask of fallacious argument detection. In this paper, we define hypocrisy accusation detection as an independent NLP task and identify relevant subtypes of hypocrisy accusations. Our Climate Hypocrisy Accusation Corpus (CHAC) consists of 420 Reddit climate debate comments, expert-annotated into two types of hypocrisy accusation: personal versus political hypocrisy. We evaluate few-shot (6-shot) in-context learning with 3 instruction-tuned Large Language Models (LLMs) for detecting hypocrisy accusations in this dataset. Results indicate that the GPT-4o and Llama-3 models in particular show promise in detecting hypocrisy accusations (F1 reaching 0.68, compared to an F1 of 0.44 in previous work). However, context matters for a complex semantic concept such as hypocrisy accusations, and we find that models struggle especially to identify political hypocrisy accusations compared to personal moral hypocrisy. Our study contributes new insights into hypocrisy detection and climate change discourse, and is a stepping stone for large-scale analysis of hypocrisy accusations in online climate debates.
| Original language | Undefined/Unknown |
|---|---|
| Title of host publication | Proceedings of the 4th Workshop on Computational Linguistics for the Political and Social Sciences |
| Subtitle of host publication | Long and short papers |
| Editors | Christopher Klamm, Gabriella Lapesa, Simone Paolo Ponzetto, Ines Rehbein, Indira Sen |
| Place of publication | Vienna, Austria |
| Publisher | Association for Computational Linguistics |
| Pages | 45-60 |
| Number of pages | 16 |
| Publication status | Published - 1 Sept 2024 |