Abstract
Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech. While real-world arguments are tightly anchored in context, existing computational methods analyze their quality in isolation, which limits their accuracy and generalizability. We propose SPARK: a novel method for scoring argument quality based on contextualization via relevant knowledge. We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument. SPARK uses a dual-encoder Transformer architecture to enable the original argument and its augmentation to be considered jointly. Our experiments in both in-domain and zero-shot setups show that SPARK consistently outperforms existing techniques across multiple metrics.
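The dual-encoder idea described in the abstract can be illustrated with a minimal sketch: the argument and its LLM-generated augmentation are encoded separately, and a scoring head considers the two representations jointly. The toy hash-based `encode` function, the `spark_score` name, and the linear scoring head are all hypothetical stand-ins for the paper's actual Transformer encoders and training setup.

```python
import hashlib

DIM = 16  # toy embedding dimension (assumption, not from the paper)

def encode(text: str) -> list[float]:
    # Toy deterministic "encoder": hash-based bag-of-tokens embedding,
    # standing in for a Transformer encoder.
    vec = [0.0] * DIM
    tokens = text.lower().split()
    for token in tokens:
        digest = hashlib.md5(token.encode()).digest()
        for i in range(DIM):
            vec[i] += digest[i] / 255.0
    n = max(1, len(tokens))
    return [v / n for v in vec]

def spark_score(argument: str, augmentation: str, weights: list[float]) -> float:
    # Dual-encoder pattern: encode the argument and its augmentation
    # separately, then score them jointly via a linear head over the
    # concatenated representations.
    joint = encode(argument) + encode(augmentation)
    return sum(w * x for w, x in zip(weights, joint))

# Example: score an argument together with a counter-argument augmentation,
# one of the four augmentation types named in the abstract.
weights = [0.05] * (2 * DIM)  # untrained placeholder weights
score = spark_score(
    "School uniforms reduce peer pressure.",
    "Counter-argument: uniforms limit self-expression.",
    weights,
)
```

In the actual system the two encoders would be Transformers fine-tuned on quality labels; the sketch only shows how joint consideration of argument and augmentation differs from scoring the argument in isolation.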
| Original language | English |
|---|---|
| Pages | 316-326 |
| Number of pages | 11 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 - Hybrid, Mexico City, Mexico (16 Jun 2024 → 21 Jun 2024) |
Conference
| Conference | 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 |
|---|---|
| Country/Territory | Mexico |
| City | Hybrid, Mexico City |
| Period | 16/06/24 → 21/06/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.