Abstract
Argument Unit Recognition and Classification aims at identifying argument units in text and classifying them as pro or contra. One of the design choices that must be made when developing systems for this task is the unit of classification: segments of tokens or full sentences. Previous research suggests that fine-tuning language models at the token level yields more robust results for classifying sentences than training on sentences directly. We reproduce the study that originally made this claim and further investigate what exactly token-based systems learned better than sentence-based ones. We develop systematic tests for analysing the behavioural differences between the token-based and the sentence-based system. Our results show that token-based models are generally more robust than sentence-based models, both on manually perturbed examples and on specific subpopulations of the data.
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the 9th Workshop on Argument Mining |
| Publisher | International Conference on Computational Linguistics (COLING) |
| Pages | 62-73 |
| Number of pages | 12 |
| Publication status | Published - 2022 |
Keywords
- cs.CL