Perturbations and Subpopulations for Testing Robustness in Token-Based Argument Unit Recognition

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Argument Unit Recognition and Classification aims at identifying argument units in text and classifying them as for or against. One of the design choices that needs to be made when developing systems for this task is the unit of classification: segments of tokens or full sentences. Previous research suggests that fine-tuning language models at the token level yields more robust results for classifying sentences than training on sentences directly. We reproduce the study that originally made this claim and further investigate what exactly token-based systems learned better compared to sentence-based ones. We develop systematic tests for analysing the behavioural differences between the token-based and the sentence-based system. Our results show that token-based models are generally more robust than sentence-based models, both on manually perturbed examples and on specific subpopulations of the data.
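As a rough illustration of what such a perturbation-based robustness check could look like (this is a minimal sketch with hypothetical perturbations and toy models, not the test suite used in the paper), one can compare how often each system's prediction stays the same when an input sentence is minimally altered:

```python
# Illustrative sketch only: perturbation rules and models below are hypothetical
# stand-ins, not the paper's actual perturbations or classifiers.
from typing import Callable, List, Tuple

# A "model" here is any callable mapping a sentence to a stance label ("pro"/"con").
Model = Callable[[str], str]


def perturb_with_synonym(sentence: str, replacements: List[Tuple[str, str]]) -> str:
    """Apply simple word-level substitutions as a stand-in for manual perturbations."""
    for original, replacement in replacements:
        sentence = sentence.replace(original, replacement)
    return sentence


def robustness_rate(model: Model, pairs: List[Tuple[str, str]]) -> float:
    """Fraction of (original, perturbed) pairs on which the prediction is unchanged."""
    if not pairs:
        return 0.0
    unchanged = sum(model(orig) == model(pert) for orig, pert in pairs)
    return unchanged / len(pairs)


if __name__ == "__main__":
    # Toy lexical classifiers standing in for a sentence-based and a token-based system.
    sentence_model: Model = lambda s: "pro" if "benefit" in s else "con"
    token_model: Model = lambda s: "pro" if ("benefit" in s or "advantage" in s) else "con"

    originals = ["School uniforms benefit students."]
    pairs = [(s, perturb_with_synonym(s, [("benefit", "advantage")])) for s in originals]

    print("sentence-based robustness:", robustness_rate(sentence_model, pairs))
    print("token-based robustness:", robustness_rate(token_model, pairs))
```

The same prediction-consistency measure can be computed per subpopulation (e.g., sentences of a certain length or topic) by filtering the pairs before calling robustness_rate.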
Original language: English
Title of host publication: Proceedings of the 9th Workshop on Argument Mining
Publisher: International Conference on Computational Linguistics (COLING)
Pages: 62-73
Number of pages: 12
Publication status: Published - 2022

Keywords

  • cs.CL
