Cross-Domain Toxic Spans Detection

Research output: Working paper / Preprint › Academic


Abstract

Given the dynamic nature of toxic language use, automated methods for detecting toxic spans are likely to encounter distributional shift. To explore this phenomenon, we evaluate three approaches for detecting toxic spans under cross-domain conditions: lexicon-based, rationale extraction, and fine-tuned language models. Our findings indicate that a simple method using off-the-shelf lexicons performs best in the cross-domain setup. The cross-domain error analysis suggests that (1) rationale extraction methods are prone to false negatives, while (2) language models, despite performing best for the in-domain case, recall fewer explicitly toxic words than lexicons and are prone to certain types of false positives. Our code is publicly available at: https://github.com/sfschouten/toxic-cross-domain.
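The lexicon-based baseline mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy lexicon, function name, and character-offset output format (as used in the SemEval-style toxic spans task) are assumptions for illustration only; the paper uses off-the-shelf toxic-word lexicons.

```python
import re

# Illustrative toy lexicon; a real system would load an
# off-the-shelf toxic-word lexicon instead (assumption).
TOXIC_LEXICON = {"idiot", "stupid", "moron"}

def toxic_spans(text, lexicon=TOXIC_LEXICON):
    """Return the character offsets of all lexicon hits in `text`,
    mirroring the character-level span format common in toxic
    spans detection."""
    spans = []
    for match in re.finditer(r"\w+", text):
        # Case-insensitive exact word match against the lexicon.
        if match.group().lower() in lexicon:
            spans.extend(range(match.start(), match.end()))
    return spans

print(toxic_spans("You are a stupid idiot."))
# → [10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21]
```

A matcher like this has no notion of context, which is consistent with the error analysis above: it recalls explicitly toxic words well but cannot flag toxicity expressed without lexicon words.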
Original language: English
Pages: 1-17
Number of pages: 17
Publication status: Published - 16 Jun 2023

Bibliographical note

NLDB 2023

Keywords

  • cs.CL
  • cs.LG
Related research output

Cross-Domain Toxic Spans Detection

Schouten, S. F., Barbarestani, B., Tufa, W., Vossen, P. & Markov, I., 2023, Natural Language Processing and Information Systems: 28th International Conference on Applications of Natural Language to Information Systems, NLDB 2023, Derby, UK, June 21–23, 2023, Proceedings. Métais, E., Meziane, F., Manning, W., Reiff-Marganiec, S. & Sugumaran, V. (eds.). Springer Science and Business Media Deutschland GmbH, p. 533-545, 13 p. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); vol. 13913 LNCS).

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Open Access
