Abstract
Recent initiatives by cultural heritage institutions to address outdated and offensive language in their collections demonstrate the need for a better understanding of when terms are problematic or contentious. This paper presents an annotated dataset of 2,715 unique samples of terms in context, drawn from a historical newspaper archive, collating 21,800 contentiousness annotations from expert and crowd workers. We describe the contents of the corpus by analysing inter-rater agreement and differences between experts and crowd workers. In addition, we demonstrate the potential of the corpus for automated detection of contentiousness: a simple classifier applied to the embedding representation of a target word achieves better-than-baseline performance in predicting contentiousness. We find that both the term itself and the context in which it appears play a role in whether a term is considered contentious.
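The abstract does not name the encoder or the classifier, so as a minimal sketch only: assuming a BERT-style contextual encoder (via Hugging Face `transformers`) and logistic regression as the "simple classifier", with hypothetical example sentences and labels, the approach of classifying contentiousness from the embedding of a target word in context could look roughly like this:

```python
# Minimal sketch: classify contentiousness from the contextual embedding of a
# target word. Model name, pooling strategy, classifier, and example data are
# assumptions, not the paper's actual setup.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-multilingual-cased"  # assumption: any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def target_embedding(sentence: str, target: str) -> np.ndarray:
    """Mean-pool the final-layer vectors of the subword tokens covering `target`."""
    start = sentence.lower().index(target.lower())  # first occurrence only
    end = start + len(target)
    enc = tokenizer(sentence, return_tensors="pt",
                    return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, dim)
    # Keep the subword tokens whose character spans overlap the target word.
    mask = [(s < end and e > start and e > s) for s, e in offsets.tolist()]
    return hidden[torch.tensor(mask)].mean(dim=0).numpy()

# Hypothetical training data: (sentence, target word, contentiousness label).
samples = [
    ("The report used an outdated term for the community.", "term", 1),
    ("The archive describes a local festival in detail.", "festival", 0),
]
X = np.stack([target_embedding(s, t) for s, t, _ in samples])
y = [label for _, _, label in samples]

clf = LogisticRegression(max_iter=1000).fit(X, y)  # the "simple classifier"
print(clf.predict(X))
```

In this reading, the classifier sees only the target word's contextual vector, so the same term can receive different predictions in different sentences, which is consistent with the abstract's finding that both the term and its context matter.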
Original language | English |
---|---|
Title of host publication | K-CAP '21 |
Subtitle of host publication | Proceedings of the 11th Knowledge Capture Conference |
Publisher | Association for Computing Machinery, Inc |
Pages | 17-24 |
Number of pages | 8 |
ISBN (Electronic) | 9781450384575 |
DOIs | |
Publication status | Published - Dec 2021 |
Event | 11th ACM International Conference on Knowledge Capture, K-CAP 2021 - Virtual, Online, United States. Duration: 2 Dec 2021 → 3 Dec 2021 |
Conference
Conference | 11th ACM International Conference on Knowledge Capture, K-CAP 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 2/12/21 → 3/12/21 |
Bibliographical note
Funding Information: This work was funded by the EuropeanaTech Challenge for Europeana Artificial Intelligence and Machine Learning datasets, 'Culturally Aware AI' funded by NWO, and SABIO funded by the Dutch Digital Heritage Network. The authors would like to thank the Cultural AI Lab and KNAW HuC colleagues for their comments and annotations, and the anonymous Prolific annotators. Special thanks to Mirjam Cuper (National Library of the Netherlands) for guidance on KB and Europeana procedures, Lynda Hardman (CWI) for suggestions on editing the article, and the anonymous reviewers for their constructive feedback.
Publisher Copyright:
© 2021 ACM.
Keywords
- bias
- crowdsourcing
- datasets
- knowledge capture