CrowdTruth 2.0: Quality metrics for crowdsourcing with disagreement

Anca Dumitrache, Oana Inel, Lora Aroyo, Benjamin Timmermans, Chris Welty

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Typically, crowdsourcing-based approaches to gathering annotated data use inter-annotator agreement as a measure of quality. However, in many domains there is ambiguity in the data, as well as a multitude of perspectives on the information examples. In this paper, we present ongoing work on the CrowdTruth metrics, which capture and interpret inter-annotator disagreement in crowdsourcing. The CrowdTruth metrics model the inter-dependency between the three main components of a crowdsourcing system: worker, input data, and annotation. The goal of the metrics is to capture the degree of ambiguity in each of these three components. The metrics are available online at https://github.com/CrowdTruth/CrowdTruth-core.
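
To give a concrete feel for how disagreement can be turned into a quality signal, here is a minimal sketch in Python of a media unit quality score computed as the average pairwise cosine similarity between worker annotation vectors. This is an illustrative simplification, not the library's API: the function names and toy data are invented here, and the actual CrowdTruth metrics additionally weight each comparison by worker and annotation quality and iterate the scores to convergence (see the repository above for the reference implementation).

# A minimal sketch of a disagreement-aware quality score, in the spirit of
# CrowdTruth: low agreement on a media unit signals ambiguity in the unit
# rather than (only) worker error. All names and data below are illustrative,
# not the API of https://github.com/CrowdTruth/CrowdTruth-core.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two annotation vectors; 0.0 for empty vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def unit_quality(worker_vectors):
    # Average pairwise cosine similarity over all workers who judged one
    # media unit: high = clear unit, low = ambiguous unit.
    n = len(worker_vectors)
    sims = [cosine(worker_vectors[i], worker_vectors[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

# Toy closed task with three candidate annotations; each row is one worker's
# binary choice vector over those annotations (fabricated example data).
clear_unit = [np.array([1, 0, 0]), np.array([1, 0, 0]), np.array([1, 0, 0])]
ambiguous_unit = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([1, 1, 0])]

print(unit_quality(clear_unit))      # 1.0: full agreement, unambiguous unit
print(unit_quality(ambiguous_unit))  # ~0.47: disagreement reflects ambiguity

In the full metrics, this unit score, the worker quality score, and the annotation quality score are mutually dependent and are updated together until they converge, which is what lets ambiguity be localized to the worker, the input data, or the annotation.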

Language: English
Title of host publication: Joint Proceedings SAD 2018 and CrowdBias 2018
Subtitle of host publication: Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias 2018), co-located with the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018), Zürich, Switzerland, July 5, 2018
Editors: Lora Aroyo, Anca Dumitrache
Publisher: CEUR-WS
Pages: 11-18
Number of pages: 8
Publication status: Published - 2018
Event: 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing and 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD+CrowdBias 2018) - Zurich, Switzerland
Duration: 5 Jul 2018 → …

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 2276
ISSN (Print): 1613-0073

Conference

Conference: 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing and 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD+CrowdBias 2018)
Country: Switzerland
City: Zurich
Period: 5/07/18 → …

Cite this

Dumitrache, A., Inel, O., Aroyo, L., Timmermans, B., & Welty, C. (2018). CrowdTruth 2.0: Quality metrics for crowdsourcing with disagreement. In L. Aroyo & A. Dumitrache (Eds.), Joint Proceedings SAD 2018 and CrowdBias 2018: Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias 2018), co-located with the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018), Zürich, Switzerland, July 5, 2018 (pp. 11-18). (CEUR Workshop Proceedings; Vol. 2276). CEUR-WS.
@inproceedings{f8f55cf94da84390ac34603fff324c10,
title = "CrowdTruth 2.0: Quality metrics for crowdsourcing with disagreement",
abstract = "Typically crowdsourcing-based approaches to gather annotated data use inter-annotator agreement as a measure of quality. However, in many domains, there is ambiguity in the data, as well as a multitude of perspectives of the information examples. In this paper, we present ongoing work into the CrowdTruth metrics, that capture and interpret inter-annotator disagreement in crowdsourcing. The CrowdTruth metrics model the inter-dependency between the three main components of a crowdsourcing system – worker, input data, and annotation. The goal of the metrics is to capture the degree of ambiguity in each of these three components. The metrics are available online at https://github.com/CrowdTruth/CrowdTruth-core.",
author = "Anca Dumitrache and Oana Inel and Lora Aroyo and Benjamin Timmermans and Chris Welty",
year = "2018",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR-WS",
pages = "11--18",
editor = "Lora Aroyo and Anca Dumitrache",
booktitle = "Joint Proceedings SAD 2018 and CrowdBias 2018",

}


Links

Scopus record: http://www.scopus.com/inward/record.url?scp=85058939580&partnerID=8YFLogxK
Citations in Scopus: http://www.scopus.com/inward/citedby.url?scp=85058939580&partnerID=8YFLogxK
