Crowdsourcing ground truth for Question Answering using CrowdTruth

B.F.L. Timmermans, L.M. Aroyo, C.A. Welty

Research output: Conference contribution (chapter in conference proceedings) › Academic › peer-reviewed


Gathering training and evaluation data for open-domain tasks, such as general question answering, is challenging. Typically, ground truth data is provided by human expert annotators; in an open domain, however, experts are difficult to define. Moreover, the overall process of annotating examples can be lengthy and expensive. Crowdsourcing has therefore become a mainstream approach for filling this gap, i.e. for gathering human interpretation data. However, like traditional expert annotation, most of these methods use majority voting to measure annotation quality and thus aim to identify a single right answer for each example, even though many annotation tasks admit multiple interpretations, and hence multiple correct answers to the same question. We present CrowdTruth, a crowdsourcing-based approach for efficiently gathering ground truth data in which disagreement-based metrics harness the multitude of human interpretations and measure the quality of the resulting ground truth. We exemplify our approach in two semantic-interpretation use cases for answering questions.
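The contrast between majority voting and disagreement-aware quality measures can be illustrated with a minimal sketch. This is not the paper's exact formulation; it assumes (as in CrowdTruth-style metrics) that each unit's annotations are aggregated into a count vector and that a worker's agreement with the rest of the crowd is measured by cosine similarity, so that a worker who selects several valid interpretations is rewarded rather than outvoted.

```python
from collections import Counter
from math import sqrt


def unit_vector(annotations):
    """Aggregate workers' answer sets for one unit into a count vector."""
    vec = Counter()
    for worker_answers in annotations:
        for answer in worker_answers:
            vec[answer] += 1
    return vec


def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c1[k] * c2.get(k, 0) for k in c1)
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0


def worker_unit_agreement(worker_index, all_annotations):
    """Agreement of one worker with the rest of the crowd on a unit."""
    own = Counter(all_annotations[worker_index])
    rest = unit_vector(a for i, a in enumerate(all_annotations)
                       if i != worker_index)
    return cosine(own, rest)


# Hypothetical relation-annotation unit with three workers:
anns = [["cause"], ["cause", "treat"], ["treat"]]

# Majority voting finds no clear winner here, yet the worker who marked
# both plausible interpretations agrees perfectly with the crowd vector:
print(worker_unit_agreement(1, anns))  # → 1.0
print(worker_unit_agreement(0, anns))  # lower: only one interpretation
```

Note that under majority voting the multi-label answer of worker 1 would be penalized or discarded, while the disagreement-aware score treats it as the most representative of the crowd's collective interpretation.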
Original language: English
Title of host publication: Proceedings of the ACM WebScience conference
Place of publication: Oxford
ISBN (Print): 9781450336727
Publication status: Published - 2015
Event: WebSci ’15, Oxford
Duration: 28 Jun 2015 – 1 Jul 2015

