Validation methodology for expert-annotated datasets: Event annotation case study

Oana Inel, Lora Aroyo

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Event detection remains a difficult task due to the complexity and ambiguity of such entities. On the one hand, we observe low inter-annotator agreement among experts when annotating events, despite the multitude of existing annotation guidelines and their numerous revisions. On the other hand, event extraction systems achieve lower F1-scores than systems extracting other entity types, such as people or locations. In this paper we study the consistency and completeness of expert-annotated datasets for events and time expressions. We propose a data-agnostic methodology for validating such datasets in terms of consistency and completeness. Furthermore, we combine the power of crowds and machines to correct and extend expert-annotated event datasets. We show the benefit of using crowd-annotated events to train and evaluate a state-of-the-art event extraction system. Our results show that crowd-annotated events increase the performance of the system by at least 5.3%.
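
The paper reports agreement measurements rather than code, but to make the inter-annotator-agreement discussion concrete, here is a minimal, illustrative sketch (not from the paper) of computing Cohen's kappa between two hypothetical expert annotators who label tokens as EVENT or O. The EVENT/O label scheme and the choice of kappa as the coefficient are assumptions for illustration only.

    # Illustrative only: Cohen's kappa between two annotators' token-level
    # event labels. The EVENT/O scheme is a hypothetical simplification.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two equal-length label sequences."""
        assert len(labels_a) == len(labels_b) and labels_a
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Chance agreement from each annotator's marginal label distribution
        # (degenerate case where both use a single identical label not handled).
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Toy sentence: the annotators disagree on one of six tokens.
    ann_a = ["O", "EVENT", "O", "O", "EVENT", "O"]
    ann_b = ["O", "EVENT", "O", "EVENT", "EVENT", "O"]
    print(f"kappa = {cohens_kappa(ann_a, ann_b):.3f}")  # kappa = 0.667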
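
Likewise, the F1-score comparison in the abstract presupposes a span-level evaluation of the extractor's output against a gold standard. A minimal exact-match scorer over (start, end) event spans, with made-up offsets, might look as follows; this is again an illustration under assumed conventions, not the authors' evaluation code.

    # Illustrative only: exact-match span-level precision, recall and F1,
    # comparing predicted (start, end) offsets against gold annotations.

    def span_f1(gold, predicted):
        """Exact-match P/R/F1 over sets of (start, end) event spans."""
        gold, predicted = set(gold), set(predicted)
        tp = len(gold & predicted)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Toy example: gold spans vs. hypothetical system output.
    gold_spans = [(0, 8), (15, 22), (40, 47)]
    pred_spans = [(0, 8), (15, 22), (30, 36)]
    p, r, f = span_f1(gold_spans, pred_spans)
    print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.67 R=0.67 F1=0.67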

Original language: English
Title of host publication: 2nd Conference on Language, Data and Knowledge, LDK 2019
Editors: Gerard de Melo, Bettina Klimek, Christian Fäth, Paul Buitelaar, Milan Dojchinovski, Maria Eskevich, John P. McCrae, Christian Chiarcos
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN (Electronic): 9783959771054
DOI: 10.4230/OASIcs.LDK.2019.12
Publication status: Published - 1 May 2019
Event: 2nd Conference on Language, Data and Knowledge, LDK 2019 - Leipzig, Germany
Duration: 20 May 2019 - 23 May 2019

Publication series

Name: OpenAccess Series in Informatics
Volume: 70
ISSN (Print): 2190-6807

Conference

Conference: 2nd Conference on Language, Data and Knowledge, LDK 2019
Country: Germany
City: Leipzig
Period: 20/05/19 - 23/05/19

Keywords

  • Crowdsourcing
  • Event extraction
  • Human-in-the-loop
  • Time extraction

Cite this

Inel, O., & Aroyo, L. (2019). Validation methodology for expert-annotated datasets: Event annotation case study. In G. de Melo, B. Klimek, C. Fäth, P. Buitelaar, M. Dojchinovski, M. Eskevich, J. P. McCrae, ... C. Chiarcos (Eds.), 2nd Conference on Language, Data and Knowledge, LDK 2019 [12] (OpenAccess Series in Informatics; Vol. 70). Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing. https://doi.org/10.4230/OASIcs.LDK.2019.12