Description
The lack of annotated datasets for training and benchmarking is one of the main challenges of Clinical Natural Language Processing. In addition, current methods for collecting annotations attempt to minimize disagreement between annotators, and therefore fail to model the ambiguity inherent in language. We propose the CrowdTruth method for collecting medical ground truth through crowdsourcing, based on the observation that disagreement between annotators can signal ambiguity in the text, in the semantics of the target relations, or in a worker's interpretation.
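To give an intuition for how disagreement is turned into a signal rather than noise, the sketch below computes a CrowdTruth-style sentence-relation score: the cosine similarity between the aggregated worker annotation vector for a sentence and the one-hot unit vector of each relation. This is a minimal illustration, not the repository's actual code; the relation set, function name, and vote data here are all hypothetical.

```python
import numpy as np

# Hypothetical relation vocabulary; the actual crowdsourcing task
# used a larger set of medical relations, including cause and treat.
RELATIONS = ["cause", "treat", "other"]

def sentence_relation_scores(worker_vectors):
    """Compute CrowdTruth-style sentence-relation scores.

    worker_vectors: array of binary vectors, one per worker, where
    worker_vectors[i][j] == 1 if worker i selected relation j for
    this sentence. Returns the cosine similarity between the
    aggregated sentence vector and each relation's unit vector;
    uniformly low scores indicate an ambiguous sentence.
    """
    sentence_vector = np.sum(worker_vectors, axis=0)
    norm = np.linalg.norm(sentence_vector)
    if norm == 0:
        return {rel: 0.0 for rel in RELATIONS}
    # Cosine with a one-hot unit vector reduces to s[j] / ||s||.
    return {rel: sentence_vector[j] / norm
            for j, rel in enumerate(RELATIONS)}

# Example: of 10 workers, 7 picked "cause", 2 "treat", 1 "other".
votes = np.array([[1, 0, 0]] * 7 + [[0, 1, 0]] * 2 + [[0, 0, 1]])
print(sentence_relation_scores(votes))
# {'cause': 0.95..., 'treat': 0.27..., 'other': 0.13...}
```

A clearly expressed relation yields one score near 1.0, while split votes spread the mass across relations, which is how disagreement captures ambiguity instead of being averaged away.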
This repository contains a dataset of 3,984 English sentences for medical relation extraction, centered on the *cause* and *treat* medical relations, that have been processed with CrowdTruth disagreement analytics to capture ambiguity. In addition, we provide the raw crowdsourcing data used to compile this ground truth, as well as the task templates used to collect the data on CrowdFlower.
| Date made available | 2016 |
| --- | --- |
| Publisher | Zenodo |