Collecting High-Quality Dialogue User Satisfaction Ratings with Third-Party Annotators

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review


Abstract

The design, evaluation, and adaptation of conversational information systems are typically guided by ratings from third-party (i.e., non-user) annotators. Interfaces used to gather such ratings are designed in an ad-hoc fashion, as it has not yet been investigated which design yields high-quality ratings. This work describes how to design user interfaces for gathering high-quality ratings from third-party annotators. In a user study, we compare a base interface that consolidates best practices from the literature, an interface with clear definitions, and an interface in which tasks are separated visually. We find that these interfaces, including those with clear definitions and visual separation of tasks, yield high-quality annotations, with no significant differences in quality between the UIs. This work can serve as a starting point for researchers and practitioners interested in collecting high-quality dialogue user satisfaction ratings with third-party annotators.
Original language: English
Title of host publication: CHIIR 2020
Subtitle of host publication: Proceedings of the 2020 Conference on Human Information Interaction and Retrieval
Publisher: ACM
Pages: 363-367
Number of pages: 5
ISBN (Print): 9781450368926
DOIs
Publication status: Published - Apr 2020

Keywords

  • Human-centered computing
  • Natural language interfaces
  • user interface design
  • information systems
  • search interfaces

VU Research Profile

  • Connected World
