Abstract
Interview data is multimodal: it comprises speech, facial expressions, and gestures, captured in a particular situation, and it carries both textual information and emotion. This workshop shows how a multidisciplinary approach can exploit the full potential of interview data. The workshop first gives a systematic overview of the research fields that work with interview data. It then presents the speech technology currently available to support transcribing and annotating interview data, such as automatic speech recognition, speaker diarization, and emotion detection. Finally, scholars who work with interview data and tools may present their work and discover how to make use of existing technology.
Field | Value |
---|---|
Original language | English |
Title of host publication | ICMI 2020 |
Subtitle of host publication | Proceedings of the 2020 International Conference on Multimodal Interaction |
Publisher | ACM |
Pages | 886-887 |
Number of pages | 2 |
ISBN (Electronic) | 9781450375818 |
DOIs | |
Publication status | Published - Oct 2020 |