Human vs. Computer performance in voice-based recognition of interpersonal stance

Daniel Formolo, Tibor Bosse

Research output: Chapter in Book / Report / Conference proceeding › Chapter › Academic › peer-review

Abstract

© 2017, Springer International Publishing AG. This paper presents an algorithm to automatically detect interpersonal stance in vocal signals. The focus is on two stances (referred to as ‘Dominant’ and ‘Empathic’) that play a crucial role in aggression de-escalation. To develop the algorithm, first a database was created with more than 1000 samples from 8 speakers from different countries. In addition to creating the algorithm, a detailed analysis of the samples was performed, in an attempt to relate interpersonal stance to emotional state. Finally, by means of an experiment via Mechanical Turk, the performance of the algorithm was compared with the performance of human beings. The resulting algorithm provides a useful basis to develop computer-based support for interpersonal skills training.
Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 672-686
Number of pages: 15
DOIs
Publication status: Published - 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10271

Keywords

  • Emotion recognition
  • Experiments
  • Interpersonal stance
  • Voice

