Human vs. Computer performance in voice-based recognition of interpersonal stance

Research output: Chapter in Book / Report / Conference proceeding › Chapter › Academic › peer-review

Abstract

This paper presents an algorithm to automatically detect interpersonal stance in vocal signals. The focus is on two stances (referred to as ‘Dominant’ and ‘Empathic’) that play a crucial role in aggression de-escalation. To develop the algorithm, a database was first created with more than 1000 samples from 8 speakers from different countries. In addition to creating the algorithm, a detailed analysis of the samples was performed in an attempt to relate interpersonal stance to emotional state. Finally, by means of an experiment via Mechanical Turk, the performance of the algorithm was compared with the performance of human beings. The resulting algorithm provides a useful basis for developing computer-based support for interpersonal skills training.
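
The abstract does not disclose which acoustic features or classification model the algorithm uses, so the fragment below is only a minimal, hypothetical sketch of such a voice-based stance classifier, assuming librosa for feature extraction and scikit-learn's SVC as the model; the file names, labels, and helper function are placeholders rather than anything taken from the study.

# Hypothetical sketch only: the paper does not specify its features or model.
# Assumed stack: librosa (acoustic features) + scikit-learn (SVM classifier).
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(wav_path):
    # Summarise one utterance as a fixed-length vector of MFCC and energy statistics.
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral envelope
    rms = librosa.feature.rms(y=y)                      # loudness over time
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           rms.mean(axis=1), rms.std(axis=1)])

# Placeholder corpus: file names and labels stand in for the (non-public) database
# of more than 1000 utterances; 0 = 'Dominant', 1 = 'Empathic'.
wav_files = ["dominant_01.wav", "dominant_02.wav", "empathic_01.wav", "empathic_02.wav"]
labels = [0, 0, 1, 1]

X = np.vstack([extract_features(f) for f in wav_files])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))  # a real evaluation would use speaker-independent held-out data

The published study then compared this kind of automatic classifier against human listeners recruited via Mechanical Turk; that comparison, and the analysis relating stance to emotional state, are not part of the sketch above.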
Language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer/Verlag
Pages: 672-686
Number of pages: 15
DOIs: 10.1007/978-3-319-58071-5_51
State: Published - 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10271

Keywords

  • Emotion recognition
  • Experiments
  • Interpersonal stance
  • Voice

Cite this

Formolo, D., & Bosse, T. (2017). Human vs. Computer performance in voice-based recognition of interpersonal stance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 672-686). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10271). Springer/Verlag. DOI: 10.1007/978-3-319-58071-5_51
Formolo, Daniel ; Bosse, Tibor. / Human vs. Computer performance in voice-based recognition of interpersonal stance. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer/Verlag, 2017. pp. 672-686 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inbook{80fed33d1ece4902948cd93ec32ba315,
title = "Human vs. Computer performance in voice-based recognition of interpersonal stance",
abstract = "{\circledC} 2017, Springer International Publishing AG. This paper presents an algorithm to automatically detect interpersonal stance in vocal signals. The focus is on two stances (referred to as ‘Dominant’ and ‘Empathic’) that play a crucial role in aggression de-escalation. To develop the algorithm, first a database was created with more than 1000 samples from 8 speakers from different countries. In addition to creating the algorithm, a detailed analysis of the samples was performed, in an attempt to relate interpersonal stance to emotional state. Finally, by means of an experiment via Mechanical Turk, the performance of the algorithm was compared with the performance of human beings. The resulting algorithm provides a useful basis to develop computer-based support for interpersonal skills training.",
keywords = "Emotion recognition, Experiments, Interpersonal stance, Voice",
author = "Daniel Formolo and Tibor Bosse",
year = "2017",
doi = "10.1007/978-3-319-58071-5_51",
language = "English",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer/Verlag",
pages = "672--686",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",

}

Formolo, D & Bosse, T 2017, Human vs. Computer performance in voice-based recognition of interpersonal stance. in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10271, Springer/Verlag, pp. 672-686. DOI: 10.1007/978-3-319-58071-5_51

Human vs. Computer performance in voice-based recognition of interpersonal stance. / Formolo, Daniel; Bosse, Tibor.

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer/Verlag, 2017. p. 672-686 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10271).

Research output: Chapter in Book / Report / Conference proceeding › Chapter › Academic › peer-review

TY - CHAP

T1 - Human vs. Computer performance in voice-based recognition of interpersonal stance

AU - Formolo, Daniel

AU - Bosse, Tibor

PY - 2017

Y1 - 2017

N2 - © 2017, Springer International Publishing AG. This paper presents an algorithm to automatically detect interpersonal stance in vocal signals. The focus is on two stances (referred to as ‘Dominant’ and ‘Empathic’) that play a crucial role in aggression de-escalation. To develop the algorithm, first a database was created with more than 1000 samples from 8 speakers from different countries. In addition to creating the algorithm, a detailed analysis of the samples was performed, in an attempt to relate interpersonal stance to emotional state. Finally, by means of an experiment via Mechanical Turk, the performance of the algorithm was compared with the performance of human beings. The resulting algorithm provides a useful basis to develop computer-based support for interpersonal skills training.

AB - © 2017, Springer International Publishing AG. This paper presents an algorithm to automatically detect interpersonal stance in vocal signals. The focus is on two stances (referred to as ‘Dominant’ and ‘Empathic’) that play a crucial role in aggression de-escalation. To develop the algorithm, first a database was created with more than 1000 samples from 8 speakers from different countries. In addition to creating the algorithm, a detailed analysis of the samples was performed, in an attempt to relate interpersonal stance to emotional state. Finally, by means of an experiment via Mechanical Turk, the performance of the algorithm was compared with the performance of human beings. The resulting algorithm provides a useful basis to develop computer-based support for interpersonal skills training.

KW - Emotion recognition

KW - Experiments

KW - Interpersonal stance

KW - Voice

U2 - 10.1007/978-3-319-58071-5_51

DO - 10.1007/978-3-319-58071-5_51

M3 - Chapter

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 672

EP - 686

BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

PB - Springer/Verlag

ER -

Formolo D, Bosse T. Human vs. Computer performance in voice-based recognition of interpersonal stance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer/Verlag. 2017. p. 672-686. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). DOI: 10.1007/978-3-319-58071-5_51