Extracting interpersonal stance from vocal signals

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

The role of emotions and other affective states in Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective, and enjoyable. This paper presents a model that extracts interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semi-professional actors was built and used to train and test a model based on Support Vector Machines (SVMs). An analysis of the results indicates that there is much variation in how people express interpersonal stance, which makes it difficult to build a generic model. Instead, the model shows good performance at the individual level (with accuracy above 80%). The implications of these findings for HCI systems are discussed.
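
Although this record contains no code, a minimal sketch may help illustrate the kind of pipeline the abstract describes: prosodic features are extracted from each recording, and a separate SVM classifier is trained per speaker. The feature set (pitch, energy, and MFCC statistics) and the librosa/scikit-learn tooling are illustrative assumptions, not the authors' published method.

# Hypothetical sketch of the per-speaker SVM approach summarised in the
# abstract. The feature set and libraries are assumptions, not the
# authors' exact pipeline.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def prosodic_features(wav_path):
    """Summarise one recording with simple prosodic/spectral statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # pitch contour
    rms = librosa.feature.rms(y=y)[0]                   # energy contour
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral shape
    return np.concatenate([
        [f0.mean(), f0.std(), rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

def train_speaker_model(wav_paths, stance_labels):
    """Fit and score an SVM on a single speaker's labelled sentences."""
    X = np.array([prosodic_features(p) for p in wav_paths])
    X_train, X_test, y_train, y_test = train_test_split(
        X, stance_labels, test_size=0.25, stratify=stance_labels,
        random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)  # per-speaker accuracy

Consistent with the abstract's finding that stance expression varies strongly across people, one such classifier would be fitted per actor rather than pooling all 20 speakers into a single generic model.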

Language: English
Title of host publication: Proceedings of the 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018
Publisher: Association for Computing Machinery, Inc
Pages: 19-25
Number of pages: 7
ISBN (Electronic): 1595930361, 9781450360760
DOI: 10.1145/3279972.3279974
State: Published - 16 Oct 2018
Event: 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018 - Boulder, United States
Duration: 16 Oct 2018 → …

Conference

Conference: 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018
Country: United States
City: Boulder
Period: 16/10/18 → …

Keywords

  • Dataset
  • HCI
  • Interpersonal Stance
  • Prosody
  • Speech analysis

Cite this

Formolo, D., & Bosse, T. (2018). Extracting interpersonal stance from vocal signals. In Proceedings of the 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018 (pp. 19-25). Association for Computing Machinery, Inc. DOI: 10.1145/3279972.3279974
@inproceedings{4b993d94d0ab47b0b77b857ea4d99669,
title = "Extracting interpersonal stance from vocal signals",
abstract = "The role of emotions and other affective states within Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective and enjoyable. This paper presents a model that is able to extract interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semiprofessional actors was built and was used to train and test a model based on Support Vector Machines (SVM). An analysis of the results indicates that there is much variation in the way people express interpersonal stance, which makes it difficult to build a generic model. Instead, the model shows good performance on the individual level (with accuracy above 80{\%}). The implications of these findings for HCI systems are discussed.",
keywords = "Dataset, HCI, Interpersonal Stance, Prosody, Speech analysis",
author = "Daniel Formolo and Tibor Bosse",
year = "2018",
month = "10",
day = "16",
doi = "10.1145/3279972.3279974",
language = "English",
pages = "19--25",
booktitle = "Proceedings of the 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018",
publisher = "Association for Computing Machinery, Inc",

}
