Extracting interpersonal stance from vocal signals

Daniel Formolo, Tibor Bosse

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

The role of emotions and other affective states within Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective and enjoyable. This paper presents a model that is able to extract interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semi-professional actors was built and was used to train and test a model based on Support Vector Machines (SVM). An analysis of the results indicates that there is much variation in the way people express interpersonal stance, which makes it difficult to build a generic model. Instead, the model shows good performance on the individual level (with accuracy above 80%). The implications of these findings for HCI systems are discussed.
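To illustrate the kind of per-speaker classification pipeline the abstract describes, the minimal sketch below trains one SVM per speaker on fixed-length prosodic feature vectors and reports mean per-speaker accuracy. It is an assumption-laden illustration, not the paper's implementation: the feature set, the stance label inventory, the kernel and all parameters are placeholders, and the random arrays stand in for real extracted features.

```python
# Hypothetical sketch: per-speaker SVM classification of interpersonal stance
# from prosodic feature vectors. Feature dimensions, stance labels and the SVM
# configuration are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy stand-in for the dataset: 20 speakers x 192 sentences each (3840 total),
# each sentence reduced to a fixed-length prosodic feature vector
# (e.g., pitch, energy and speaking-rate statistics).
N_SPEAKERS, SENTENCES_PER_SPEAKER, N_FEATURES = 20, 192, 24
STANCES = ["dominant", "submissive", "warm", "hostile"]  # assumed label set

per_speaker_accuracy = []
for speaker in range(N_SPEAKERS):
    # Replace these random arrays with real extracted features and labels.
    X = rng.normal(size=(SENTENCES_PER_SPEAKER, N_FEATURES))
    y = rng.choice(STANCES, size=SENTENCES_PER_SPEAKER)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=speaker)

    # One RBF-kernel SVM per speaker, mirroring the individual-level models
    # mentioned in the abstract (kernel and parameters are assumptions).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    per_speaker_accuracy.append(accuracy_score(y_test, clf.predict(X_test)))

print(f"mean per-speaker accuracy: {np.mean(per_speaker_accuracy):.2f}")
```

With real prosodic features in place of the random arrays, averaging the per-speaker scores gives the individual-level performance figure the abstract contrasts with a single generic model trained across all speakers.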

Original language: English
Title of host publication: Proceedings of the 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018
Publisher: Association for Computing Machinery, Inc.
Pages: 19-25
Number of pages: 7
ISBN (Electronic): 9781450360760
Publication status: Published - 16 Oct 2018
Event: 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018 - Boulder, United States
Duration: 16 Oct 2018 → …

Conference

Conference: 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018
Country/Territory: United States
City: Boulder
Period: 16/10/18 → …

Funding

This research was supported by the Brazilian scholarship program Science without Borders - CNPq (scholarship reference: 233883/2014-2).

Funder: Conselho Nacional de Desenvolvimento Científico e Tecnológico
Funder number: 233883/2014-2

Keywords

• Dataset
• HCI
• Interpersonal Stance
• Prosody
• Speech analysis
