Abstract
The role of emotions and other affective states in Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective, and enjoyable. This paper presents a model that extracts interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semi-professional actors was built and used to train and test a model based on Support Vector Machines (SVMs). An analysis of the results indicates that there is considerable variation in the way people express interpersonal stance, which makes it difficult to build a generic model. Instead, the model performs well at the individual level (with accuracy above 80%). The implications of these findings for HCI systems are discussed.
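The paper itself does not publish code, but the modelling setup the abstract describes (a per-speaker SVM classifier over prosodic features, contrasted with a single generic model) can be sketched as follows. This is a minimal illustration only: the feature set, stance labels, synthetic data, and the `synthetic_speaker_data` helper are assumptions for the demo, not the authors' actual pipeline.

```python
# Hypothetical sketch of speaker-dependent SVM stance classification.
# Features, labels, and data are illustrative assumptions, not the
# authors' exact setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed prosodic feature vector per utterance, e.g. mean/std of F0,
# intensity, and speech rate (6 dimensions here, purely illustrative).
N_FEATURES = 6
STANCES = ["dominant", "submissive", "friendly", "hostile"]  # assumed labels

def synthetic_speaker_data(n_utterances=192):
    """Stand-in for one actor's utterances (192 per actor would give
    20 * 192 = 3840 sentences, matching the dataset size reported)."""
    X = rng.normal(size=(n_utterances, N_FEATURES))
    y = rng.integers(0, len(STANCES), size=n_utterances)
    X += y[:, None] * 0.8  # shift classes apart so the demo is separable
    return X, y

# Train and evaluate one model per speaker, mirroring the finding that
# individual-level models outperform a single generic model.
for speaker in range(3):  # 20 actors in the paper; 3 here for brevity
    X, y = synthetic_speaker_data()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=speaker)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print(f"speaker {speaker}: accuracy = {clf.score(X_te, y_te):.2f}")
```

Fitting one classifier per speaker, rather than pooling all actors into one training set, is what the abstract's individual-level result suggests: prosodic realisations of stance vary so much across people that a generic model struggles where per-speaker models reach accuracy above 80%.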
Original language | English |
---|---|
Title of host publication | Proceedings of the 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018 |
Publisher | Association for Computing Machinery, Inc |
Pages | 19-25 |
Number of pages | 7 |
ISBN (Electronic) | 9781450360760 |
Publication status | Published - 16 Oct 2018 |
Event | 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018, Boulder, United States; Duration: 16 Oct 2018 → … |
Conference
Conference | 4th Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, MA3HMI 2018 - In conjunction with ICMI 2018 |
---|---|
Country/Territory | United States |
City | Boulder |
Period | 16/10/18 → … |
Funding
This research was supported by the Brazilian scholarship program Science without Borders - CNPq (scholarship reference: 233883/2014-2).
Funders | Funder number |
---|---|
Conselho Nacional de Desenvolvimento Científico e Tecnológico | 233883/2014-2 |
Keywords
- Dataset
- HCI
- Interpersonal Stance
- Prosody
- Speech analysis