Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in – for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others’ vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, such as sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners accurately matched vocalisations to nine of the 10 contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates exceeding those expected from random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
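For readers unfamiliar with the two measures named in the abstract, the sketch below illustrates how a d′ sensitivity index (the signal detection analysis of Experiment 1) and an unbiased hit rate (the Experiment 2 measure, commonly computed following Wagner, 1993) are typically derived from response counts. This is a minimal illustration with hypothetical counts, not the authors’ analysis code.

```python
# Illustrative only: d' and Wagner's (1993) unbiased hit rate (Hu)
# computed from hypothetical response counts for one context category.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity (d'), using a log-linear correction
    (add 0.5 to each count) so proportions of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def unbiased_hit_rate(correct, stimulus_total, response_total):
    """Wagner's Hu for one category: squared count of correct responses
    divided by (times the stimulus was presented) x (times the response
    was chosen), penalising indiscriminate overuse of a response."""
    return correct ** 2 / (stimulus_total * response_total)

# Hypothetical counts for a single context (e.g. "being tickled"):
print(d_prime(hits=140, misses=60, false_alarms=40, correct_rejections=160))
print(unbiased_hit_rate(correct=120, stimulus_total=200, response_total=180))
```

In this sketch, d′ separates sensitivity from yes-bias in the yes/no task, while Hu corrects forced-choice hit rates for how often a response option was used overall; both can then be compared against chance-level baselines.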
Original language | English |
---|---|
Pages (from-to) | 277-295 |
Number of pages | 19 |
Journal | Cognition and Emotion |
Volume | 38 |
Issue number | 3 |
Early online date | 24 Nov 2023 |
DOIs | |
Publication status | Published - 2024 |
Funding
R.G.K. and D.A.S. are supported by ERC Starting grant no. 714977 awarded to D.A.S. We would like to thank Dr. Piera Filippi for useful comments on an earlier version of this manuscript.
Funders | Funder number |
---|---|
European Research Council | 714977 |