Functions of gestures during disfluent and fluent speech in simultaneous interpreting

Alan Cienki*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This study investigates which types (functions) of gestures occur during disfluencies in speech production in simultaneous interpreting, as compared with gesture use during fluent interpreting. Forty-nine participants interpreted two ten-minute audio segments of popular science lectures, one from their first language (L1) into their second language (L2) and one from their L2 into their L1. The results show that during both fluent and disfluent moments of interpreting, participants primarily used pragmatic gestures (such as marking emphasis) and self-adapters (e.g., rubbing their fingers). This points to a potentially different kind of thinking for speaking in simultaneous interpreting than is normally involved in spontaneous conversation or unrehearsed narratives. Self-adapters may assist interpreters in presenting ideas and support speech production. The low use of representational gestures may reflect a lack of deep semantic processing during simultaneous interpreting (not the kind of rich mental simulation that might give rise to depiction in gesture) and may be a consequence of temporal constraints that do not allow for producing detailed gestural forms. Future research could compare the gestures interpreters use with their own spontaneous speech to those they use while interpreting.

Original language: English
Pages (from-to): 29-46
Number of pages: 18
Journal: Parallèles
Volume: 37
Issue number: 1
Early online date: 29 Apr 2025
Publication status: Published - Apr 2025

Bibliographical note

Publisher Copyright:
© 2025, University of Geneva. All rights reserved.

Funding

The research was supported by Russian Science Foundation grant number 19-18-00357 for the project “Verbal and co-verbal behavior under cognitive pressure: Analyses of speech, gesture, and eye gaze”, carried out at Moscow State Linguistic University. Credit for the data collection, annotation, and analysis is due to the following members of the PoliMod Lab at the Centre for Socio-Cognitive Discourse Studies (SCoDis), in alphabetical order: Olga Agafonova, Olga Iriskhanova, Varvara Kharitonova, Anna Leonteva, Alina Makoveyeva, Andrey Petrov, Olga Prokofeva, Evgeniya Smirnova, and Maria Tomskaya, as well as Geert Brône.

Funders and funder numbers:

• PoliMod Lab
• Maria Tomskaya
• Russian Science Foundation (19-18-00357)

Keywords

• disfluencies
• pragmatic gestures
• self-adapters
• simultaneous interpreting
• thinking for speaking
