Functional and temporal relations between spoken and gestured components of language: A corpus-based inquiry

Kasper I. Kok*

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer review

Abstract

Based on the Bielefeld Speech and Gesture Alignment Corpus (Lücking et al. 2013), this paper presents a systematic comparison of the linguistic characteristics of unimodal (speech-only) and multimodal (gesture-accompanied) forms of language use. The results suggest that each of these two modes of expression is characterized by statistical preferences for certain types of words and grammatical categories. The words most frequently accompanied by a manual gesture, when controlled for their total frequency, include unspecific spatial lexemes, various deictic words, and particles that express difficulty in word retrieval or formulation. Other linguistic items, including pronouns and verbs of cognition, show a strong dispreference for being gesture-accompanied. The second part of the paper shows that gestures do not occur within a fixed time window relative to the word(s) they relate to; rather, the preferred temporal distance varies with the type of functional relation between the verbal and gestural channels.
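The keyword "relative frequency ratio" and the phrase "controlled for their total frequency" point to a simple distributional measure: how often a word occurs in gesture-accompanied speech relative to how often it occurs overall. The following Python sketch illustrates that general idea only; the function name, token lists, and frequency threshold are hypothetical and do not reproduce the paper's actual procedure or data.

```python
from collections import Counter

def relative_frequency_ratios(gestured_tokens, all_tokens, min_count=5):
    """Illustrative relative-frequency-ratio computation (not the paper's code).

    Ratio > 1: the word is over-represented in gesture-accompanied speech;
    ratio < 1: it is under-represented (a "dispreference" for co-occurring
    with gesture). The min_count threshold is an assumed safeguard against
    unstable ratios for rare words.
    """
    gestured = Counter(gestured_tokens)
    overall = Counter(all_tokens)
    n_gestured = sum(gestured.values())
    n_overall = sum(overall.values())
    ratios = {}
    for word, total in overall.items():
        if total < min_count:
            continue
        rel_gestured = gestured[word] / n_gestured  # relative freq. with gesture
        rel_overall = total / n_overall             # relative freq. in the corpus
        ratios[word] = rel_gestured / rel_overall
    return ratios

# Hypothetical usage with toy token lists:
# all_tokens = ["there", "like", "I", "think", "there", "so", ...]
# gestured_tokens = ["there", "there", "like", ...]
# print(relative_frequency_ratios(gestured_tokens, all_tokens, min_count=2))
```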

Original language: English
Pages (from-to): 1-26
Number of pages: 26
Journal: International Journal of Corpus Linguistics
Volume: 22
Issue number: 1
DOIs
Publication status: Published - 2017

Keywords

  • Distributional analysis
  • Gesture
  • Multimodal corpus
  • Relative frequency ratio
