Towards Situated AMR: Creating a Corpus of Gesture AMR

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In this paper, we extend Abstract Meaning Representation (AMR) in order to represent situated multimodal dialogue, with a focus on the modality of gesture. AMR is a general-purpose meaning representation that has become popular for its transparent structure, its ease of annotation and available corpora, and its overall expressiveness. While AMR was designed to represent meaning in language as text or speech, gesture accompanying speech conveys a number of novel communicative dimensions, including situational reference, spatial locations, manner, attitude, orientation, backchanneling, and others. In this paper, we explore how to combine multimodal elements into a single representation for alignment and grounded meaning, using gesture as a case study. As a platform for multimodal situated dialogue annotation, we believe that Gesture AMR has several attractive properties. It is adequately expressive at both utterance and dialogue levels, while easily accommodating the structures inherent in gestural expressions. Further, the native reentrancy facilitates both the linking between modalities and the eventual situational grounding to contextual bindings.
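As background on the reentrancy property mentioned above, here is a schematic illustration (not an example drawn from the paper): in AMR's Penman notation, a variable introduced at one node can be referenced again elsewhere, so a single entity can fill multiple semantic roles. It is this same variable-reuse mechanism that, in principle, allows a gestural referent to be linked to a spoken one.

```
(w / want-01
   :ARG0 (b / boy)         ; variable b introduced here
   :ARG1 (g / go-02
            :ARG0 b))      ; reentrancy: b reused -- "the boy wants to go"
```

Here the node `b` serves as the wanter and the goer without being duplicated, which is what makes cross-modal linking and later situational grounding straightforward in a graph-based representation.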
Original language: English
Title of host publication: Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Health, Operations Management, and Design
Subtitle of host publication: 13th International Conference, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part II
Editors: Vincent G. Duffy
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 293-312
Number of pages: 20
ISBN (Electronic): 9783031060182
ISBN (Print): 9783031060175
DOIs
Publication status: Published - 2022
Externally published: Yes
Event: 13th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022 - Virtual, Online
Duration: 26 Jun 2022 – 1 Jul 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13320 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 13th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022
City: Virtual, Online
Period: 26/06/22 – 1/07/22

Funding

This work was supported in part by NSF grant DRL 2019805, to Dr. Pustejovsky at Brandeis University, and an NSF Student Grant to Kenneth Lai, Richard Brutti, and Lucia Donatelli, also funded by NSF grant DRL 2019805. We would like to express our thanks to Nikhil Krishnaswamy for his comments on the multimodal framework motivating the development of Gesture AMR. The views expressed herein are ours alone.

Funders and funder numbers:
National Science Foundation: DRL 2019805
Brandeis University

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs):

1. SDG 16 - Peace, Justice and Strong Institutions
