Abstract
In this paper, we extend Abstract Meaning Representation (AMR) to represent situated multimodal dialogue, with a focus on the modality of gesture. AMR is a general-purpose meaning representation that has become popular for its transparent structure, its ease of annotation and available corpora, and its overall expressiveness. While AMR was designed to represent meaning in language as text or speech, gesture accompanying speech conveys a number of additional communicative dimensions, including situational reference, spatial locations, manner, attitude, orientation, backchanneling, and others. We explore how to combine multimodal elements into a single representation for alignment and grounded meaning, using gesture as a case study. As a platform for multimodal situated dialogue annotation, we believe that Gesture AMR has several attractive properties. It is adequately expressive at both the utterance and dialogue levels, while easily accommodating the structures inherent in gestural expressions. Further, the native reentrancy facilitates both the linking between modalities and the eventual situational grounding to contextual bindings.
| Original language | English |
|---|---|
| Title of host publication | Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Health, Operations Management, and Design |
| Subtitle of host publication | 13th International Conference, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part II |
| Editors | Vincent G. Duffy |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 293-312 |
| Number of pages | 20 |
| ISBN (Electronic) | 9783031060182 |
| ISBN (Print) | 9783031060175 |
| DOIs | |
| Publication status | Published - 2022 |
| Externally published | Yes |
| Event | 13th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022 - Virtual, Online, 26 Jun 2022 → 1 Jul 2022 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
|---|---|
| Volume | 13320 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 13th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, DHM 2022 Held as Part of the 24th HCI International Conference, HCII 2022 |
|---|---|
| City | Virtual, Online |
| Period | 26/06/22 → 01/07/22 |
Funding
This work was supported in part by NSF grant DRL 2019805, to Dr. Pustejovsky at Brandeis University, and an NSF Student Grant to Kenneth Lai, Richard Brutti, and Lucia Donatelli, also funded by NSF grant DRL 2019805. We would like to express our thanks to Nikhil Krishnaswamy for his comments on the multimodal framework motivating the development of Gesture AMR. The views expressed herein are ours alone.
| Funders | Funder number |
|---|---|
| National Science Foundation | DRL 2019805 |
| Brandeis University | |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 16: Peace, Justice and Strong Institutions