TY - GEN
T1 - Towards Situated AMR
T2 - 13th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022
AU - Donatelli, Lucia
AU - Lai, Kenneth
AU - Brutti, Richard
AU - Pustejovsky, James
PY - 2022
Y1 - 2022
AB - In this paper, we extend Abstract Meaning Representation (AMR) in order to represent situated multimodal dialogue, with a focus on the modality of gesture. AMR is a general-purpose meaning representation that has become popular for its transparent structure, its ease of annotation and available corpora, and its overall expressiveness. While AMR was designed to represent meaning in language as text or speech, gesture accompanying speech conveys a number of novel communicative dimensions, including situational reference, spatial locations, manner, attitude, orientation, backchanneling, and others. In this paper, we explore how to combine multimodal elements into a single representation for alignment and grounded meaning, using gesture as a case study. As a platform for multimodal situated dialogue annotation, we believe that Gesture AMR has several attractive properties. It is adequately expressive at both utterance and dialogue levels, while easily accommodating the structures inherent in gestural expressions. Further, the native reentrancy facilitates both the linking between modalities and the eventual situational grounding to contextual bindings.
UR - http://www.scopus.com/inward/record.url?scp=85133023761&partnerID=8YFLogxK
DO - 10.1007/978-3-031-06018-2_21
M3 - Conference contribution
SN - 9783031060175
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 293
EP - 312
BT - Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Health, Operations Management, and Design - 13th International Conference, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Proceedings
A2 - Duffy, V.G.
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 26 June 2022 through 1 July 2022
ER -