Abstract
Several metrics have been proposed for assessing the similarity of (abstract) meaning representations (AMRs), but little is known about how they relate to human similarity ratings. Moreover, the current metrics have complementary strengths and weaknesses: Some emphasize speed, while others make the alignment of graph structures explicit, at the price of a costly alignment step. In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses. Specifically, our new metrics are able to match contextualized substructures and induce n:m alignments between their nodes. Furthermore, we introduce a Benchmark for AMR Metrics based on Overt Objectives (BAMBOO), the first benchmark to support empirical assessment of graph-based MR similarity metrics. BAMBOO maximizes the interpretability of results by defining multiple overt objectives that range from sentence similarity objectives to stress tests that probe a metric’s robustness against meaning-altering and meaning-preserving graph transformations. We show the benefits of BAMBOO by profiling previous metrics and our own metrics. Results indicate that our novel metrics may serve as a strong baseline for future work.
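To give an intuition for the Weisfeiler-Leman idea behind the proposed metrics, the following is a minimal, illustrative sketch (not the authors' released implementation): nodes are iteratively relabeled with signatures of their labeled neighborhoods, so that after a few rounds each label encodes a contextualized substructure, and two graphs can be compared by the overlap of their label multisets. The toy graph encoding and the Jaccard-style overlap are assumptions made for this sketch.

```python
from collections import Counter

def wl_features(graph, iterations=2):
    """Collect Weisfeiler-Leman node labels over several refinement rounds.

    graph: dict mapping node -> (label, list of (edge_label, neighbor)).
    Returns a Counter over all labels seen across all rounds.
    """
    labels = {n: lab for n, (lab, _) in graph.items()}
    feats = Counter(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for n, (_, edges) in graph.items():
            # Contextualize each node by its sorted neighborhood signature.
            neigh = sorted(f"{e}:{labels[m]}" for e, m in edges)
            new_labels[n] = f"{labels[n]}({','.join(neigh)})"
        labels = new_labels
        feats.update(labels.values())
    return feats

def wl_similarity(g1, g2, iterations=2):
    """Jaccard-style overlap of WL feature multisets (a simple WL kernel)."""
    f1, f2 = wl_features(g1, iterations), wl_features(g2, iterations)
    shared = sum((f1 & f2).values())
    total = sum((f1 | f2).values())
    return shared / total if total else 1.0

# Toy AMR-like graphs for "the cat sleeps" vs. "the kitten sleeps".
a = {"s": ("sleep-01", [(":ARG0", "c")]), "c": ("cat", [])}
b = {"s": ("sleep-01", [(":ARG0", "k")]), "k": ("kitten", [])}
print(wl_similarity(a, b))  # shared root predicate, diverging arguments
```

The paper's metrics go beyond this sketch, e.g., by matching substructures in a soft, embedding-based way and by inducing n:m node alignments rather than relying on exact label matches.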
| Original language | English |
| --- | --- |
| Pages (from-to) | 1425-1441 |
| Number of pages | 17 |
| Journal | Transactions of the Association for Computational Linguistics |
| Volume | 9 |
| Early online date | 17 Dec 2021 |
| DOIs | |
| Publication status | Published - 2021 |
Bibliographical note
Funding Information: We are grateful to three anonymous reviewers and Action Editor Yue Zhang for their valuable comments that have helped to improve this paper. We are also thankful to Philipp Wiesenbach for giving helpful feedback on a draft of this paper. This work has been partially funded by the DFG through the project ACCEPT as part of the Priority Program "Robust Argumentation Machines" (SPP1999).
Publisher Copyright:
© 2021 Association for Computational Linguistics.