Improving Graph-to-Text Generation Using Cycle Training

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Natural Language Generation (NLG) from graph-structured data is an important step for a number of tasks, including generating explanations, automated reporting, and conversational interfaces. Large generative language models are currently the state of the art for open-ended NLG from graph data. However, these models can produce erroneous text (termed hallucinations). In this paper, we investigate the application of cycle training to reduce these errors. Cycle training alternates the generation of text from an input graph with the extraction of a knowledge graph from that text, where the model should ensure consistency between the extracted graph and the input graph. Our results show that cycle training improves performance on evaluation metrics (e.g., METEOR, DAE) that consider syntactic and semantic relations, and, more generally, that cycle training is useful for reducing erroneous output when generating text from graphs.
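
The cycle-training objective described above can be pictured as a graph-to-text-to-graph round trip. The sketch below is a minimal, self-contained illustration of that idea, not the paper's implementation: the DummyGraph2Text and DummyText2Graph classes, the linearize helper, and the triple_match_loss consistency score are hypothetical placeholders standing in for trained seq2seq models and a real training loss.

```python
def linearize(graph):
    # Flatten a set of (subject, relation, object) triples into a string.
    return " | ".join(f"{s} ; {r} ; {o}" for s, r, o in graph)


def triple_match_loss(predicted, reference):
    # Toy consistency score: fraction of reference triples missing from
    # the prediction (0.0 means the input graph was fully reconstructed).
    missing = [t for t in reference if t not in predicted]
    return len(missing) / max(len(reference), 1)


def cycle_training_step(graph2text, text2graph, graph):
    # Graph -> text -> graph: generate a description, re-extract triples,
    # and penalize any disagreement with the input graph.
    text = graph2text.generate(linearize(graph))
    reconstructed = text2graph.extract(text)
    g2t_loss = triple_match_loss(reconstructed, graph)

    # The reverse direction (text -> graph -> text) would be trained the
    # same way, alternating between the two models each round.
    return g2t_loss


class DummyGraph2Text:
    def generate(self, linearized_graph):
        # Placeholder: a real model would produce fluent natural language.
        return linearized_graph


class DummyText2Graph:
    def extract(self, text):
        # Placeholder: a real model would parse triples from free text.
        return [tuple(part.strip() for part in chunk.split(";"))
                for chunk in text.split("|")]


if __name__ == "__main__":
    graph = [("Alan Turing", "field", "computer science")]
    loss = cycle_training_step(DummyGraph2Text(), DummyText2Graph(), graph)
    print(f"cycle-consistency loss: {loss:.2f}")
```

With real models in place of the dummies, the consistency signal from the reconstructed graph is what discourages hallucinated content in the generated text.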
Original language: English
Title of host publication: Proceedings of the 4th Conference on Language, Data and Knowledge (LDK2023)
Editors: Sara Carvalho, Anas Fahad Khan, Ana Ostroski Anic, Blerina Spahiu, Jorge Gracia, John P. McCrae, Dagmar Gromann, Barbara Heinisch, Ana Salgado
Publisher: ACL Anthology
Pages: 256-261
Number of pages: 6
ISBN (Print): 9789895408153
Publication status: Published - 2023
