Description
Meaning and Understanding in Human-Centric AI (MUHAI) Benchmark
Task 1 (Short story generation with Knowledge Graphs and Language Models)
The dataset can be used to test the understandability of text generated by combining knowledge graphs and language models, without using knowledge graph embeddings.
The task is to generate 5-sentence stories from a set of *subject-predicate-object* triples extracted from a knowledge graph. Two steps are performed:
1. Language model fine-tuning (SVO triple extraction + model fine-tuning)
2. Story generation (knowledge enrichment + text generation)
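Step 1 prepares fine-tuning input by pairing each story with its extracted triples. A minimal sketch of that serialization is below; the delimiter tokens and the `subject | predicate | object` format are illustrative assumptions, not the dataset's actual encoding:

```python
# Sketch: serialize (subject, predicate, object) triples and their target
# story into a single training record for language-model fine-tuning.
# The <|triples|>/<|story|>/<|end|> delimiters are illustrative, not the
# tokens used in the released corpus.

def encode_example(triples, story):
    """Join SVO triples and the target story into one fine-tuning record."""
    triple_part = " ; ".join(f"{s} | {p} | {o}" for s, p, o in triples)
    return f"<|triples|> {triple_part} <|story|> {story} <|end|>"

def build_corpus(examples):
    """Concatenate encoded (triples, story) pairs into a fine-tuning file body."""
    return "\n".join(encode_example(t, s) for t, s in examples)

triples = [("Anna", "visit", "beach"), ("Anna", "find", "shell")]
story = "Anna visited the beach. She found a shell."
print(encode_example(triples, story))
```

At generation time, the model is prompted with only the `<|triples|>` part and asked to continue with the story.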
The submission includes the following data:
- Original ROC stories corpus (100 stories)
- ROC stories encoded with relevant triples (extracted with spaCy; 2 versions, with and without coreference resolution)
- Stories generated by the pre-trained model (gpt-2-simple)
- Stories generated by the fine-tuned model (DICE + ConceptNet + DBpedia)
- Stories generated by the fine-tuned model (DICE + ConceptNet + DBpedia + WordNet)
- Stories generated by GPT-2-keyword-generation (open-source software that uses GPT-2 to generate text pertaining to specified keywords)
- Model results
- Evaluation metrics description
- User-evaluation questionnaire
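The knowledge-enrichment step (step 2) expands the input triples with related concepts before generation. The sketch below uses a mock lookup table as a stand-in for ConceptNet/DBpedia/WordNet queries; the `RelatedTo` relation and all entries are illustrative assumptions:

```python
# Sketch: enrich input triples with related concepts before text generation.
# RELATED is a mock stand-in for ConceptNet/DBpedia/WordNet lookups; a real
# pipeline would query those resources instead.

RELATED = {
    "beach": ["sand", "ocean"],
    "shell": ["seashell"],
}

def enrich(triples, max_extra=2):
    """Return the original triples plus 'RelatedTo' triples for known objects."""
    enriched = list(triples)
    for _s, _p, o in triples:
        for concept in RELATED.get(o, [])[:max_extra]:
            enriched.append((o, "RelatedTo", concept))
    return enriched

print(enrich([("Anna", "visit", "beach")]))
```

The enriched triple set is then serialized and handed to the fine-tuned model as the generation prompt.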
Code: https://github.com/kmitd/muhai-dice_story
| Date made available | 2022 |
| --- | --- |
| Publisher | Zenodo |