Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction

Michiel van der Meer*, Myrthe Reuver, Urja Khurana, Lea Krause, Selene Baez Santamaria

*Corresponding author for this work

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction. We perform prompt engineering with GPT-3, and also investigate three training paradigms: multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models. Prompting GPT-3 works best for predicting argument validity, while argument novelty is best estimated by a model trained using all three training paradigms.
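The "mixed prediction" setup described in the abstract can be sketched as a simple per-subtask router: each quality dimension is assigned to the predictor that performed best on it (GPT-3 prompting for validity, a multi-paradigm-trained model for novelty). The functions below are hypothetical stand-ins, not the authors' actual models or prompts.

```python
# Minimal sketch of a mixed prediction setup, assuming two subtasks
# (validity, novelty) labelled as 1 / -1. Both predictors are toy
# stand-ins for illustration only.

def gpt3_validity(argument: str, conclusion: str) -> int:
    """Stand-in for a GPT-3 prompt that labels a conclusion valid (1) or not (-1)."""
    return 1 if conclusion.split()[-1] in argument else -1

def multi_paradigm_novelty(argument: str, conclusion: str) -> int:
    """Stand-in for a model trained with multi-task, contrastive, and
    intermediate-task training, labelling a conclusion novel (1) or not (-1)."""
    overlap = set(argument.lower().split()) & set(conclusion.lower().split())
    return 1 if len(overlap) < 3 else -1

def mixed_prediction(argument: str, conclusion: str) -> dict:
    """Route each subtask to the predictor that handled it best."""
    return {
        "validity": gpt3_validity(argument, conclusion),
        "novelty": multi_paradigm_novelty(argument, conclusion),
    }
```

The point of the sketch is only the routing: instead of forcing one model to predict both dimensions, each dimension goes to its strongest predictor and the outputs are merged into a single result.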
Original language: English
Title of host publication: Proceedings of the 9th Workshop on Argument Mining
Publisher: International Conference on Computational Linguistics (COLING)
Pages: 95–103
Number of pages: 8
Publication status: Published - Oct 2022
