When to explain: Identifying explanation triggers in human-agent interaction

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract


With more agents deployed than ever, users need to be able to interact and cooperate with them effectively and comfortably. Explanations have been shown to increase a user's understanding of and trust in the agent during human-agent interaction. Numerous studies have investigated this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that many instances would be missed if the agent relied solely on direct questions. To this end, we differentiate between direct triggers, such as commands or questions, and introduce indirect triggers, such as confusion or uncertainty detection.
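
As a rough illustration of the distinction between direct and indirect triggers (not taken from the paper), here is a minimal sketch. It assumes a simple keyword heuristic for direct requests and uses an agent confidence threshold as a hypothetical stand-in for uncertainty detection; the actual detection methods are not specified here.

```python
# Hypothetical sketch: classifying explanation triggers as direct or indirect.
# The keyword matching and confidence threshold are illustrative assumptions,
# not the detection methods proposed in the paper.
from dataclasses import dataclass
from enum import Enum, auto


class Trigger(Enum):
    DIRECT = auto()    # explicit command or question from the user
    INDIRECT = auto()  # inferred from confusion or agent uncertainty
    NONE = auto()


@dataclass
class Turn:
    utterance: str           # what the user said
    agent_confidence: float  # agent's confidence in its last action (0..1)


def detect_trigger(turn: Turn, confidence_threshold: float = 0.5) -> Trigger:
    text = turn.utterance.lower()
    # Direct trigger: the user explicitly asks for an explanation.
    if "why" in text or "explain" in text or text.endswith("?"):
        return Trigger.DIRECT
    # Indirect trigger: signs of confusion in the utterance,
    # or the agent itself is uncertain about its decision.
    if "i don't understand" in text or "huh" in text:
        return Trigger.INDIRECT
    if turn.agent_confidence < confidence_threshold:
        return Trigger.INDIRECT
    return Trigger.NONE


if __name__ == "__main__":
    print(detect_trigger(Turn("Why did you turn left?", 0.9)))  # Trigger.DIRECT
    print(detect_trigger(Turn("Okay, keep going.", 0.3)))       # Trigger.INDIRECT
```

In this sketch, an explanation triggered only by the first case would miss the second turn, where the agent's low confidence suggests an explanation may be warranted even though the user never asked for one.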
Original language: English
Title of host publication: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Editors: Jose M. Alonso, Alejandro Catala
Place of Publication: Dublin
Publisher: ACL Anthology
Pages: 55-60
Number of pages: 6
Publication status: Published - Nov 2020
