With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase a user's understanding and trust in human-agent interaction. Numerous studies have investigated this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that many instances would be missed if the agent relied solely on direct questions. To this end, we differentiate between direct triggers, such as commands or questions, and introduce indirect triggers, such as confusion or uncertainty detection.
| Field | Value |
| --- | --- |
| Title of host publication | 2nd Workshop on interactive natural language technology for explainable artificial intelligence |
| Editors | Jose M. Alonso, Alejandro Catala |
| Place of publication | Dublin |
| Number of pages | 6 |
| Publication status | Published - Nov 2020 |