When to explain: Identifying explanation triggers in human-agent interaction
Published in Proceedings of the 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence @ INLG, 2020
Recommended citation: Krause, L., & Vossen, P. (2020). When to explain: Identifying explanation triggers in human-agent interaction. 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 55–60. https://www.aclweb.org/anthology/2020.nl4xai-1.12
With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase a user's understanding and trust in human-agent interaction. Numerous studies have investigated this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that many instances would be missed if the agent relied solely on direct questions. To this end, we differentiate between direct triggers, such as commands or questions, and introduce indirect triggers, such as the detection of confusion or uncertainty.