Ebermann, C., Selisky, M., & Weibelzahl, S. (2022). Explainable AI: The Effect of Contradictory Decisions and Explanations on Users’ Acceptance of AI Systems. International Journal of Human-Computer Interaction, online first, 1-20. doi: 10.1080/10447318.2022.2126812

Providing explanations of an artificial intelligence (AI) system’s decisions has been suggested as a means to increase users’ acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations cause a positive or negative reaction in the user. To address this gap, we investigate the effect on user acceptance when the decisions of an AI system and the explanations provided for them contradict those of the user. An interdisciplinary research model was derived and validated in an experiment with 78 participants. The findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system’s support negatively. The article therefore provides further guidance on new interdisciplinary approaches for dealing with human-AI interaction during the decision-making process and sheds some light on how explainable AI can increase users’ acceptance of such systems.

@article{ebermann-ijhci22,
author = {Carolin Ebermann and Matthias Selisky and Stephan Weibelzahl},
title = {Explainable {AI}: The Effect of Contradictory Decisions and Explanations on Users' Acceptance of {AI} Systems},
journal = {International Journal of Human–Computer Interaction},
year = {2022},
volume = "online first",
publisher = {Taylor {&} Francis},
pages = {1--20},
doi = {10.1080/10447318.2022.2126812}
}