Transparency in Artificial Intelligence: Considering Explainability, User and System Factors
The increasing use of artificial intelligence (AI) systems gives rise to a variety of interdisciplinary challenges in the field of human-system interaction. One of the biggest challenges is the influence of transparency on trust in, acceptance of, and use of AI. While users, policymakers, and developers all call for transparency, a user-centered definition is highly dependent on context, system factors, and user factors. A lack of transparency, on the other hand, leads to non-acceptance, algorithm aversion, and poorly performing systems.
The project TAIGERS addresses the research gap of a user-centric definition of transparency and its transfer into practicable recommendations for AI developers. In close interdisciplinary cooperation between the Chair of Information Management in Mechanical Engineering (IMA) and the Chair of Communication Science, an AI transparency map is built based on user factors and system factors. This typology is consolidated into an AI transparency framework through a representative survey study and experimental investigations. Here, questions of construct validity and user diversity will be addressed, as well as questions of information architecture in prospective and retrospective transparency settings. Following discussions with AI developers, engineers, and computer scientists, the framework will incorporate the technical perspective as well.
The result of TAIGERS is an interdisciplinarily evaluated framework of development recommendations that can be used and extended across all domains of AI research. The project thus lays a foundation for inter- and transdisciplinary collaboration between social science, engineering, and computer science on the highly relevant topic of transparent AI.