2025-07-28

Research for greater trust and transparency in the use of AI models

Why researchers often cannot explain how AI models arrive at their results (the black box problem), and why this matters greatly in certain domains: Nadja Klein and Tim Bündert from SCC are conducting research on XAI, "explainable AI."

KIT Podcast Campus Report: SCC researchers are attempting to counteract the black box problem of AI with explainable AI models in order to foster trust and transparency in critical AI applications (AI-generated symbolic image).

Artificial intelligence (AI) has become an integral part of our everyday lives. We increasingly use AI tools at work and in our private lives without knowing how their fluent answers, realistic or futuristic-looking images, and split-second decisions are generated inside the computer system. For users, AI is a helpful black box. This becomes troubling, however, when AI models are used in self-driving cars or in medical diagnostic systems, where an incorrect and incomprehensible decision can be a matter of life and death.

In such fields of application, explainable AI (XAI) can help build trust in the technology. XAI aims to make it comprehensible how an AI model works internally and according to which rules it produces its results. Certain explainable AI models also allow errors to be diagnosed and the system to be improved, something that is not possible with large language models, for example.
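One concrete XAI technique, layer-wise relevance propagation (LRP), appears in the publications listed below: it redistributes a network's output score backwards through the layers to show how much each input feature contributed to a decision. The following minimal NumPy sketch of the basic epsilon rule is purely illustrative; the tiny network, its random weights, and the example input are invented for demonstration, and the code does not reproduce the pruned LRP variant studied in the cited paper.

```python
import numpy as np

# Minimal, illustrative sketch of layer-wise relevance propagation (LRP)
# with the epsilon rule for a tiny two-layer ReLU network. All weights and
# the input are invented for demonstration purposes.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))    # layer 1: 3 input features -> 4 hidden units
W2 = rng.normal(size=(2, 4))    # layer 2: 4 hidden units -> 2 output scores

x = np.array([1.0, -0.5, 2.0])  # hypothetical input feature values

# Forward pass, keeping the activations needed for the backward relevance pass
z1 = W1 @ x                     # pre-activations of the hidden layer
a1 = np.maximum(0.0, z1)        # ReLU activations
z2 = W2 @ a1                    # output scores

def lrp_linear(a, W, z, R, eps=1e-6):
    """Epsilon rule: redistribute relevance R from a layer's outputs z
    onto its inputs a, in proportion to each contribution a_i * w_ji."""
    s = R / (z + eps * np.where(z >= 0.0, 1.0, -1.0))  # stabilized ratios
    return a * (W.T @ s)

# Explain the winning output score by propagating it back to the input
R2 = np.zeros_like(z2)
k = int(np.argmax(z2))
R2[k] = z2[k]
R1 = lrp_linear(a1, W2, z2, R2)  # relevance of the hidden units
R0 = lrp_linear(x, W1, z1, R1)   # relevance of the input features

print("relevance per input feature:", R0)
print("conservation check:", R0.sum(), "~", z2[k])
```

The conservation check at the end reflects a core property of LRP: the relevance assigned to the input features sums, up to the small stabilizing term, to the output score being explained.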

Prof. Nadja Klein, research group leader at the SCC, and her team are working to shed light on the black box problem, figuratively turning the opaque box that surrounds AI into a more transparent one that permits deeper insight than was previously possible. In the KIT research podcast "Campus Report", Stefan Fuchs talks to Prof. Nadja Klein and Tim Bündert (SCC, Methods for Big Data research group) about the challenges of developing transparent and trustworthy AI models.

Publications:

  • Martens, D., et al. (2025). Beware of "Explanations" of AI. arXiv preprint arXiv:2504.06791
  • Yanez Sarmiento, P., Witzke, S., Klein, N., & Renard, B.Y. (2024). Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation. In: Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2024), Lecture Notes in Computer Science, vol. 14944
  • Rügamer, D., Kolb, C., & Klein, N. (2023). Semi-Structured Distributional Regression. The American Statistician, 78(1), 88–99
  • Podcast Campus Report: „Das Black-Box-Problem der Künstlichen Intelligenz" (The Black Box Problem of Artificial Intelligence): Researchers at the Karlsruhe Institute of Technology are working on explainable AI (XAI). Published July 22, 2025, 1000183281


Achim Grindler