Kary Främling, Professor of Data Science with an emphasis on data analysis and machine learning, and head of the explainable AI (XAI) team at the Department of Computing Science, Umeå University.
Abstract: Explaining the results of AI systems in ways that are understandable to different categories of end-users has been a challenge for AI since its beginning. This challenge has grown in recent years with increasingly complex ensemble and deep learning models, leading to the development of numerous so-called explainable AI (XAI) methods. In practice, a majority of XAI research seems to focus on explaining image classification, e.g. "I believe there is a cat in this image and I show you where it is". Such explanations are visually attractive and may help developers of deep neural networks assess how well their networks have learned what they are supposed to learn. However, such XAI methods add no value for the citizen whose loan application was refused, or who was not invited to a job interview, by the decision of an AI system. It is doubtful whether current XAI methods are capable of providing such insight, and whether they actually allow detecting bias or discrimination in AI systems. Some solutions are presented, such as the Contextual Importance and Utility (CIU) method.
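The abstract only names CIU without detailing it, so the following is a minimal illustrative sketch of its core idea, based on the commonly cited definitions: Contextual Importance (CI) measures how much of the output range a feature can span in the current context, and Contextual Utility (CU) measures where the current output sits within that contextual range. The function name, the toy scoring model, and the parameter choices are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def ciu_single_feature(model, instance, feature_idx, feature_range,
                       out_min=0.0, out_max=1.0, n_samples=100):
    """Estimate CI and CU for one feature of one instance (sketch).

    `model` is any callable mapping a 1-D input vector to a scalar
    output assumed to lie in [out_min, out_max].
    """
    lo, hi = feature_range
    # Vary only the feature of interest over its range; keep the others fixed.
    perturbed = np.tile(instance, (n_samples, 1))
    perturbed[:, feature_idx] = np.linspace(lo, hi, n_samples)
    outputs = np.array([model(x) for x in perturbed])
    cmin, cmax = outputs.min(), outputs.max()
    y = model(instance)
    # CI: fraction of the total output range this feature can span in this context.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: position of the current output within the contextual range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

if __name__ == "__main__":
    # Hypothetical two-feature scoring function, not a real loan model.
    model = lambda x: 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 0.5 * x[1])))
    instance = np.array([0.3, 0.8])
    ci, cu = ciu_single_feature(model, instance, feature_idx=0, feature_range=(0.0, 1.0))
    print(f"CI={ci:.2f}, CU={cu:.2f}")
```

A high CI with a low CU would, for instance, indicate a feature that matters strongly in this context but currently works against the applicant, which is the kind of case-specific explanation the abstract argues image-oriented XAI methods cannot provide.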