Amii xAI Lab

Vision and goals of the Amii Explainable AI (xAI) Lab

Vision

The modern vision of Artificial Intelligence (AI) is to transform data into models that can produce predictions, some of which can inform decisions and actions. In high-impact applications (e.g., medical and legal decisions), continuously improving an AI system's models requires not only better prediction accuracy, but also models that produce supporting explanations.

The Amii xAI Lab's vision is to push the frontiers of scalable methods for building domain models that support explanation of an AI system's behaviour.

This kind of "explainable AI", or "xAI" for short, is the basis for all kinds of AI system-human interaction: for example, to help debug models, to train humans in situations requiring both knowledge and skill, and to interact with decision makers (e.g., clinicians, lawyers).

We are interested in all aspects of explanation, including its logical foundations; its role in debugging models; and its roles in legal reasoning and argumentation, evidential reasoning in medicine, and persuasion across the social sciences.

Methodology

Practically, "explainable AI" has always meant that an AI system must not only achieve interesting performance on a task we think demonstrates intelligent behaviour (e.g., playing chess, diagnosing disease), but must also maintain internal models that give the system a basis for explaining that performance. This was demonstrated during the last AI boom, when expert systems were constructed to explain their reasoning, but knowledge acquisition did not scale. Now we want to accelerate the construction of explainable models with machine learning.
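
To illustrate what it means for an internal model to double as the basis for explanation, here is a minimal, hypothetical sketch in the spirit of an expert system: the explicit rule base both drives the conclusion and yields a trace of which rules fired. The rules and "diagnosis" domain below are invented for illustration only; they are not real medical knowledge, nor the lab's methodology.

```python
# Illustrative sketch only: a toy expert-system-style classifier whose internal
# model (explicit rules) doubles as the basis for its explanation.
# The rules and domain are invented for illustration, not real medical knowledge.

RULES = [
    # (rule name, condition on the case, conclusion)
    ("fever_and_cough", lambda c: c["fever"] and c["cough"], "suspect influenza"),
    ("fever_only", lambda c: c["fever"] and not c["cough"], "suspect other infection"),
    ("no_fever", lambda c: not c["fever"], "influenza unlikely"),
]

def diagnose(case):
    """Return a conclusion plus the trace of rules that fired, i.e. an explanation."""
    fired = [(name, concl) for name, cond, concl in RULES if cond(case)]
    conclusion = fired[0][1] if fired else "no conclusion"
    explanation = [f"rule '{name}' fired -> {concl}" for name, concl in fired]
    return conclusion, explanation

conclusion, explanation = diagnose({"fever": True, "cough": True})
print(conclusion)
for line in explanation:
    print(" ", line)
```

Hand-building such rule bases is exactly the knowledge-acquisition bottleneck that did not scale, which is why the lab looks to machine learning to construct explainable models instead.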

A helpful analogy for distinguishing good explanations from not-so-good ones is to think about whom you consider your best-ever teacher. People typically identify teachers who could quickly find the intersection of the teacher's and student's world models, which let the teacher give explanations the student found easy to understand. In the current AI boom, deep learning has sharply increased the demand for a commensurate acceleration of explainable AI, because, despite the impressive performance of classification models learned from annotated data, those models are opaque "black boxes."
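
To make the "black box" contrast concrete, here is a minimal sketch of one common post-hoc approach: train an opaque classifier and then probe it with permutation feature importance. The dataset, model, and scikit-learn tooling are illustrative assumptions rather than the lab's approach.

```python
# Illustrative sketch only: an opaque "black box" classifier and a simple
# post-hoc explanation via permutation feature importance.
# The dataset, model, and library choices are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A small annotated dataset standing in for a high-impact domain (e.g., medicine).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internal decision logic is opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)
print(f"Held-out accuracy: {black_box.score(X_test, y_test):.3f}")

# A crude post-hoc explanation: which input features, when shuffled,
# hurt the model's predictions the most?
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Rankings like these hint at what the model relies on, but they fall well short of the domain-grounded, teacher-quality explanations described above, which is the gap the lab aims to close.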

Just a caution: explanations are never domain independent. This means we need to ensure that machine learning accelerates the scale at which domain knowledge is acquired and, because there are so many kinds of explanation, that it also captures knowledge at multiple scales and from multiple perspectives, just as that good teacher could.