Explainable Artificial Intelligence

Patrick Schrempf
Tuesday 15 August 2017

I am currently in the eighth week of my internship in the School of Computer Science. My supervisor Aaron is the head of the St Andrews Computer Human Interaction (SACHI) Research Group. It is great to be part of SACHI and to be surrounded by cutting-edge research in the field. My research project aims to create a visualisation that explains an algorithm widely used in artificial intelligence.

Explainable Artificial Intelligence (XAI) is a new field of research within artificial intelligence (AI). The main idea is to create models of AI that are understandable and can therefore be trusted in the decisions they make. Currently, most systems with an AI component can be described as “black box systems”: users of the system can see its input and output, but do not understand how the system processes the input to generate the output. For example, a self-driving car might brake when there is no obvious reason to stop. The input to this system comes from all the cameras and sensors in the car, and the output is the car braking; however, it is often difficult to tell why the algorithms decide what they do. Using an XAI system, it should be simpler to understand why the algorithms make these decisions. Now that AI is being integrated into more and more systems, it is becoming essential to explain vital decisions. Imagine an AI system that decides whether or not to operate on a patient in a hospital. The decisions of the system will have to be backed up and explained, whether to the patient’s family or in a court of law.

The first few weeks of my project were spent reading papers that are currently very difficult to find. The term XAI has only recently been coined, so a quick search on Google Scholar is not enough to bring up a long list of relevant results. However, after searching various digital libraries, it turns out that there has recently been research into one specific area of AI: techniques that try to explain neural networks (in particular deep learning) and the decisions these systems make. An example can be seen in figure 1 below, where the AI system “explains” its decision by highlighting the area of the picture that contributed most to that decision. In figure 2 below, the AI system “explains” its bird classification in the form of a sentence.

Figure 1: Using a heat map to explain decisions [1]

Figure 2: Explaining decisions by generating text explanations [2]
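The heat-map idea from figure 1 can be sketched in a few lines: the feature maps of a convolutional network are weighted by the averaged gradients of the class score and summed into one map. The sketch below is a toy illustration with made-up arrays, not the actual Grad-CAM or RadarCat implementation.

```python
import numpy as np

def gradcam_heatmap(feature_maps, gradients):
    """Grad-CAM-style heat map from toy data.

    feature_maps, gradients: arrays of shape (channels, H, W), where
    gradients holds d(class score)/d(feature_maps).
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))              # shape (channels,)
    # Weighted sum of feature maps across channels, then ReLU.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0)
    # Normalise to [0, 1] so it can be rendered as a heat map.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Fake activations: channel 0 fires top-left, channel 1 bottom-right.
maps = np.zeros((2, 4, 4))
maps[0, :2, :2] = 1.0
maps[1, 2:, 2:] = 1.0
# Pretend the class score depends mostly on channel 0.
grads = np.stack([np.full((4, 4), 1.0), np.full((4, 4), 0.1)])

heat = gradcam_heatmap(maps, grads)  # top-left region glows brightest
```

High values in `heat` mark the image regions that contributed most to the decision, which is exactly what the highlighted areas in figure 1 convey.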

I decided to base my project on previous work that left a perfect gap for XAI – RadarCat. The RadarCat system classifies and detects objects placed upon it using a small radar sensor (Google Soli). The system works by training a machine learning classifier on the radar data. (Machine learning is a subfield of AI.) Although RadarCat can classify all these objects, it is not easy to explain its decisions. Because most of the XAI methods researched to date use visual techniques, I decided to create a visualisation that would dynamically adapt to the classifier used in the RadarCat system.
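To make the “train a classifier on the radar data” step concrete, here is a minimal sketch using a nearest-centroid classifier on made-up two-dimensional feature vectors. The object names, feature values, and classifier choice are all illustrative assumptions; the real RadarCat pipeline uses its own features and classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "radar signatures": one noisy cluster per object class.
classes = ["mug", "phone", "keyboard"]
centres = {"mug": [1.0, 0.0], "phone": [0.0, 1.0], "keyboard": [1.0, 1.0]}
X = np.vstack([rng.normal(centres[c], 0.1, size=(20, 2)) for c in classes])
y = np.repeat(classes, 20)

# "Training": store the mean signature (centroid) of each class.
centroids = {c: X[y == c].mean(axis=0) for c in classes}

def classify(sample):
    # Predict the class whose centroid is nearest to the sample.
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))
```

A classifier this simple is easy to explain (the prediction is “nearest average signature”), whereas the classifiers used in practice are far harder to interpret, which is exactly the gap the visualisation targets.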

With just over two weeks remaining, my focus is on running a user study to gather feedback on the effectiveness of the visualisation. As I have never run a user study before, this will be an interesting and challenging new experience. I will also be summarising my findings in a project report and poster.

[1] Selvaraju, R. et al., “Grad-CAM: Why did you say that?”, November 2016.
[2] Hendricks, L. et al., “Generating Visual Explanations”, March 2016.
