
Tutorial Outline

1. Overview of Explainable and Interpretable AI

        – what it means, its various forms, and general approaches


2. Explainability in the context of computer vision

        – a review of various proposed approaches

        – a comparison of these approaches

        – their strengths and weaknesses

3. The problem of adversarial attacks and how they affect CNN models

4. A new approach to Explainable AI


         – teaching a multilayer perceptron (MLP) the composition of objects from parts and the connectivity between the object parts

         – decoding a CNN model to identify parts of objects

         – the MLP as a symbolic model

         – the advantages of symbolic models; automation of image processing using symbolic models


5. Some experimental results with the new approach

            – detecting objects in satellite images


6. Adversarial attacks and parts-based object recognition

            – how parts-based object recognition is a defense against adversarial attacks

            – some experimental results


Tutorial Abstract

With the advent of deep learning and its rapid adoption has come concern about using models that we do not really understand, and this concern hinders many critical applications of deep learning. The concern about the transparency and trustworthiness of these models is so high that it is now a major research focus of Artificial Intelligence (AI) programs at funding agencies such as DARPA (https://www.darpa.mil/program/explainable-artificial-intelligence) and NSF in the US. If deep learning can be made explainable and transparent, the economic impact of such a technology could be in the trillions of dollars.

 

One of the specific forms of Explainable AI (XAI) envisioned by DARPA is the recognition of objects based on identification of their parts. For example, to predict that an object is a cat, the system must also recognize some of the specific features of a cat, such as the fur, whiskers, and claws. Making object prediction contingent on recognition of parts provides additional verification for the object and makes the prediction robust and trustworthy.
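The idea of parts-contingent prediction can be sketched in a few lines. This is a minimal, hypothetical illustration (the part names, scores, thresholds, and function names are assumptions, not the tutorial's actual system): an object label is accepted only when enough of its characteristic parts are also detected with sufficient confidence.

```python
# Hypothetical sketch of parts-contingent object prediction.
# All names, scores, and thresholds here are illustrative.

OBJECT_PARTS = {
    "cat": ["fur", "whiskers", "claws", "ears"],
}

def predict_with_parts(object_label, object_score, part_scores,
                       object_threshold=0.8, part_threshold=0.5,
                       min_parts=2):
    """Return object_label only if the object score is high AND at least
    `min_parts` of its known parts are detected; otherwise abstain (None)."""
    if object_score < object_threshold:
        return None
    detected = [p for p in OBJECT_PARTS[object_label]
                if part_scores.get(p, 0.0) >= part_threshold]
    return object_label if len(detected) >= min_parts else None

# A confident prediction backed by recognized parts is accepted:
print(predict_with_parts("cat", 0.95,
                         {"fur": 0.9, "whiskers": 0.7, "claws": 0.2}))
# A confident prediction with almost no supporting parts is rejected:
print(predict_with_parts("cat", 0.95, {"fur": 0.3}))
```

The abstain-on-missing-parts behavior is what provides the extra verification: a high raw object score alone is not enough for the system to commit to a label.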

 

The first part of this tutorial will review some of the existing methods of XAI in general and then those that are specific to computer vision.

 

The second part of this tutorial will cover a new method that decodes a convolutional neural network (CNN) to recognize parts of objects. The method teaches a second model the composition of objects from parts and the connectivity between the parts. This second model is a symbolic, transparent model. Experimental results will be discussed, including those related to object detection in satellite images. Contrary to conventional wisdom, the experimental results show that parts-based models can maintain the accuracy of basic CNN models. They also show that parts-based models can provide protection from adversarial attacks. Thus, a school bus will not become an ostrich with the tweak of a few pixels.
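The overall shape of such a two-stage pipeline can be sketched as follows. This is an assumed illustration, not the tutorial's actual implementation: a binary vector of parts decoded from the CNN, concatenated with a flattened part-connectivity matrix, is fed to a small MLP that plays the role of the second, symbolic-style model. The part names, encoding, and MLP weights are all placeholders.

```python
import numpy as np

# Illustrative sketch of a parts-plus-connectivity input to a small MLP.
# Part names, encoding, and (random) weights are assumptions for this demo.
rng = np.random.default_rng(0)

PARTS = ["fur", "whiskers", "claws", "ears"]
N = len(PARTS)

def encode(parts_present, adjacency):
    """parts_present: set of part names decoded from the CNN;
    adjacency: N x N 0/1 matrix of which parts are connected."""
    presence = np.array([1.0 if p in parts_present else 0.0 for p in PARTS])
    return np.concatenate([presence, adjacency.flatten()])

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with ReLU, sigmoid output: a score for the object."""
    h = np.maximum(0.0, x @ w1 + b1)
    logit = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))

dim = N + N * N
w1 = rng.normal(size=(dim, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=8); b2 = 0.0

adj = np.zeros((N, N))
adj[0, 1] = adj[1, 0] = 1.0   # e.g., whiskers connected to fur (illustrative)
x = encode({"fur", "whiskers"}, adj)
p = mlp_forward(x, w1, b1, w2, b2)
print(float(p))  # a score between 0 and 1
```

Because the MLP's input is an explicit list of parts and their connections rather than raw pixels, each prediction can be traced back to the parts that supported it, which is what makes the second model transparent. It also suggests why the approach resists pixel-level adversarial perturbations: an attack must alter the decoded parts and their connectivity, not just a few pixels.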
