Deep Learning Image Classification: How AI Can Explain Its Thinking

Can we trust artificial intelligence?
New deep learning models have proved highly effective at tasks such as detecting cancer and powering autonomous vehicles. Unfortunately, these algorithms struggle to explain their decision-making processes to their human operators.
Recently, though, a team of international researchers taught an AI to support its reasoning with textual and visual evidence, opening a window into the "black box" of AI.
In their paper, published on February 15, 2018, the team outlined a potential model for artificial intelligence, which they aptly named the "Pointing and Justification Explanation" (PJ-X).
Using this model, an AI provides textual and visual evidence for its decisions. For example, when given an image of a baseball game and asked what sport is depicted, the AI responds that it is baseball and highlights the areas of the image it considers important.
In the case of the baseball game, the AI highlights the batter's bat, the ball, and the catcher's mitt as justification for its analysis of the image.
The team then combined these visual justifications with textual explanations. When given an image of zebras and asked whether the animals are in a zoo, for instance, the AI provides a short explanation such as, "No, because the zebras are standing in a green field." Learning how to point to visual evidence helps the AI generate the corresponding textual explanations, and the effectiveness of the model demonstrates the value of developing a multimodal justification system.
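The "pointing" behavior described above is commonly built on visual attention: the model scores each region of the image for relevance to the question and highlights the highest-scoring regions. Below is a minimal toy sketch of that idea; all names, feature values, and the question encoding are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x):
    # Convert raw scores into attention weights that sum to 1.
    e = np.exp(x - x.max())
    return e / e.sum()

def point_to_evidence(region_features, question_vec):
    """Score each image region against the question and return attention weights."""
    scores = region_features @ question_vec   # relevance of each region
    return softmax(scores)

# Toy example: 4 image regions, each summarized by a 3-d feature vector.
# Region 2 (say, the bat/ball/mitt area) is made to match the question best.
regions = np.array([[0.1, 0.0, 0.2],
                    [0.0, 0.3, 0.1],
                    [0.9, 0.8, 0.7],
                    [0.2, 0.1, 0.0]])
question = np.array([1.0, 1.0, 1.0])  # stand-in for an encoded "what sport is this?"

weights = point_to_evidence(regions, question)
evidence_region = int(weights.argmax())   # the region the model "points" at
```

Here `evidence_region` picks out region 2, the analogue of the model highlighting the bat and mitt; a full system like PJ-X pairs such an attention map with a generated sentence explaining the answer.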