Can we trust artificial intelligence?
From cancer detection to autonomous vehicles, new deep learning models for artificial intelligence have proved highly effective in a wide range of fields. Unfortunately, these new algorithms struggle to explain their decision-making process to their human controllers. AI’s inability to provide justifications is both a legal hurdle for the technology’s progression and a source of distrust among the general public.
A few years ago, however, a team of international researchers taught AI to support its reasoning with visual evidence and textual explanations, opening a window into the ‘black box’ of AI.
In a paper published on February 15, 2018, the team outlined their work on a new model for artificial intelligence, one they have aptly named the “Pointing and Justification Explanation” (PJ-X) model. Using this model, artificial intelligence gives visual and textual evidence to explain its decisions.
For example, when given an image of a baseball game and asked what sport is depicted, the AI responds that it is baseball. The AI then highlights the areas of the image that it thinks are important. In the case of the baseball game, the AI highlights the player’s bat, the ball, and the catcher’s mitt as justification for its analysis of the image.
The team then combines the visual justifications with textual explanations. When given an image of zebras and asked if the animals are in a zoo, the AI provides a short fragment of text to explain its decision: “No, because the zebras are standing in a green field.” The effectiveness of their model demonstrates the value of developing a multimodal justification system.
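The core idea behind this kind of “pointing” is attention: the model weights image regions by their relevance to the question, and those weights double as visual evidence. The following is a minimal, hypothetical sketch of that mechanism, not the authors’ actual PJ-X implementation; the region features, question vector, and answer classes are all made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical inputs: 4 image regions, each encoded as an 8-dim feature vector,
# and one question encoded as an 8-dim vector (in practice these come from
# deep networks; random vectors stand in here).
region_feats = rng.normal(size=(4, 8))
question_feat = rng.normal(size=8)

# "Pointing": score each region by its similarity to the question,
# then normalize. High-weight regions are the visual justification.
attn = softmax(region_feats @ question_feat)

# The attended summary (weighted sum of regions) feeds the answer prediction.
context = attn @ region_feats

# Toy answer head: a linear layer over 2 hypothetical answer classes.
W_answer = rng.normal(size=(8, 2))
answer_idx = int(np.argmax(context @ W_answer))

print("attention over regions:", np.round(attn, 3))
print("predicted answer class:", answer_idx)
```

In the full multimodal setup described in the paper, the same attended evidence also conditions a text generator that produces the sentence-level explanation, so the visual and textual justifications point at the same regions.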
Moving forward, as AI is deployed throughout society, it will become imperative for industry leaders to be able to justify the method behind the technology. While many praise the progress made towards explainable decision-making, some experts believe that the technical requirement for justification ultimately hinders the technology’s capabilities. Watching how this plays out over the next decade will be a fascinating glimpse into the development of new technology and complexities of innovation.
Explainable AI Written by Alex Sheen & Edited by Rachel Weissman
Snow, Jackie. (2018, March 8). A new AI system can explain itself—twice. Retrieved March 6, 2019, from https://www.technologyreview.com/the-download/610447/a-new-ai-system-can-explain-itself-twice/
Park, D. H., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., & Rohrbach, M. (2018, February 15). Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. Retrieved March 6, 2019, from https://arxiv.org/abs/1802.08129