Causal reasoning is the process of identifying the connection between cause and effect. It is fundamental to human thinking and to our understanding of the world.
Causal reasoning helped us gain superior intelligence and adapt to our environment like no other species. For example, when early people observed trees catching fire after being struck by lightning, they began to connect the two phenomena. Eventually, this causal relationship taught people where to find fire.
Causal reasoning is so important to humans that people even invent causal relationships when they cannot find a factual cause. Before physicists explained how lightning forms through the scientific method, many civilizations created a mythological figure, a "god of thunder," to fill the gap. Causal reasoning not only helped humans form common sense; it is also a powerful scientific tool.
Functions and regression are both built on the idea of a causal relationship: a change in X causes Y to change on the other side of the equation, so there is a causal relationship between the independent variable and the dependent variable. Surprisingly, most AI programs contain no causal reasoning logic at all. Many scholars argue that AI technology will hit a bottleneck and cannot advance further unless it integrates causal reasoning.
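As a minimal illustration of this X-causes-Y framing, here is a simple least-squares regression sketch using only the Python standard library. The data points are invented purely for illustration:

```python
# Minimal least-squares fit of y = a + b*x, using only the standard library.
# The data points below are invented for illustration.

def fit_line(xs, ys):
    """Return (intercept, slope) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Treat X as the independent variable driving Y: here the data follow y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # intercept 1.0, slope 2.0
```

Note that the regression itself only captures the statistical association; the causal reading (X drives Y rather than the reverse) is an assumption the modeler brings, which is exactly the gap the rest of this article discusses.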
Before we consider how causal reasoning might change AI technology, we first need to understand how current AI programs work. Most modern AI programs are built on correlation inference: the program gets smart by finding correlations between variables, without understanding the causal relationships between them. Correlation inference has had enormous success in recent decades, driven by the explosion of powerful computer hardware. More storage and faster processors boost AI.
For example, Google is one of the top companies researching self-driving cars. Google engineers developed a program that can direct a car to stop at traffic lights. The engineers first collect millions of photos and videos containing traffic lights, then feed that data to the program. The more data the program receives, the more accurate its predictions become. The program analyzes every new photo and adjusts its existing algorithm. The optimized program establishes a correlation between the features of traffic lights and the desired operation. This is how most machine learning programs work.
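The pipeline above can be sketched as a toy correlation-based classifier. This is not Google's system; it is a minimal nearest-centroid example on invented two-number "images" (say, a redness score and a greenness score), showing how a model learns a mapping from features to labels purely from statistical association:

```python
# Toy "correlation inference": a nearest-centroid classifier that learns to map
# feature vectors to labels purely from their statistical association in the
# training data. Features and labels are invented: [redness, greenness] -> action.

def train(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the given features."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

training = [
    ([0.9, 0.1], "stop"), ([0.8, 0.2], "stop"),   # red lights
    ([0.1, 0.9], "go"),   ([0.2, 0.8], "go"),     # green lights
]
model = train(training)
print(predict(model, [0.85, 0.15]))  # stop
print(predict(model, [0.15, 0.85]))  # go
```

The key point: nothing in the model represents *why* red means stop; it has only learned that redness and the "stop" label co-occur.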
The future of AI
Although AI technology has achieved many remarkable feats, some scholars remain pessimistic about AI's future without causal reasoning. Turing Award winner Yoshua Bengio expressed this view in Wired magazine: he believes deep learning won't realize its full potential, and won't deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen.
Bengio believes the drawback of correlation inference is that it works efficiently only when the training data overlap with the situations the program actually encounters. In many cases, that assumption does not hold.
If all programs are developed in the lab under controlled conditions, a problem the article's sources call "catastrophic forgetting" can arise: the AI program's prediction accuracy falls below expectations when the training task doesn't overlap with the testing task. This can cause critical system failures with serious consequences, because the AI program doesn't understand what a traffic light means; it could recognize a green balloon as a sign to go. Bengio is now working to put this theory into practice and has developed a deep learning prototype that can learn simple causal relationships.
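This failure mode can be demonstrated with a toy model (a sketch with invented data, not Bengio's prototype): a classifier that learned the correlation "green means go" keeps perfect accuracy on data like its training set, but loses accuracy the moment the test distribution contains green objects that are not traffic lights, because it never learned the underlying cause.

```python
# Toy demonstration of train/test mismatch. All data are invented.
# The classifier learns a greenness threshold: "green => go".

def train_threshold(samples):
    """Learn the greenness threshold midway between the two class means."""
    go = [f[1] for f, label in samples if label == "go"]
    stop = [f[1] for f, label in samples if label == "stop"]
    return (sum(go) / len(go) + sum(stop) / len(stop)) / 2

def predict(threshold, features):
    return "go" if features[1] > threshold else "stop"

def accuracy(threshold, samples):
    hits = sum(predict(threshold, f) == label for f, label in samples)
    return hits / len(samples)

training = [([0.9, 0.1], "stop"), ([0.8, 0.2], "stop"),   # red lights
            ([0.1, 0.9], "go"),   ([0.2, 0.8], "go")]     # green lights
threshold = train_threshold(training)

# Shifted test set: green balloons look like green lights but mean "stop".
shifted = [([0.9, 0.1], "stop"),    # red light
           ([0.1, 0.9], "go"),      # green light
           ([0.1, 0.9], "stop"),    # green balloon: correlation misleads
           ([0.2, 0.85], "stop")]   # another green non-light
print(accuracy(threshold, training))  # 1.0 on data like the training set
print(accuracy(threshold, shifted))   # 0.5 on the shifted distribution
```

A program with a causal model ("the light being green causes permission to proceed") would not be fooled by an object that is merely green.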
Another Turing Award winner recognizes the potential of causal reasoning too. Judea Pearl, professor of computer science at UCLA, lays out his views in The Book of Why: The New Science of Cause and Effect. He argues that current AI programs are merely "curve fitting," and that relying on association alone limits how much smarter machines can get.
If engineers want AI to think the way humans do, and for humans and machines to understand each other, causal reasoning is the path to take. Pearl made foundational contributions to probabilistic AI with Bayesian networks. In the early 1980s, his goal was to develop predictive and diagnostic programs: automatic systems that assist professionals with complex tasks. Now, Pearl thinks AI is wasting its computing power on fitting curves. If engineers want to design an AI program with human-level intelligence, causal reasoning is the next breakthrough.
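To give a flavor of the probabilistic models Pearl pioneered, here is a minimal two-node Bayesian network (Rain → WetGrass) evaluated by hand with the standard library. The network structure is the textbook rain/wet-grass example; the probabilities are invented for illustration:

```python
# A minimal Bayesian network with two nodes: Rain -> WetGrass.
# The structure encodes a directed dependency; the numbers are invented.

P_RAIN = 0.2                            # P(Rain = true)
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

def p_wet():
    """Marginal P(WetGrass = true), summing out the Rain variable."""
    return P_RAIN * P_WET_GIVEN[True] + (1 - P_RAIN) * P_WET_GIVEN[False]

def p_rain_given_wet():
    """Diagnostic query via Bayes' rule: P(Rain = true | WetGrass = true)."""
    return P_RAIN * P_WET_GIVEN[True] / p_wet()

print(p_wet())             # 0.2*0.9 + 0.8*0.1 = 0.26
print(p_rain_given_wet())  # 0.18 / 0.26 ≈ 0.692
```

This is the "diagnostic" direction Pearl was after in the 1980s: reasoning backward from an observed effect (wet grass) to the probability of its cause (rain), something pure curve fitting does not express.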
Challenge and doubt
Although causal reasoning is rapidly gaining popularity, engineers still need to overcome many challenges before causal reasoning can be integrated with AI at scale. In his research paper "The Seven Tools of Causal Inference, with Reflections on Machine Learning," Pearl concludes that current research on causal reasoning models remains conceptual:
The accuracy of all inference depends critically on the veracity of the assumed structure. If the structure differs from the one assumed, substantial errors may result.
Written by Frank Li
Edited by Alejandro Ortega & Alexander Fleiss