Towards Artificial Intelligent Reasoning

Humans are known to use both intuition and reasoning in their decision-making process.

Intuition is associated with making the decision that carries the highest expected reward, where that expectation is based purely on past experiences (i.e. model-free reinforcement learning). Reasoning is likewise associated with making the decision that carries the highest expected reward, but this time the expectation is based on representations of the world used to predict outcomes and infer rewards that may or may not have been directly experienced in the past (i.e. model-based reinforcement learning).
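
To make the distinction concrete, here is a minimal sketch in Python (an illustration, not the author's implementation; the toy MDP, its numbers, and the function names are assumptions). Model-free Q-learning updates its reward expectation purely from experienced transitions, while model-based value iteration computes the same expectation by planning over an explicit model of the world.

    import numpy as np

    # Illustrative 2-state, 2-action MDP (assumed for the sketch).
    n_states, n_actions, gamma = 2, 2, 0.9
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],     # P[s, a, s'] transition probabilities
                  [[0.9, 0.1], [0.2, 0.8]]])
    R = np.array([[1.0, 0.0], [0.0, 2.0]])      # R[s, a] expected immediate reward

    # Model-free "intuition": update expectations purely from experienced transitions.
    def q_learning_update(Q, s, a, r, s_next, alpha=0.1):
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        return Q

    # Model-based "reasoning": predict outcomes by planning over an explicit model.
    def model_based_q(P, R, iters=200):
        Q = np.zeros((n_states, n_actions))
        for _ in range(iters):
            V = Q.max(axis=1)
            Q = R + gamma * (P @ V)
        return Q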

Both pathways compete for control of decision making at any given time and for any given task. Which one dominates is mediated by our own experience of their relative utility, where utility is a trade-off between the rewards derived from making good decisions and the complexity of making such decisions. A key benefit of this architecture is that it allows us to respond quickly and flexibly to unexpected changes in rewards and/or transition probabilities, e.g. when our intuition is no longer valid.

As we collaborate with others, this same mediation process further helps us compare and prioritize endogenous and exogenous learnings in order to maximize the utility of decisions based on our collective knowledge.

This architecture provides a blueprint for how to implement collaborative and cooperative learning and reasoning among computer agents.

In this blueprint, each agent consists of a model-free and/or a model-based reinforcement learning process, together with an uncertainty-based arbitration process that continuously evaluates their relative precision and accuracy. The arbitration process determines which learning process should prevail in driving both individual and collective decision making.
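
As a hedged sketch of such an arbitration step (the precision-weighting rule and all names below are illustrative assumptions, not a prescribed mechanism), each learning process reports a value estimate together with a variance, and the more reliable process is given control of the decision:

    import numpy as np

    def arbitrate(value_mf, var_mf, value_mb, var_mb):
        """Uncertainty-based arbitration between the two learning processes.

        value_*: each process's value estimate for the candidate decision.
        var_*:   the variance (inverse reliability) each process attaches to it.
        The more reliable (higher precision) process drives the decision.
        """
        precision_mf, precision_mb = 1.0 / var_mf, 1.0 / var_mb
        p_model_based = precision_mb / (precision_mf + precision_mb)
        chosen_value = value_mb if p_model_based >= 0.5 else value_mf
        return chosen_value, p_model_based

    # Example: the model-based estimate is far more precise, so reasoning prevails.
    print(arbitrate(value_mf=1.2, var_mf=0.5, value_mb=1.0, var_mb=0.1))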

The mediation process is in essence a multi-objective optimization process.

At the individual level, the optimum solution space consists of a Pareto-optimal frontier: when computational resources and latency are non-issues, model-based reasoning always dominates, whereas when computational resources are scarce and speed is critical, model-free intuition dominates (e.g. the fight-or-flight response).
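
A small illustrative sketch of that frontier (the candidate values are invented): each decision option is scored by expected reward and computational cost, and only the options not dominated on both objectives are kept.

    def pareto_frontier(candidates):
        """Keep the non-dominated (reward, cost) points.

        Each candidate is (expected_reward, computational_cost); a point is on
        the frontier if no other point has both higher reward and lower cost.
        """
        frontier = []
        for r, c in candidates:
            dominated = any(r2 >= r and c2 <= c and (r2, c2) != (r, c)
                            for r2, c2 in candidates)
            if not dominated:
                frontier.append((r, c))
        return frontier

    # Model-free options are cheap but lower reward; model-based ones costly but better.
    options = [(0.6, 1), (0.7, 2), (0.65, 5), (0.9, 20), (0.85, 40)]
    print(pareto_frontier(options))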

Extending this concept to multiple agents, a solution space can be defined by continuously weighting the utility of each agent based on the “return” and “variance” of its learning models, and a Pareto-optimal frontier can be derived, much like in an optimally diversified stock portfolio.
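
As an illustration of the portfolio analogy (the diagonal-covariance weighting rule below is an assumption, not the author's formula), each agent's influence can be set in proportion to its expected “return” divided by its “variance”, exactly as in mean-variance portfolio construction:

    import numpy as np

    def agent_weights(expected_returns, variances, risk_aversion=1.0):
        """Mean-variance style weights over agents (assumed diagonal covariance).

        Agents whose learning models deliver high expected reward with low
        variance receive more influence on the collective decision.
        """
        mu = np.asarray(expected_returns, dtype=float)
        sigma2 = np.asarray(variances, dtype=float)
        raw = mu / (risk_aversion * sigma2)      # classic w proportional to inverse-variance times return
        raw = np.clip(raw, 0.0, None)            # no "short selling" of an agent
        return raw / raw.sum()

    # Three agents: high-return/high-variance, balanced, and low-return/low-variance.
    print(agent_weights([0.9, 0.6, 0.3], [0.40, 0.10, 0.02]))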

In practice, ensemble techniques are commonly used as a mediation process, which largely amounts to sampling and weighting learnings across multiple agents.

While an interesting step forward, this implementation falls short of truly artificial collaborative learning and reasoning because of one key limitation: the need to establish a shared conception of the problem and the goal at hand across agents.

For an individual agent, that is typically a given. But for a collection of agents, that shared conception has to be constructed and maintained through human intervention. As a result, such attempts have generally focused on near-identical computer agents trained on a common dataset, resulting in little learning diversity and therefore few opportunities for unexpected value generation.

What has made human civilization so powerful and successful?

It is largely rooted in our ability as humans to work together.

Humans tackle complex tasks together despite our different backgrounds and individualities.

It is the very fact that we all experience life differently that is often the catalyst for innovative problem solving. Our ability to self-generate a shared conception of a problem and a goal is made possible by three key elements: a common sense, a common language, and a common logic.

A common sense is what allows us to perceive, analyze, and judge reality consistently across individuals and thus to trust each other’s experiences. 

A common language is what allows us to share information and transfer learning about what our own experiences have taught us to associate with certain outcomes.

A common logic is what allows us to integrate learnings from multiple individuals into a cohesive learning model and holistic decision-making process. This includes fair arbitration when individual learnings overlap or conflict with one another.

Interestingly, all three elements are also required when engaging in a natural language conversation, which Turing famously selected as the ultimate test of artificial intelligence.

With these prerequisites in place, it becomes possible to architect a mediation process that does more than simply weight the recommendations of multiple computer agents: one that automatically combines and hybridizes individual model-free and model-based learnings into a single, richer, and more powerful decision-making model, shared across agents, paving the way towards artificial intelligent reasoning.

Many AI leaders, from Michael Jordan to Judea Pearl, have argued that the one essential element missing in machine learning today that could enable a common sense, a common language, and a common logic simultaneously is causality.

To understand why, it is worth revisiting the framework of causal learning.

While most of machine learning today aims to find patterns and interactions between variables that are primarily correlational in nature, causal learning aims to quantify cause and effect interactions between free-to-vary variables and outcome variables.

This classification of the variable types provides a first set of common rules for agents to probe and sense the world in a consistent manner.

In addition, causal learning theory provides a second set of common rules for analyzing data and, specifically, for computing marginal probabilities or expectation values that can be shared across agents with high external validity. This is unlike the conditional probabilities or expectation values generated by agents that use non-causal learning models, which are generally biased and confounded by the specific data set each agent was trained on.
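
A minimal simulated example of the difference (the data-generating values are invented): the naive conditional contrast based on E[Y | T] is confounded, while the back-door-adjusted marginal contrast, corresponding to E[Y | do(T)], recovers the true effect and can therefore be shared across agents with external validity.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Simulated world: confounder Z drives both the treatment T and the outcome Y.
    Z = rng.binomial(1, 0.5, n)
    T = rng.binomial(1, 0.2 + 0.6 * Z)              # Z makes treatment much more likely
    Y = 1.0 * T + 2.0 * Z + rng.normal(0, 1, n)     # true causal effect of T on Y is 1.0

    # Naive conditional contrast: confounded, overstates the effect.
    naive = Y[T == 1].mean() - Y[T == 0].mean()

    # Back-door adjustment: average the within-stratum contrasts over P(Z).
    adjusted = sum(
        (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean()) * (Z == z).mean()
        for z in (0, 1)
    )
    print(f"naive conditional contrast: {naive:.2f}, adjusted marginal contrast: {adjusted:.2f}")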

In other words, causal learning generates knowledge that transcends individual agent experiences. As such, it can be aggregated into an abstracted and trusted knowledge repository, one that is shared and interpretable across agents, effectively defining a common language between them.

Finally, causal learning theory provides a common set of tools to quantify and report both the magnitude and the uncertainty around expected outcomes / effects, which allows for fair mediation and rational conflict resolution between agents based on consistent metrics about the quality (precision and accuracy) of their respective learning. The same metrics can further support risk-adjusted optimization of collective decision making.
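
One conventional way to realize that kind of mediation, offered here only as a sketch, is inverse-variance (fixed-effect) pooling: each agent reports an effect estimate with its standard error, and the pooled result carries both a combined magnitude and a combined uncertainty that collective decisions can be risk-adjusted against.

    import numpy as np

    def pool_effects(estimates, std_errors):
        """Inverse-variance (fixed-effect) pooling of agents' effect estimates.

        Returns the pooled effect and its standard error, so the collective
        decision can be judged against a consistent uncertainty metric.
        """
        est = np.asarray(estimates, dtype=float)
        se = np.asarray(std_errors, dtype=float)
        w = 1.0 / se**2
        pooled = np.sum(w * est) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        return pooled, pooled_se

    # Three agents report the same causal effect with different precision.
    print(pool_effects([1.1, 0.9, 1.4], [0.10, 0.20, 0.50]))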

This common learning framework then allows agents to collaborate, i.e. share their individual knowledge of a common problem to find a better collective optimum; cooperate, i.e. share their individual knowledge of subsets of a problem to find a better global optimum; reason, i.e. share their individual knowledge of distinct problems to solve a more complex problem; and quite possibly innovate together, i.e. apply their individual knowledge of various problem spaces to solve a seemingly unrelated problem.

So, what’s missing?

Both Judea Pearl and Michael Jordan have argued that successfully implementing such a framework is years, if not decades, out.

Arguably, that is less because of theoretical or technical difficulties and more because of a much-needed mindset shift in the field of AI.

In parallel with the 4th industrial revolution and the digitization of everything, the field of neuroscience has seen tremendous development in techniques to monitor and image brain activity, from electroencephalography to MRI.

Neuroscientists, however, are unlike many data scientists. Mainstream data scientists have been building models from the mountains of data passively generated by the IoT, and in doing so they have been treating the data as ground truth about the underlying system or process they are trying to model.

In contrast, neuroscientists have been using these new sensing technologies as tools, in combination with proven experimental techniques, to uncover the ground truth about how our brains operate. Their approach consists in actively probing the response of the human brain to a variety of external stimuli, and in doing so they have made tremendous progress toward mapping areas of functional specialization in the brain and understanding image processing by the human visual system.

No neuroscientist would think of passively collecting mountains of data from brain waves and attempting to train a neural network on that data in order to generate a data-driven model or “digital twin” of the human brain. Even if enough data were available today, the resulting model would provide little insight into how the brain actually works due to a lack of interpretability. Worse, it would almost surely result in a model that is fraught with bias and confounds.

The reason is quite simple: randomization is the most fundamental mechanism for generating bias-free causal learning, and in passively collected data, decisions or stimuli are hardly ever randomized.
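
A minimal simulation (with invented numbers) makes the point: when the stimulus is driven by a hidden confounder, as in passively collected data, the naive outcome contrast is biased, whereas when the same stimulus is assigned by a coin flip, that contrast recovers the true causal effect without any adjustment.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    Z = rng.binomial(1, 0.5, n)                       # hidden confounder

    def outcomes(T):
        return 1.0 * T + 2.0 * Z + rng.normal(0, 1, n)   # true effect of T is 1.0

    # Passively collected: the stimulus follows the confounder -> biased contrast.
    T_passive = rng.binomial(1, 0.2 + 0.6 * Z)
    Y_passive = outcomes(T_passive)

    # Actively randomized: the stimulus is a coin flip -> unbiased contrast.
    T_random = rng.binomial(1, 0.5, n)
    Y_random = outcomes(T_random)

    for name, T, Y in [("passive", T_passive, Y_passive), ("randomized", T_random, Y_random)]:
        print(name, round(Y[T == 1].mean() - Y[T == 0].mean(), 2))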

This is largely why previous attempts at applying machine learning to critical applications like healthcare have failed.

Much of the data community considers active machine-learning techniques too inefficient because of their intrinsic need to accumulate data over time rather than leverage existing data. Yet active machine learning provides a powerful foundation for driving computer-generated, data-driven causal models, provided digitization is used as a tool to collect data in accordance with the requirements of causal learning theory rather than as an end in itself.
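
As a hedged sketch of digitization used as a data-collection tool (the environment, its effect sizes, and the loop structure are invented for illustration), an active learning agent can deliberately randomize the decisions it digitizes, so that every logged observation supports unbiased causal estimation:

    import numpy as np

    rng = np.random.default_rng(1)

    def true_outcome(action):
        """Hidden environment the agent is probing (unknown to the agent)."""
        effects = {0: 0.0, 1: 0.5, 2: 1.5}
        return effects[action] + rng.normal(0, 1)

    # Active causal learning loop: deliberately randomize the probed action,
    # then log (action, outcome) pairs that are bias-free by construction.
    log = []
    for _ in range(300):
        action = int(rng.integers(0, 3))            # randomized assignment
        log.append((action, true_outcome(action)))

    data = np.array(log)
    for a in range(3):
        outcomes = data[data[:, 0] == a, 1]
        print(f"action {a}: estimated effect {outcomes.mean():.2f} (n={len(outcomes)})")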

Besides, active learning techniques need orders of magnitude less data to generate such models than passive learning techniques do, because the data is collected in ways that deliver orders of magnitude greater statistical power.

One can still use passive learning to accelerate learning, provided the datasets used have been collected with some level of randomization (e.g. past DOEs).

Emerging algorithms in Causal Reinforcement Learning, Deep Reinforcement Learning, and Deep Causal Learning are paving the way toward a future of computer agents capable of causal reasoning, both individually and collectively. This is likely to deliver a true step change in both the performance and the applicability of Artificial Intelligence in our everyday lives.

Written by Gilles Benoit
