Explainable AI in Finance

Artificial intelligence models, especially deep learning models, are typically seen as black boxes: the reasoning behind their decisions is often unknown. This opacity leaves room for bias to take hold, and bias that goes uncorrected will continue to grow.
As artificial intelligence is adopted by automated trading systems, peer-to-peer lending platforms, and other impactful parts of the financial ecosystem, the potential consequences of mistakes become more severe. Because of this, explainable artificial intelligence (XAI) has been proposed as a way to harness the benefits of AI while keeping it accountable. XAI supports decision traceability, making it possible to understand the reasoning behind errors, apply risk mitigation techniques to improve performance, and establish regulation.
Reinforcement Learning and Explainable Reinforcement Learning
Reinforcement learning (RL), as currently implemented, gives a system freedom in the pathways it takes to a solution. These multiple paths are then assessed on quantitative attributes, typically a reward signal, to determine which best completed the task. The proposed remedy for the resulting opacity is to combine the abilities of RL with transparency, an approach referred to as explainable reinforcement learning (XRL).
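As a concrete illustration, the sketch below implements tabular Q-learning on a made-up five-state chain environment; the environment, hyperparameters, and variable names are this example's own assumptions, not anything from the article. The agent is free to try many pathways, but the only record of its preferences is a table of numeric values.

```python
import random

# Minimal sketch of tabular Q-learning on a hypothetical 5-state chain
# (an invented example): the agent may wander left or right, and the paths
# it takes are scored only by the reward they earn.
N_STATES, ACTIONS = 5, [0, 1]          # 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; only reaching the right end pays a reward."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        nxt, reward = step(state, action)
        # Update the value estimate purely from the quantitative reward signal;
        # nothing here records *why* one action was preferred over another.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
```

Nothing in this loop produces a human-readable rationale for the learned behavior, which is precisely the gap XRL aims to fill.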
XRL research splits into two broad approaches: making the algorithm itself transparent and interpretable, and providing reasoning after a decision has been made. Both have been studied extensively in academic circles. However, these solutions must also be translatable to non-experts, because adoption by the common user should be overseen by governing bodies, and those regulatory agencies need to understand the implications of the systems they regulate.
One proposed method of explaining reinforcement learning is state representation learning (SRL), a form of feature learning that reduces dimensionality so that an RL agent can operate on a compact description of a complex environment. SRL may help explain why a decision was made and the steps used to derive it [1].
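As a rough illustration, and only a simplified stand-in for the SRL methods surveyed in [1], the sketch below trains a tiny linear autoencoder in NumPy to compress 20-dimensional raw observations into a 3-dimensional state. The data, dimensions, and learning rate are all invented for the example.

```python
import numpy as np

# Simplified stand-in for state representation learning (not the method in
# [1]): a tiny linear autoencoder that compresses 20-dimensional raw
# observations into a 3-dimensional state an RL agent could act on.
rng = np.random.default_rng(0)
obs = rng.normal(size=(1000, 20))            # hypothetical raw observations
W_enc = rng.normal(scale=0.1, size=(20, 3))  # encoder: observation -> compact state
W_dec = rng.normal(scale=0.1, size=(3, 20))  # decoder: compact state -> reconstruction
lr = 0.01

for _ in range(200):
    z = obs @ W_enc            # low-dimensional state representation
    recon = z @ W_dec
    err = recon - obs          # reconstruction error drives the representation
    # Gradient descent on mean squared reconstruction error.
    W_dec -= lr * z.T @ err / len(obs)
    W_enc -= lr * obs.T @ (err @ W_dec.T) / len(obs)

state = obs[0] @ W_enc  # 3 numbers that can be inspected instead of 20
```

The compressed state is far easier to inspect and reason about than the raw observation, which is the sense in which SRL supports explanation.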
In some cases explainability is needed retrospectively: backtracking through the process an RL system has completed to check for steps that may have gone wrong. One method suggests gathering statistics about different portions of the interaction while the agent is acting in its environment. The proposed introspection framework then highlights the statistics that are especially interesting and influential in the decision-making process [2].
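A minimal sketch of that idea follows. It is loosely modeled on the introspection framework of [2], but the specific statistics (visit counts and reward variance) and thresholds are this example's own assumptions, not the paper's actual interestingness elements.

```python
from collections import defaultdict
import statistics

# Illustrative sketch: record simple statistics while the agent interacts
# with the environment, then surface the states that stand out.
visits = defaultdict(int)
rewards = defaultdict(list)

def record(state, reward):
    """Log one step of agent-environment interaction."""
    visits[state] += 1
    rewards[state].append(reward)

def interesting_states(min_visits=5):
    """Flag states that are rarely seen or have unusually variable reward."""
    flagged = []
    for s, rs in rewards.items():
        if visits[s] < min_visits:
            flagged.append((s, "rarely visited"))
        elif len(rs) > 1 and statistics.pstdev(rs) > 1.0:
            flagged.append((s, "high reward variance"))
    return flagged

for s, r in [("bid", 0.2), ("bid", 1.9), ("hold", 0.1)]:
    record(s, r)
print(interesting_states(min_visits=2))  # -> [('hold', 'rarely visited')]
```

Flagged states give an auditor a short list of decision points worth backtracking through, rather than the full interaction history.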
Although the majority of XRL methods have focused on explaining image recognition and robotics applications, these are not the only places they could be applied. Artificial intelligence has become important in quantitative trading systems, lending, and other financial interactions.
For this reason, explaining the reasoning behind AI decisions is highly important in finance. With further progress, peer-to-peer lending, risk assignment, and interest-rate setting could all become more transparent. Explainable AI could also help automate repetitive interactions such as mortgage lending and refinancing without the fear of bias against specific demographics that has often accompanied an AI-centric banking system.
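To make the lending case concrete, here is a hypothetical sketch: a toy logistic scoring model whose linear terms are reported feature by feature, so an applicant can see what drove the decision. The feature names and weights are invented, and the attribution is a simple linear decomposition rather than a full XAI toolkit.

```python
import math

# Hypothetical toy model (names and weights invented for illustration):
# a logistic score for peer-to-peer lending, decomposed so each feature's
# contribution to the decision can be reported to the applicant.
WEIGHTS = {"credit_score": 0.8, "debt_to_income": -1.2, "loan_amount": -0.4}
BIAS = 0.1

def explain(applicant):
    """Return the approval probability and each feature's signed contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    approval_prob = 1 / (1 + math.exp(-logit))
    return approval_prob, contributions

prob, why = explain({"credit_score": 1.4, "debt_to_income": 0.6, "loan_amount": 0.9})
print(f"approval probability: {prob:.2f}")
for feature, contrib in sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {contrib:+.2f}")  # signed contribution to the decision
```

For a linear model this decomposition is exact; for the deep models discussed above, richer attribution methods would be needed, but the output an applicant or regulator sees could look much the same.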
The increased need for algorithmic transparency stems in part from the distrust created by the compounded effects of the 2010 Flash Crash. Although the Securities and Exchange Commission never identified a single triggering cause, it is known that high-frequency trading models amplified the crash through cascading selling. Being able to understand what these algorithms do has helped improve regulation in the space: the crash led to a market-wide circuit breaker that now triggers at a 7% decline rather than the original 10%.
Partly as a result, many young investors avoid the stock market altogether, looking for yield elsewhere, from crypto to peer-to-peer lending.
Written by Hakil Haxhiu
Edited by Luca Vernhes, Jack Argiro, Hantong Wu, Benjamin Binday, Calvin Ma, Jay Devon & Lika Mikhelashvili
Citations
- Heuillet, Alexandre, Fabien Couthouis, and Natalia Díaz-Rodríguez. “Explainability in deep reinforcement learning.” Knowledge-Based Systems 214 (2021): 106685.
- Sequeira, Pedro, and Melinda Gervasio. “Interestingness elements for explainable reinforcement learning: Understanding agents’ capabilities and limitations.” Artificial Intelligence 288 (2020): 103367.