Q-Learning Reinforcement Learning

Q-Learning Reinforcement Learning: In the Black-Scholes (Merton) Worlds

This paper presents a discrete-time option pricing model, one rooted in Reinforcement Learning (RL), and more specifically in the famous Q-Learning method of RL.

We construct a risk-adjusted Markov Decision Process for a discrete-time version of the classical Black-Scholes-Merton (BSM) model.

In this model, the option price is an optimal Q-function, while the optimal hedge is the second argument of this optimal Q-function, so that both the price and the hedge are parts of the same formula.

Pricing is done by learning to dynamically optimize risk-adjusted returns for an option-replicating portfolio, as in Markowitz portfolio theory.
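
To make the "price and hedge from the same formula" point concrete, here is a minimal tabular sketch in Python. The grid sizes, variable names, and sign convention below are illustrative assumptions, not the paper's notation: the idea is only that the option value and the optimal hedge are both read off one learned Q-function.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact formulation):
# a tabular Q-function over a discretized state grid and hedge grid.
#   hedge*(s) = argmax_a Q*(s, a)
#   price*(s) = max_a  Q*(s, a)   (up to the paper's sign convention)
n_states, n_hedges = 50, 21                   # assumed grid sizes
hedge_grid = np.linspace(0.0, 1.0, n_hedges)  # candidate hedge ratios
Q = np.zeros((n_states, n_hedges))            # Q*(s, a), to be learned

def optimal_hedge_and_price(Q, s):
    """Both quantities come out of the same learned object."""
    a_star = int(np.argmax(Q[s]))             # optimal hedge = 2nd argument of Q*
    return hedge_grid[a_star], Q[s, a_star]   # (hedge ratio, option value)
```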

Video: Did Finance Oversleep a Century of Development in Physics?
https://youtube.com/watch?v=F5CAhdAdxa0

Using Q-Learning and related methods, the model, once created in a parametric setting, is able to go model-free and learn to price and hedge an option directly from data generated by a dynamic replicating portfolio that is rebalanced at discrete times.
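
To give a sense of what such training data can look like in the synthetic case, here is a minimal Monte Carlo sketch of discrete-time BSM (geometric Brownian motion) price paths observed at the rebalancing dates. Every parameter value is an assumption chosen for illustration.

```python
import numpy as np

# Illustrative synthetic data: discrete-time BSM (geometric Brownian
# motion) price paths, observed at n_steps rebalancing dates.
S0, mu, sigma = 100.0, 0.05, 0.2           # spot, drift, volatility (assumed)
T, n_steps, n_paths = 1.0, 24, 10_000      # horizon, dates, paths (assumed)
dt = T / n_steps

rng = np.random.default_rng(0)
z = rng.standard_normal((n_paths, n_steps))
log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_inc, axis=1))  # price paths, (n_paths, n_steps)
```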

If the world behaves according to BSM, then our risk-averse Q-Learner converges, given enough training data, to the true BSM price and hedge ratio of the option in the continuous-time limit.

This holds even if the hedges applied at the stage of data generation are completely random, because Q-Learning is an off-policy algorithm (i.e. it can learn the BSM model itself, too!).
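
The off-policy property is easy to see in the standard tabular Q-learning update, sketched below with illustrative names and parameter values: the bootstrapped target maximizes over next actions, regardless of which (possibly random) hedge the behavior policy actually applied in the data.

```python
import numpy as np

n_states, n_hedges = 50, 21
Q = np.zeros((n_states, n_hedges))
alpha, gamma = 0.1, 0.99                   # learning rate, discount (assumed)

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy: the target takes a max over next actions, not the action
    # the behavior policy (e.g. a uniformly random hedge) actually took.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Data generation may use completely random hedges:
#   a = np.random.default_rng().integers(n_hedges)
```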

If the world is different from a BSM world, the Q-Learner will find that out as well, because Q-Learning is a model-free algorithm.

For finite time steps, the Q-Learner is able to efficiently calculate both the optimal hedge and the optimal price of the option, directly from trading data and without an explicit model of the world.
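
One plausible reading of how such a finite-step, batch computation can proceed (and of the later remark that only basic linear algebra is needed) is a backward-in-time least-squares fit in the spirit of Fitted Q Iteration. The function below is a sketch under that assumption; its names, shapes, and the zero terminal value are illustrative, not the paper's notation.

```python
import numpy as np

def fit_values_backward(basis, rewards, gamma=0.99):
    """Sketch: backward-in-time least-squares value fit on observed data.

    basis[t]   : (n_paths, n_basis) feature matrix at rebalancing date t
    rewards[t] : (n_paths,) one-step risk-adjusted rewards realized at t
    """
    n_steps = len(rewards)
    n_paths = rewards[0].shape[0]
    values = np.zeros((n_steps + 1, n_paths))  # terminal value set to 0 here
    for t in reversed(range(n_steps)):
        target = rewards[t] + gamma * values[t + 1]            # Bellman target
        w, *_ = np.linalg.lstsq(basis[t], target, rcond=None)  # least squares
        values[t] = basis[t] @ w               # fitted value at date t
    return values[0].mean()                    # value estimate at t = 0
```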

This suggests that RL may provide efficient data-driven and model-free methods for optimal pricing and hedging of options once we depart from the academic continuous-time limit. And vice versa: one may view option pricing methods developed in Mathematical Finance as special cases of model-based Reinforcement Learning.

In conclusion, due to the simplicity and tractability of our model, which only needs basic linear algebra (plus Monte Carlo simulation, if we work with synthetic data), and given its close relation to the original BSM model, we suggest that our model be used for benchmarking different RL algorithms, specifically for financial trading applications.

Read The Full Paper

Written by Professor Igor Halperin