
How do you calculate latent variables?



A new paper by Yacine Ait-Sahalia, Chenxu Li and Chen Xu Li offers a cute way to estimate latent variable models.

The method does not lean on tractable distributions but rather on a collection of basis functions used to approximate the transition density. I suspect there are many people in my feed who will be truly interested in the mathematical technique, and also many who will be truly uninterested (but read on).
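To give a flavor of the basis-function idea, here is a toy Gram-Charlier-style sketch (only loosely in the spirit of the paper, not its actual estimator): expand an unknown density against probabilists' Hermite polynomials weighted by a Gaussian, with the coefficients estimated from sampled transitions.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_density_estimate(samples, x, order=6):
    """Estimate a density as phi(x) * sum_k c_k He_k(x), where He_k are
    probabilists' Hermite polynomials and c_k = E[He_k(X)] / k! is
    estimated by a sample average. A toy basis-function expansion."""
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0  # select the k-th Hermite polynomial
        coeffs.append(np.mean(hermeval(samples, basis)) / math.factorial(k))
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # Gaussian weight
    return phi * hermeval(x, np.array(coeffs))

rng = np.random.default_rng(0)
samples = rng.normal(0.3, 1.0, size=20000)  # stand-in for observed transitions
x = np.linspace(-4, 4, 201)
fhat = hermite_density_estimate(samples, x, order=6)
```

The appeal of this style of expansion is that nothing about it requires the target density to belong to a tractable family; the basis does the work.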

The pertinent question: will this new algorithm find its way to business problems any time soon, or will it linger in obscurity? I optimistically predict the former, because "it" is likely to join some other non-gaussian filters that already patrol the streams (see Search for Streams (microprediction.org)), and so anyone with a relevant business problem will benefit from it sooner or later.

Perhaps you would like to be the one to set this new algorithm on its path to destiny? See Submitting predictions | microprediction for instructions. There’s $50,000 a year in prize-money to add to the scientific incentive.

The streams your algorithm will attack include plenty of things whose volatilities and regimes are latent (examples: Stream Dashboard (microprediction.org)), and also some whose forecast volatilities are themselves, as of yesterday, the subject of prediction (see Stream Dashboard (microprediction.org) if you need convincing that volatility is its own animal worthy of competitive prediction).
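Latent volatility is exactly the kind of thing a non-gaussian filter recovers from such streams. A minimal sketch, assuming nothing about any particular stream: a bootstrap particle filter tracking the log-variance of a toy stochastic-volatility model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic-volatility model: latent AR(1) log-variance h_t,
# observed returns y_t = exp(h_t / 2) * noise. Only y is observed.
T, phi, sigma = 300, 0.97, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# Bootstrap particle filter for the latent h_t.
N = 2000
particles = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2), size=N)  # stationary prior
h_filtered = np.zeros(T)
for t in range(T):
    particles = phi * particles + sigma * rng.normal(size=N)        # propagate
    logw = -0.5 * (y[t] ** 2 * np.exp(-particles) + particles)      # N(0, e^h) log-lik
    w = np.exp(logw - logw.max())                                   # stabilized weights
    w /= w.sum()
    h_filtered[t] = np.dot(w, particles)                            # filtered mean
    particles = rng.choice(particles, size=N, p=w)                  # resample
```

Nothing here is gaussian in the filtering sense: the posterior over h_t is represented by weighted particles, which is why this family of methods handles latent volatility and regimes that a Kalman filter cannot.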

But how fortunate that only one person needs to figure all this out, so that the rest of us can take advantage of it. To benefit, you merely need to publish data, and look how easy that is (microprediction/create_a_stream.py at master · microprediction/microprediction · GitHub). Certainly, a lot easier than implementing and benchmarking every new approach that is devised, forever.

You’ll probably end up benefiting from a constantly changing ensemble rather than one “best” model, by the way, even if this paper is the be-all and end-all.

Why? The paper proposes a technique. It’s a tool, and it can be implemented and used in different ways. Even if it were *the* key tool, for the sake of argument, it would still be the case that a combination of algorithms authored by different people using this tool will outperform any given one.

Now I grant you that the use of high-velocity, algorithm-friendly continuous lotteries for better-than-SOTA continuously improving ensembles is still a bit of a brain explosion (and a threat) for some academics, some practitioners, and even some data science influencers seeking to add meta-guidance value.

But you don’t need their blessing or opinion. Nonsense-free prediction merely requires publishing the data.
