Theoretical Physicist Dr. Charles Martin on the Future of Deep Learning

Dr. Charles Martin is Chief Scientist of Calculation Consulting and a scientific advisor to the Anthropocene Institute, where he assists with efforts in the Covid-19 pandemic response. He also helps vet and provide due diligence for the Page family's interests and investments in general physics and chemistry, including modern nuclear and quantum technologies.

Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.[1][2][3]

Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they have produced results comparable to, and in some cases surpassing, human expert performance.[4][5][6][7]

Artificial neural networks (ANNs) were inspired by the information processing and distributed communication nodes of biological systems, but they differ from biological brains in several ways. In particular, neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analogue.[8][9][10]

The adjective “deep” in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function and a single hidden layer of unbounded width can be. Deep learning is a modern variation concerned instead with an unbounded number of layers of bounded size, which permits practical application and optimized implementation while retaining theoretical universality under mild conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability, and understandability, hence the “structured” in “deep structured learning.”
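The idea of depth as a stack of bounded-width layers, each with a nonpolynomial activation, can be sketched in a few lines. This is an illustrative toy (the layer sizes, ReLU choice, and helper names are my own assumptions, not anything from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU is a standard nonpolynomial activation.
    return np.maximum(0.0, x)

def make_mlp(layer_sizes):
    # One (weights, bias) pair per consecutive layer pair.
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:  # activation on hidden layers only
            x = relu(x)
    return x

# Depth comes from stacking layers of bounded width (here, width 16),
# rather than from a single very wide hidden layer.
params = make_mlp([4, 16, 16, 16, 3])
out = forward(params, rng.standard_normal((8, 4)))
print(out.shape)  # (8, 3)
```

Each hidden layer here has the same small width; adding more entries to the size list deepens the network without widening it, which is exactly the trade-off the paragraph above describes.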