Dynamical mean field theory of learning
ORAL · Invited
Abstract
Training algorithms are key to the success of artificial and recurrent neural networks. However, we know very little about them. The main example of these algorithms is stochastic gradient descent, which is the workhorse of deep learning technology. I will review how dynamical mean field theory can be used to gain some understanding of training algorithms in prototypical settings.
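
To fix ideas, below is a minimal sketch of a plain stochastic gradient descent loop on a toy least-squares problem. The data, learning rate, and batch size are illustrative assumptions for this sketch and are not details from the talk.

```python
import numpy as np

# Minimal SGD sketch on a synthetic least-squares problem.
# All names and parameter values here are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + noise
n_samples, n_features = 1000, 20
X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)

w = np.zeros(n_features)   # parameters being trained
lr = 0.01                  # learning rate (step size)
batch_size = 32            # mini-batch size

for step in range(2000):
    # Random mini-batch: the source of stochasticity in SGD
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    # Gradient of the mean squared error on the mini-batch
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)
    # Gradient step
    w -= lr * grad

print("final training error:", np.mean((X @ w - y) ** 2))
```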
Presenters
- Pierfrancesco Urbani — CNRS, Institute of Theoretical Physics, CEA, Université Paris-Saclay
Authors
- Pierfrancesco Urbani — CNRS, Institute of Theoretical Physics, CEA, Université Paris-Saclay