Dynamical phase transition in quantum neural networks with large depth
ORAL
Abstract
In quantum machine learning theory, representation learning is captured by the dynamical evolution of quantum neural tangent kernels during gradient descent dynamics. This relates to the interplay between the number of training parameters, the dimension of the Hilbert space, and, notably, the loss function constructed for optimization tasks. In this work, we identify a dynamical phase transition in the training dynamics of quantum neural networks with large depth. When the target energy lies within the bulk of the cost-function spectrum, the neural tangent kernel is frozen, and the cost function decays exponentially with the training steps. When the target energy is right at the ground state of the cost function, the training dynamics experience a critical phenomenon, where both the neural tangent kernel and the residual error decay polynomially with training steps. When targeting below the ground-state energy, the residual error again decays exponentially, but now the neural tangent kernel also decays exponentially, in contrast to the frozen-kernel dynamics above the ground-state energy. We connect the dynamical phase transition to the closing of the gap of the Hessian matrix, which can be considered as the Hamiltonian that governs the neural network dynamics under imaginary time evolution. We provide a non-perturbative analytical theory to explain the phase transition via a constrained Haar ensemble at late time, when the final state approaches the ground state.
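The qualitative distinction between the frozen-kernel and critical regimes can be illustrated with a toy model (not the paper's actual quantum circuit dynamics): if the neural tangent kernel K stays constant, the residual error shrinks geometrically per step, while if the effective kernel itself shrinks in proportion to the residual (a stand-in assumption for the critical point), the decay becomes polynomial. A minimal sketch:

```python
# Toy residual-error dynamics under gradient descent with a neural
# tangent kernel K. The scaling K_t ~ eps_t at criticality is an
# illustrative assumption, not the paper's derived result.

def frozen_kernel(eps0, K, eta, steps):
    """Frozen kernel: eps_{t+1} = (1 - eta*K) * eps_t -> exponential decay."""
    eps, traj = eps0, [eps0]
    for _ in range(steps):
        eps *= (1 - eta * K)
        traj.append(eps)
    return traj

def critical_kernel(eps0, eta, steps):
    """Kernel shrinking with the residual: eps_{t+1} = eps_t - eta*eps_t**2,
    which decays only polynomially, roughly as 1/(eta*t)."""
    eps, traj = eps0, [eps0]
    for _ in range(steps):
        eps -= eta * eps * eps
        traj.append(eps)
    return traj

exp_traj = frozen_kernel(1.0, K=0.5, eta=0.1, steps=2000)
poly_traj = critical_kernel(1.0, eta=0.1, steps=2000)
# After 2000 steps the frozen-kernel residual is astronomically small,
# while the critical-regime residual is still of order 1/(eta*steps).
```

The contrast mirrors the abstract's statement: an exponentially decaying cost in the bulk versus polynomial decay of both the kernel and the residual error at the critical point.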
* NSF, ONR
Presenters
-
Bingzhi Zhang
University of Southern California
Authors
-
Bingzhi Zhang
University of Southern California
-
Junyu Liu
University of Chicago
-
Liang Jiang
University of Chicago
-
Quntao Zhuang
University of Southern California