A Universal Training Algorithm for Quantum Deep Learning
ORAL
Abstract
Quantum variational algorithms have seen a recent surge of interest, yet their connection to classical deep neural networks has so far remained elusive. In this talk, we will establish how classical neural networks can be ported to quantum parametric circuits, and we will introduce a quantum-native backpropagation principle that can be leveraged to train any quantum parametric network. We will present two main quantum optimizers built on this principle: Quantum Dynamical Descent (QDD), which uses quantum-coherent dynamics to optimize network parameters, and Momentum Measurement Gradient Descent (MoMGrad), a quantum-classical analogue of QDD. We will briefly cover applications of QDD and MoMGrad to various quantum information learning problems, and show how these optimizers can be used to train classical neural networks in a quantum fashion. Furthermore, we will show how to efficiently train hybrid networks composed of classical neural networks and quantum parametric circuits, running on classical and quantum processing units, respectively.
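As a rough sketch of the quantum backpropagation principle underlying both optimizers (the notation below is illustrative, not taken verbatim from the talk): the loss is exponentiated as a phase acting on the parameter registers, which imprints the gradient of the loss onto the momenta conjugate to the parameters; those momenta can then be measured and fed into a classical update (the MoMGrad picture) or evolved coherently (the QDD picture).

% Illustrative, assumed notation: parameter operators \hat{\Phi}, conjugate
% momenta \hat{\Pi} with [\hat{\Phi}, \hat{\Pi}] = i, loss L, kicking rate \eta.
\begin{align}
  e^{+i\eta L(\hat{\Phi})}\, \hat{\Pi}\, e^{-i\eta L(\hat{\Phi})}
    &= \hat{\Pi} - \eta\, \partial_{\Phi} L(\hat{\Phi}), \\
  \langle \hat{\Pi} \rangle_{\text{after kick}}
    &= \langle \hat{\Pi} \rangle_{\text{before}} - \eta\, \langle \partial_{\Phi} L(\hat{\Phi}) \rangle .
\end{align}
% Measuring the momentum shift yields a gradient estimate for a classical
% parameter update (MoMGrad); alternating such phase kicks with kinetic
% evolution generated by \hat{\Pi}^{2} keeps the descent fully coherent (QDD).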
Talk based on [arXiv:1806.09729].
Presenters
- Guillaume Verdon, Institute for Quantum Computing
Authors
- Guillaume Verdon, Institute for Quantum Computing
- Jason Pye, Institute for Quantum Computing
- Michael Broughton, School of Computer Science, University of Waterloo