Universal Quantum Control through Deep Reinforcement Learning
ORAL
Abstract
We discover in this work that deep reinforcement learning (RL) techniques are capable of solving complex multi-qubit quantum control problems robustly against control errors. We propose a control framework that jointly optimizes over stochastic control errors and facilitates time-dependent control of all independent single-qubit and two-qubit Hamiltonians, thus achieving full controllability for any two-qubit gate. As an essential ingredient, we derive an analytic leakage bound for a Hamiltonian control trajectory that accounts for both on- and off-resonant leakage errors. We utilize a continuous-variable policy-gradient RL agent consisting of two neural networks to find highest-reward/minimum-cost analog controls for a variety of two-qubit unitary gates crucial for quantum simulation. We achieve up to an order-of-magnitude improvement in gate time over the optimal gate synthesis approach based on the best known experimental gate parameters in superconducting qubits, an order-of-magnitude reduction in fidelity variance over solutions from both the noise-free RL counterpart and a baseline SGD method, and two orders of magnitude reduction in average infidelity over control solutions from the SGD method.
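To illustrate the kind of optimization the abstract describes, the sketch below trains a Gaussian policy over piecewise-constant control amplitudes with a REINFORCE-style policy gradient, rewarding gate fidelity averaged over stochastic amplitude errors. Everything here is an assumption for illustration: the toy Hamiltonian (two X drives plus a ZZ coupling), the target gate, and the single-network REINFORCE update are stand-ins for the paper's actual control model and two-network agent.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit control model (assumed for illustration; not the paper's Hamiltonian).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Controllable Hamiltonian terms: single-qubit X drives plus a ZZ coupling.
H_terms = [np.kron(X, I2), np.kron(I2, X), np.kron(Z, Z)]

def evolve(controls, dt=0.1, noise=0.0, rng=None):
    """Piecewise-constant evolution; `noise` is a multiplicative amplitude error."""
    U = np.eye(4, dtype=complex)
    for step in controls:  # step: one amplitude per Hamiltonian term
        err = rng.normal(0.0, noise, size=step.shape) if rng is not None else 0.0
        H = sum(a * Ht for a, Ht in zip(step * (1 + err), H_terms))
        U = expm(-1j * dt * H) @ U
    return U

U_target = expm(-1j * (np.pi / 4) * np.kron(Z, Z))  # target: a ZZ entangling gate

def fidelity(U):
    return abs(np.trace(U_target.conj().T @ U)) ** 2 / 16.0

# REINFORCE with a Gaussian policy over the control trajectory and a batch-mean
# baseline; averaging the reward over noise realizations is what pushes the
# learned controls toward robustness against stochastic control errors.
rng = np.random.default_rng(0)
T, K = 8, len(H_terms)           # time steps, control channels
mu = np.zeros((T, K))            # policy mean = the analog control trajectory
sigma, lr = 0.3, 0.2
for _ in range(400):
    samples = []
    for _ in range(6):
        eps = rng.standard_normal((T, K))
        r = np.mean([fidelity(evolve(mu + sigma * eps, noise=0.02, rng=rng))
                     for _ in range(2)])
        samples.append((r, eps))
    baseline = np.mean([r for r, _ in samples])
    grad = sum((r - baseline) * eps / sigma for r, eps in samples)
    mu += lr * grad / len(samples)
```

After training, the policy mean `mu` serves as the candidate control sequence; its noiseless fidelity should exceed the trivial (all-zero) controls, whose fidelity here is exactly 0.5.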
Presenters
-
Murphy Yuezhen Niu
Massachusetts Institute of Technology, Physics
Authors
-
Murphy Yuezhen Niu
Massachusetts Institute of Technology, Physics
-
Vadim Smelyanskiy
Google Inc., Quantum A. I. Laboratory
-
Sergio Boixo
Google Inc., Quantum A. I. Laboratory
-
Hartmut Neven
Google Inc., Quantum A. I. Laboratory