Reinforcement Learning for Quantum Control

ORAL

Abstract

Quantum control steers the controllable degrees of freedom of a quantum system so that its dynamics follows the desired behavior with respect to chosen observables. We map quantum control problems into a reinforcement learning (RL) framework [P. Palittapongarnpim, P. Wittek, E. Zahedinejad, S. Vedaie, B. C. Sanders, Neurocomputing 268 (2017) 116-126], which eschews model-based optimization and instead searches for operating parameters based on trial and experience. The RL approach is especially valuable when models cannot be trusted, which is likely the norm for complex quantum systems arising, e.g., in scalable quantum-computing implementations.
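
A minimal sketch of such trial-and-experience search, assuming Python with NumPy; the names below (search_operating_parameters, run_trial) are illustrative placeholders, not taken from the cited work. The search improves operating parameters using only the scalar reward returned by each trial, with no model of the underlying quantum dynamics:

    import numpy as np

    def search_operating_parameters(run_trial, num_params, iterations=200,
                                    step=0.1, rng=None):
        # run_trial(params) -> reward is a hypothetical callable that executes
        # one experiment (or simulation) with the given operating parameters
        # and returns a scalar figure of merit; its internals are never used.
        rng = rng or np.random.default_rng()
        best = rng.uniform(-1.0, 1.0, size=num_params)
        best_reward = run_trial(best)
        for _ in range(iterations):
            candidate = best + step * rng.normal(size=num_params)  # perturb and try
            reward = run_trial(candidate)
            if reward > best_reward:                               # keep what works
                best, best_reward = candidate, reward
        return best, best_reward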

We link the control-theoretic notions of system, control, bath, and universe to the RL notions of agent, environment, action, state, and reward, paying careful attention to the subtleties of the quantum-mechanical context. The RL state requires care because the information pipeline between agent and environment carries only classical information. Thus, the quantum state is meaningful only within the RL environment, and the classical RL state corresponds to a string representing the outcomes of a finite number of measurements conducted within the environment. We illustrate our approach for adaptive quantum-enhanced metrology and for a two-qubit gate in an ion trap.
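
A minimal sketch of this mapping for the adaptive phase-estimation example, assuming Python with NumPy; the class and parameter names (AdaptivePhaseEstimationEnv, num_measurements) and the toy terminal reward are illustrative assumptions, not the scheme used in the work. The quantum description (an unknown interferometric phase and its single-photon measurement statistics) stays inside the environment; the agent sees only the classical string of measurement outcomes as the RL state and supplies a feedback phase as the action:

    import numpy as np

    class AdaptivePhaseEstimationEnv:
        def __init__(self, num_measurements=10, rng=None):
            self.num_measurements = num_measurements
            self.rng = rng or np.random.default_rng()

        def reset(self):
            # Unknown phase is sampled inside the environment; the agent never sees it.
            self.phi = self.rng.uniform(0.0, 2.0 * np.pi)
            self.outcomes = []  # classical RL state: the measurement record so far
            return tuple(self.outcomes)

        def step(self, theta):
            # Action: the controllable feedback phase theta.
            p1 = 0.5 * (1.0 + np.cos(self.phi - theta))  # single-photon detection probability
            outcome = int(self.rng.random() < p1)
            self.outcomes.append(outcome)
            done = len(self.outcomes) == self.num_measurements
            reward = 0.0
            if done:
                # Toy terminal reward: treat the last feedback phase as the estimate;
                # a real figure of merit would be, e.g., the sharpness of the phase
                # estimate averaged over many runs.
                reward = np.cos(self.phi - theta)
            return tuple(self.outcomes), reward, done

    # Example rollout with a placeholder random policy:
    env = AdaptivePhaseEstimationEnv(num_measurements=5)
    state, done = env.reset(), False
    while not done:
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        state, reward, done = env.step(theta)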

Presenters

  • Barry Sanders

    Institute for Quantum Science and Technology and Department of Physics and Astronomy, University of Calgary

Authors

  • Barry Sanders

    Institute for Quantum Science and Technology and Department of Physics and Astronomy, University of Calgary

  • Pantita Palittapongarnpim

    Institute for Quantum Science and Technology, University of Calgary

  • Shakib Vedaie

    Institute for Quantum Science and Technology, University of Calgary