Accurate real-time feedback quantum control with reinforcement learning
ORAL
Abstract
Reinforcement learning (RL) has in recent years been used to achieve quantum control in complex, counterintuitive, and nonlinear problems. However, continuous measurement-based feedback control (MBFC) faces a major challenge: measurement noise makes it difficult to train RL agents quickly and to achieve accurate control from noisy measurement records [1]. Here we present a method for real-time stochastic state estimation that overcomes this hurdle and enables noise-resistant tracking of the conditional dynamics, including the full conditional density matrix of the quantum system [2]. This allows faster training and accurate discovery of control strategies, with the RL agent conditioned on the expectation values of any conditional observables, or on the full conditional density matrix, which is usually not readily and accurately accessible in practical real-time experiments.
[1] S. Borah, B. Sarma, M. Kewming, G. Milburn and J. Twamley, Phys. Rev. Lett. 127, 190403 (2021)
[2] S. Borah and B. Sarma, https://arxiv.org/abs/2301.07254
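To illustrate the idea of feedback conditioned on an estimated state rather than on the raw noisy record, the following is a minimal sketch (not the authors' code): Euler-Maruyama integration of a stochastic master equation for a continuously monitored qubit, where an estimator tracks the conditional density matrix from the measured homodyne record and a placeholder policy (standing in for a trained RL agent) computes the control from the estimate. All operators, rates, and the proportional-feedback rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

dt, eta, kappa = 1e-3, 1.0, 1.0       # time step, detector efficiency, measurement rate (assumed)
c = np.sqrt(kappa) * sz               # homodyne measurement operator (assumption)

def sme_step(rho, u, dW):
    """One Euler-Maruyama step of the conditional (stochastic) master equation.
    u is the control amplitude on sigma_x, dW the Wiener/innovation increment."""
    H = 0.5 * u * sx
    x = c + c.conj().T
    ex = np.real(np.trace(x @ rho))
    # deterministic part: Hamiltonian evolution plus measurement dissipator D[c]
    drho = (-1j * (H @ rho - rho @ H)
            + c @ rho @ c.conj().T
            - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)) * dt
    # stochastic part: measurement backaction superoperator H[c] driven by dW
    drho += np.sqrt(eta) * (c @ rho + rho @ c.conj().T - ex * rho) * dW
    rho = rho + drho
    return rho / np.real(np.trace(rho))

def policy(rho_est):
    """Placeholder for an RL policy: trivial proportional feedback on <sz>
    computed from the *estimated* conditional state (hypothetical rule)."""
    return -2.0 * np.real(np.trace(sz @ rho_est))

rho_true = np.array([[1, 0], [0, 0]], dtype=complex)   # actual (unknown) conditional state
rho_est = 0.5 * np.eye(2, dtype=complex)               # estimator starts from a maximally mixed guess

for _ in range(5000):
    u = policy(rho_est)                                # control chosen from the estimate only
    # noisy homodyne record generated by the true conditional state
    ex_true = np.real(np.trace((c + c.conj().T) @ rho_true))
    dW_true = rng.normal(0.0, np.sqrt(dt))
    dy = np.sqrt(eta) * ex_true * dt + dW_true
    # estimator sees only dy; its innovation is the record minus its own prediction
    ex_est = np.real(np.trace((c + c.conj().T) @ rho_est))
    dW_est = dy - np.sqrt(eta) * ex_est * dt
    rho_true = sme_step(rho_true, u, dW_true)
    rho_est = sme_step(rho_est, u, dW_est)

print("final <sz>: estimate =", np.real(np.trace(sz @ rho_est)),
      " true =", np.real(np.trace(sz @ rho_true)))
```

In this sketch the controller never sees the raw noisy record directly; it acts on expectation values taken from the estimated conditional state, which is the kind of noise-resistant conditioning the abstract describes.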
Publication: Sangkha Borah, Bijita Sarma, 'No-Collapse Accurate Quantum Feedback Control via Conditional State Tomography', arXiv preprint arXiv:2301.07254
Presenters
-
Sangkha Borah
Max Planck Institute for the Science of Light, Friedrich-Alexander-Universität Erlangen-Nürnberg
Authors
-
Sangkha Borah
Max Planck Institute for the Science of Light, Friedrich-Alexander-Universität Erlangen-Nürnberg
-
Bijita Sarma
Friedrich-Alexander-Universität Erlangen-Nürnberg