Reduction of finite sampling noise in quantum neural networks
ORAL
Abstract
Quantum machine learning utilizes the power of quantum computers to enhance machine learning models. An important approach is quantum neural networks (QNNs), which employ parameterized quantum circuits with data-dependent gates and generate outputs through the evaluation of expectation values. Evaluating expectation values, however, introduces fundamental finite-sampling noise that impedes both the training and the inference of QNNs.
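For context, this noise follows from elementary statistics: the mean of N single-shot measurement outcomes carries a standard error that shrinks only as 1/√N, so reducing the variance of the output directly reduces the shots needed for a given accuracy. In our own notation (not taken from the abstract), for an observable O estimated from N shots,

```latex
% Standard error of a shot-based expectation-value estimate:
% o_k are single-shot outcomes, N the number of shots.
\overline{O}_N = \frac{1}{N}\sum_{k=1}^{N} o_k,
\qquad
\mathrm{SE}\big(\overline{O}_N\big) = \sqrt{\frac{\operatorname{Var}[O]}{N}}.
```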
In this talk, we present a new technique to mitigate finite-sampling noise in QNNs. The method leverages the expressive power of QNNs to additionally reduce the variance of the expectation values during training. We achieve this by incorporating the variance of the QNN output into the training loss function. Notably, this technique requires no additional circuit evaluations when the QNN is suitably constructed.
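Schematically, and in our own notation rather than the paper's, this amounts to a variance-regularized training loss, where L_ML is the ordinary fitting loss and λ a tunable weight:

```latex
% Variance-regularized loss: the second term penalizes the
% finite-sampling variance of the QNN output f(x_i) (notation ours).
\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{ML}}(\theta)
    + \lambda \sum_{i} \operatorname{Var}_{\theta}\!\left[ f(x_i) \right].
```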
Our results demonstrate a substantial reduction of the finite-sampling noise by an order of magnitude, resulting in significantly less noisy outputs. We also propose an optimization procedure that prioritizes minimizing the variance early in the optimization and dynamically adjusts the number of shots required for gradient evaluations (sketched below). This procedure yields a considerable acceleration of training, reduced output noise, and improved QNN predictions.
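The following minimal Python sketch illustrates the flavor of such a procedure. The `qnn.variance` and `qnn.loss_gradient` calls are hypothetical placeholders for shot-based estimators, and the annealing schedule and shot formula are our illustrative assumptions, not the exact procedure of the paper:

```python
import numpy as np

def shots_for_target_error(variance, target_error=0.01, n_min=100, n_max=10_000):
    """Shots N such that the standard error sqrt(variance / N) meets target_error."""
    n = int(np.ceil(variance / target_error**2))
    return int(np.clip(n, n_min, n_max))

def train(qnn, params, data, targets, epochs=100, lr=0.05):
    """Variance-prioritized training loop with a dynamic shot budget."""
    for epoch in range(epochs):
        # Weight the variance penalty strongly at first and anneal it over time.
        lam = max(1.0 - epoch / epochs, 0.1)
        # Estimate the current output variance of the QNN (hypothetical API).
        var = qnn.variance(params, data)
        # Fewer shots suffice once the variance has been trained down.
        shots = shots_for_target_error(np.mean(var))
        # Gradient of the variance-regularized loss (hypothetical API).
        grad = qnn.loss_gradient(params, data, targets, lam=lam, shots=shots)
        params = params - lr * grad
    return params
```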
Finally, we demonstrate the training of a larger QNN on a real quantum device. This optimization is only feasible because the reduced variance diminishes the number of shots required.
* This work was supported by the German Federal Ministry of Education and Research through the project H2Giga-DegradEL3 (grant no. 03HY110D).
Publication: Kreplin, David A., and Marco Roth. "Reduction of finite sampling noise in quantum neural networks." arXiv preprint arXiv:2306.01639 (2023).
Presenters
- David Kreplin, Fraunhofer IPA
Authors
- David Kreplin, Fraunhofer IPA
- Marco Roth, Fraunhofer IPA