Classical algorithm for simulating experimental Gaussian boson sampling
ORAL
Abstract
Gaussian boson sampling is a promising candidate for demonstrating experimental quantum advantage, and several large-scale Gaussian boson sampling experiments have been performed to claim such an advantage. Even though these experiments suffer from a large amount of photon loss, they claim a tremendous computational advantage (at least 100 years) over state-of-the-art supercomputers. In this talk, we present a new classical tensor-network algorithm that exploits the large loss rate to significantly reduce the computational cost of simulating experimental Gaussian boson sampling. The main observation is that when photon loss occurs, many of the input photons become thermalized, so they can be simulated efficiently. Based on this observation, we construct a classical algorithm using a newly proposed quantum-optical decomposition of lossy Gaussian states and the matrix product state method. Using the algorithm, we simulate the state-of-the-art Gaussian boson sampling experiments, generate 10 million samples in around 1 hour using up to 288 GPUs, and demonstrate that our classical sampler outperforms the experimental Gaussian boson sampler. We also analytically prove the scaling of our classical algorithm as the system size grows. Details can be found in Ref. [1].
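The thermalization observation can be illustrated for a single lossy squeezed mode. The following NumPy sketch uses only standard Gaussian-state algebra (it is an illustration, not the algorithm or code of Ref. [1]): a squeezed vacuum sent through a loss channel is equivalent to a weakly squeezed thermal state, and as the transmissivity drops, the effective squeezing shrinks while a thermal component remains.

```python
import numpy as np

def lossy_squeezed_decomposition(r, eta):
    """Decompose a lossy single-mode squeezed vacuum into an
    effective squeezed thermal state (vacuum variance = 1 convention).

    Squeezed-vacuum quadrature variances: (e^{2r}, e^{-2r}).
    A loss channel with transmissivity eta maps V -> eta*V + (1-eta)*I.
    The result diag(a, b) equals squeezing by s_eff applied to a
    thermal state with symplectic eigenvalue nu = sqrt(a*b) >= 1.
    """
    a = eta * np.exp(2 * r) + (1 - eta)   # anti-squeezed variance after loss
    b = eta * np.exp(-2 * r) + (1 - eta)  # squeezed variance after loss
    s_eff = 0.25 * np.log(a / b)          # effective squeezing parameter
    nu = np.sqrt(a * b)                   # thermal symplectic eigenvalue
    n_th = (nu - 1) / 2                   # mean thermal photon number
    return s_eff, n_th

# With 70% loss, the effective squeezing is far below the input value,
# so a large fraction of the surviving photons are thermal:
r, eta = 1.0, 0.3
s_eff, n_th = lossy_squeezed_decomposition(r, eta)
print(f"effective squeezing {s_eff:.3f} (input {r}), thermal photons {n_th:.3f}")
```

Because the thermal part is easy to handle classically, a tensor-network simulator only needs to capture the weakly squeezed core, which is the intuition behind the reduced simulation cost in the lossy regime.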
[1] Changhun Oh, Minzhao Liu, Yuri Alexeev, Bill Fefferman, Liang Jiang, arXiv:2306.03709 (2023)
Publication: Changhun Oh, Minzhao Liu, Yuri Alexeev, Bill Fefferman, Liang Jiang, Tensor network algorithm for simulating experimental Gaussian boson sampling, arXiv:2306.03709 (2023)
Presenters
Changhun Oh, University of Chicago

Authors
Changhun Oh, University of Chicago