A Simple Statistical Model for Randomized Benchmarking Data

ORAL

Abstract

Randomized Benchmarking (RB) techniques are often used to probe the capabilities of quantum processors. RB consists of running random circuits at varied circuit depths; the mathematical theory underpinning RB shows that the mean success probability of RB circuits decays exponentially in the depth of the circuit. However, RB theory makes no claims about the distribution of success probabilities for RB circuits at a given depth. This makes statistically rigorous analysis of RB data difficult; as a result, RB data is typically analyzed with ad hoc curve fitting. In our work, we solve this problem with a simple statistical model for RB data. We model RB success probabilities with a Beta distribution whose mean decays exponentially in circuit depth and whose variance is modelled by a few-parameter ansatz function. This few-parameter statistical model enables maximum likelihood estimation (MLE) of RB error rates with rigorous confidence intervals. Our analysis is easily adaptable to a plethora of RB techniques, including standard/Clifford-group RB, direct RB, binary RB, and (a slight adaptation of) mirror RB. Consequently, we demonstrate how it can enable ‘super-efficient’ RB of many-qubit processors.
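To make the idea concrete, the sketch below illustrates the kind of model the abstract describes: per-circuit success probabilities at each depth are treated as Beta-distributed, with a mean that decays exponentially in depth and a variance given by a few-parameter ansatz, and the parameters are fit by maximum likelihood. This is not the authors' code; the particular variance ansatz (v * mean * (1 - mean)), the parameter names (A, B, p, v), and the toy data are illustrative assumptions only.

    # Minimal illustrative sketch, assuming a constant-fraction variance ansatz
    # and an A*p^d + B decay of the mean; not the authors' implementation.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import beta as beta_dist

    def beta_shapes(mean, var):
        """Convert a (mean, variance) pair into Beta(a, b) shape parameters."""
        nu = mean * (1.0 - mean) / var - 1.0  # effective "sample size"
        return mean * nu, (1.0 - mean) * nu

    def neg_log_likelihood(theta, depths, success_probs):
        """theta = (A, B, p, v): mean A*p^d + B, variance v*mean*(1-mean)."""
        A, B, p, v = theta
        nll = 0.0
        for d, probs in zip(depths, success_probs):
            mean = A * p**d + B
            if not (0.0 < mean < 1.0) or not (0.0 < v < 1.0):
                return np.inf  # reject parameter values outside the valid Beta range
            var = v * mean * (1.0 - mean)  # assumed few-parameter variance ansatz
            a, b = beta_shapes(mean, var)
            nll -= beta_dist.logpdf(probs, a, b).sum()
        return nll

    # Toy RB-like data: 30 random circuits per depth, true decay rate p = 0.97.
    rng = np.random.default_rng(0)
    depths = [2, 8, 32, 128]
    true_mean = lambda d: 0.5 * 0.97**d + 0.5
    success_probs = [
        rng.beta(*beta_shapes(true_mean(d),
                              0.05 * true_mean(d) * (1.0 - true_mean(d))), size=30)
        for d in depths
    ]

    # Maximum likelihood estimation of (A, B, p, v); confidence intervals could then
    # be obtained, e.g., from the profile likelihood or observed Fisher information.
    result = minimize(neg_log_likelihood, x0=[0.5, 0.5, 0.9, 0.1],
                      args=(depths, success_probs), method="Nelder-Mead")
    A_hat, B_hat, p_hat, v_hat = result.x
    print(f"Estimated RB decay rate p = {p_hat:.4f}")

Because the full per-circuit distribution is modelled rather than just the depth-averaged means, the likelihood uses every circuit outcome directly, which is what permits principled confidence intervals instead of ad hoc curve fitting.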



SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.

Presenters

  • Samyak P Surti

    University of California, Berkeley

Authors

  • Samyak P Surti

    University of California, Berkeley

  • Jordan Hines

    University of California, Berkeley

  • Daniel Hothem

    Sandia National Laboratories

  • Timothy J Proctor

    Sandia National Laboratories