Self-Learning Monte Carlo Method with Equivariant Neural Networks: Progress in Electron-phonon Hamiltonians
ORAL
Abstract
The Self-Learning Monte Carlo (SLMC) method accelerates Monte Carlo simulations by replacing costly configuration-update evaluations with a scalable, machine-learned proxy. This model is trained on thousands of trial updates from standard Monte Carlo simulations on a smaller lattice and is then applied to larger lattices for more efficient simulations. However, several aspects of this approach, such as extrapolation accuracy to larger systems, autocorrelation inheritance, training-data locality assumptions, and symmetry enforcement, remain less explored. In this work, we examine these challenges using an electron-phonon Hamiltonian and determinant quantum Monte Carlo as test cases. We highlight the superior performance of equivariant neural networks over previously used learning models and compare extrapolation accuracy on the largest lattices we can simulate. Drawing inspiration from the molecular dynamics community, we also compare the performance of SLMC with force-based updates against that of more traditional energy-based updates.
* This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award Number DE-SC0022311.
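As a rough illustration of the SLMC idea summarized above (not the authors' implementation), the following is a minimal sketch of a Metropolis sweep in which a cheap, learned effective energy stands in for the expensive determinant quantum Monte Carlo weight. The function names, the placeholder surrogate energy, and the local update scheme are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of a Self-Learning Monte Carlo (SLMC) acceptance step.
# `learned_energy` is a stand-in for a trained surrogate model (e.g. an
# equivariant neural network); it does not correspond to the authors' code.

rng = np.random.default_rng(0)

def learned_energy(phonon_field):
    """Surrogate effective energy; placeholder for a trained network."""
    return 0.5 * np.sum(phonon_field**2)  # toy quadratic form

def propose_update(phonon_field, step=0.5):
    """Local trial move: perturb a single lattice site."""
    site = rng.integers(phonon_field.size)
    trial = phonon_field.copy()
    trial.flat[site] += rng.normal(scale=step)
    return trial

def slmc_sweep(phonon_field, beta=1.0, n_steps=1000):
    """Metropolis sweep using the cheap learned energy for acceptance."""
    e_old = learned_energy(phonon_field)
    for _ in range(n_steps):
        trial = propose_update(phonon_field)
        e_new = learned_energy(trial)
        # Accept/reject with the surrogate weight exp(-beta * E_eff)
        if rng.random() < np.exp(-beta * (e_new - e_old)):
            phonon_field, e_old = trial, e_new
    return phonon_field

field = rng.normal(size=(8, 8))  # toy 8x8 phonon field configuration
field = slmc_sweep(field)
```

In the full SLMC scheme, a block of such surrogate-driven updates is subsequently accepted or rejected using the exact model weight, so that the sampling remains unbiased despite the approximate proxy.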
Presenters
-
Philip M Dee
University of Tennessee
Authors
-
Philip M Dee
University of Tennessee
-
Benjamin Cohen-Stead
University of Tennessee Knoxville
-
Steven S Johnston
University of Tennessee