Reliable Transfer Learning for Quantum Phase Transitions

POSTER

Abstract

Predicting physical behavior beyond the training data distribution is common in condensed matter physics but risky without guarantees. We study extrapolation of the mean-field free energy and critical temperature in a simplified Ising model using (i) Gaussian processes (GPs), (ii) multivariate polynomial regression, and (iii) neural networks (MLPs). Building on recent transfer results for low-degree polynomials \citep{Kalavasis2024}, we compute explicit extrapolation bounds when the training and test regions are disjoint in temperature. Empirically, degree-$d$ polynomials trained only on the high-temperature phase extrapolate the free energy accurately into the low-temperature phase, with errors well below theoretical limits despite the bounds scaling as $d^d$. GPs achieve the highest accuracy but offer limited interpretability, while MLPs fit the training domain well yet degrade out-of-domain. Extending this analysis, we connect Carbery–Wright anti-concentration and Remez-type inequalities to bound more tightly how polynomial residuals can spread under domain shift, yielding refined $L_1$/$L_2$ extrapolation guarantees. These results indicate that low-degree polynomial predictors—unlike expressive black-box models—admit mathematically controlled transfer across phases. Our broader goal is a unified framework for reliable transfer learning in physics: establishing provably small extrapolation errors for low-complexity models, validating these bounds across canonical phase-transition systems, and formulating a practical algorithmic recipe for experimentalists to extrapolate physical laws beyond observed regimes.
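The polynomial experiment described above can be sketched in a few lines. This is an illustrative toy, not the authors' actual pipeline: it assumes mean-field units with coupling $Jz = 1$ (so $T_c = 1$), solves the self-consistency equation $m = \tanh(Jz\,m/T)$ by fixed-point iteration, fits a low-degree polynomial in $T$ to the free energy on the high-temperature phase only, and then evaluates it in the low-temperature phase. The degree, grids, and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

Jz = 1.0  # mean-field coupling; T_c = Jz in these units (illustrative choice)

def magnetization(T, iters=200):
    # Solve the self-consistency equation m = tanh(Jz * m / T)
    # by fixed-point iteration from a symmetry-broken start.
    m = 0.99
    for _ in range(iters):
        m = np.tanh(Jz * m / T)
    return m

def free_energy(T):
    # Mean-field variational free energy per spin at the self-consistent m.
    m = magnetization(T)
    return 0.5 * Jz * m**2 - T * np.log(2.0 * np.cosh(Jz * m / T))

# Train only on the high-temperature phase (T > T_c = 1).
T_train = np.linspace(1.1, 2.0, 50)
f_train = np.array([free_energy(T) for T in T_train])
coeffs = np.polyfit(T_train, f_train, deg=4)  # degree-4 polynomial fit

# Extrapolate into the low-temperature phase and measure the worst-case error.
T_test = np.linspace(0.5, 0.9, 20)
f_test = np.array([free_energy(T) for T in T_test])
err = np.max(np.abs(np.polyval(coeffs, T_test) - f_test))
print(f"max extrapolation error in the low-T phase: {err:.3e}")
```

Replacing `np.polyfit` with a GP or an MLP regressor fit to the same `(T_train, f_train)` data gives the out-of-domain comparison the abstract describes.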

Presenters

  • Jeffrey Wei

    Yale University

Authors

  • Jeffrey Wei

    Yale University