"Perfect crime" of machine-learning potentials: 100-fold speed-up with no detectable trace of using machine learning in the final result
ORAL
Abstract
Machine-learning interatomic potentials have shown significant progress in accelerating atomistic modeling while preserving near-DFT accuracy. To use such potentials, one must prepare a training dataset of atomistic configurations evaluated with DFT. This step can be automated by active learning, enabling algorithms that automatically predict materials properties with near-DFT accuracy and speed-ups of a few orders of magnitude. The only downside of such algorithms is the numerical error in the final answer arising from the deviation of the machine-learning potential from DFT. In my talk, I will show that, in some applications, one can develop algorithms that are free even from that numerical error. For the problem of finding thermodynamically stable ternary alloy structures, I will present an algorithm that screens out high-energy structures, accelerating a baseline DFT-based high-throughput search by a factor of 100 while leaving zero error in the final answer compared to DFT. This alludes to the "perfect crime": machine-learning potentials offer very large speed-ups, yet the final result is indistinguishable from the one obtained by pure DFT.
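The screening strategy described above can be sketched in a few lines. This is a toy illustration, not the actual MLIP/DFT workflow: the energy functions, the error bound `EPS`, and the safety `margin` are all hypothetical assumptions. The key point it demonstrates is that if the ML error is bounded by `EPS`, keeping every structure within `2*EPS` of the best ML energy guarantees the true DFT minimum survives screening, so the final answer carries zero ML-induced error.

```python
import random

random.seed(0)

EPS = 0.05  # assumed bound on |ML - DFT| energy error (hypothetical)

def dft_energy(x):
    """Stand-in for an expensive DFT evaluation (toy quadratic, minimum at 0.3)."""
    return (x - 0.3) ** 2

def ml_energy(x):
    """Cheap ML surrogate: DFT energy plus a bounded random error."""
    return dft_energy(x) + random.uniform(-EPS, EPS)

def screen_then_dft(structures, margin=2 * EPS):
    # 1) cheap ML pass over all candidate structures
    ml = {x: ml_energy(x) for x in structures}
    best_ml = min(ml.values())
    # 2) keep only structures within the safety margin of the best ML energy;
    #    margin >= 2*EPS guarantees the true DFT minimum is never discarded
    survivors = [x for x in structures if ml[x] <= best_ml + margin]
    # 3) expensive DFT pass only on the survivors
    return min(survivors, key=dft_energy), len(survivors)

structures = [i / 100 for i in range(101)]
winner, n_dft = screen_then_dft(structures)
# same answer as brute-force DFT over all structures, with far fewer DFT calls
assert winner == min(structures, key=dft_energy)
print(winner, n_dft, len(structures))
```

In this sketch the speed-up is simply `len(structures) / n_dft`; in the talk's setting, the analogous screening of a ternary-alloy structure pool yields the reported factor of 100.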
Presenters
-
Alexander Shapeev
Skolkovo Institute of Science and Technology
Authors
-
Konstantin Gubaev
Skolkovo Institute of Science and Technology
-
Evgeny Podryabinkin
Skolkovo Institute of Science and Technology
-
Gus Hart
Department of Physics and Astronomy, Brigham Young University
-
Alexander Shapeev
Skolkovo Institute of Science and Technology