Machine-learning representation of physics models via a structured self-attention network
ORAL
Abstract
Recently, machine learning techniques, especially deep neural networks, have been widely used to identify phases and phase transitions and to speed up simulations in a variety of physical models. However, deep neural networks are often required to model the energy or partition function, which incurs a heavy computational cost and limits their application to large systems. In this work, we propose a structured self-attention neural network approach, inspired by mean-field theory, to represent the original Hamiltonian via well-structured neural layers. We experiment with the ring exchange Ising model and the double exchange model, and show that both models can be well represented by pseudo-spin layers and local-field layers. In contrast to a sequential neural network, which stores all the information in every single layer, this separation strategy achieves significantly shorter training times, higher accuracy, and straightforward application to large systems. We therefore believe the new structured network will also be highly efficient for identifying different phases and accelerating numerical simulations of even more complex models.
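To illustrate the separation strategy described above, the following is a minimal sketch in PyTorch, assuming a simple fully connected realization: a local-field branch predicts an effective field per site, a pseudo-spin branch maps the configuration onto coarse degrees of freedom, and their product approximates the energy in the spirit of a mean-field decomposition E ≈ Σ_i h_i s_i. The class name StructuredEnergyNet, the layer shapes, and the product form of the energy are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the two-branch separation idea; not the authors' code.
import torch
import torch.nn as nn

class StructuredEnergyNet(nn.Module):
    def __init__(self, n_sites: int, hidden: int = 32):
        super().__init__()
        # Local-field branch: predicts an effective field h_i at every site.
        self.local_field = nn.Sequential(
            nn.Linear(n_sites, hidden), nn.Tanh(), nn.Linear(hidden, n_sites)
        )
        # Pseudo-spin branch: maps the raw spins to pseudo spins s_i in [-1, 1].
        self.pseudo_spin = nn.Sequential(
            nn.Linear(n_sites, hidden), nn.Tanh(),
            nn.Linear(hidden, n_sites), nn.Tanh()
        )

    def forward(self, spins: torch.Tensor) -> torch.Tensor:
        # Mean-field-like energy estimate: E = sum_i h_i(sigma) * s_i(sigma).
        h = self.local_field(spins)
        s = self.pseudo_spin(spins)
        return (h * s).sum(dim=-1)

# Usage: evaluate the energy of a batch of random Ising configurations.
model = StructuredEnergyNet(n_sites=64)
spins = torch.randint(0, 2, (16, 64)).float() * 2 - 1  # configs in {-1, +1}
energy = model(spins)                                   # shape (16,)

Because the two branches are independent, each can be trained and scaled separately, which is the source of the shorter training times and easier transfer to larger systems claimed in the abstract.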
Presenters
-
Junwei Liu
Department of Physics, The Hong Kong University of Science and Technology
Authors
-
Junwei Liu
Department of Physics, The Hong Kong University of Science and Technology
-
Yang Zhang
Max Planck Institute for Chemical Physics of Solids
-
Yujun Zhao
Hong Kong University of Science and Technology