Revisiting Scaling Assumptions of Ab-Initio Calculations

ORAL

Abstract

Density functional theory and ab initio calculations dominate the share of computational physics workloads on high performance compute clusters across the world. Much of the code driving these calculations (and the related quantum chemistry calculations) is closed source (e.g., ORCA) or even commercial (e.g., VASP). The scaling of these methods is typically taken to be O(N^3) in time, where N is the number of electrons. Of late, machine learning models, often built on automatic differentiation, have been touted as viable alternatives to true ab initio calculations. We will present the poor scaling of machine learning techniques in space and time, and demonstrate how advances in computational methods and hardware (GPUs, TPUs) have made these "back of the envelope" scaling estimates obsolete. Additionally, we present results using complex numbers to generate near-analytic function and gradient values with high accuracy, at a fraction of the computational effort of finite difference methods of comparable accuracy, and without the high memory usage of automatic differentiation (which stores a complete computational graph for each calculation). We expect these results to prevent further misinformed discussions on the scaling of ab initio methods and to clarify the true scaling of these methods when implemented with modern techniques on modern hardware.
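The complex-number approach described above is presumably the complex-step derivative; the abstract does not name it or give code, so the following is only a minimal sketch in Python/NumPy (not the authors' implementation), with the function name and test function chosen for illustration. It shows why the approach has no subtractive cancellation (unlike finite differences) and stores no computational graph (unlike reverse-mode automatic differentiation).

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Approximate f'(x) as Im[f(x + i*h)] / h.

    There is no subtraction of nearly equal numbers, so h can be taken
    far below sqrt(machine epsilon) and the derivative is accurate to
    roughly machine precision, at the cost of one complex evaluation of f.
    """
    return np.imag(f(x + 1j * h)) / h

# Illustrative test function (hypothetical choice for this sketch):
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

cs = complex_step_derivative(f, 1.5)
fd = (f(1.5 + 1e-8) - f(1.5 - 1e-8)) / 2e-8  # central finite difference
print(f"complex-step   : {cs:.16f}")
print(f"finite diff    : {fd:.16f}")
```

In this sketch the complex-step value agrees with the analytic derivative to machine precision, whereas the central difference is limited to roughly half the available digits by cancellation error, with memory usage identical to a plain function evaluation in both cases.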

* Partially supported by the Icelandic Research Fund, grant no. 217436-052.

Publications:
1. On Scaling Derivatives in Machine Learned Force Fields (to be published), J. Comp. Phys.
2. Dispelling Myths of Ab-Initio Scaling (to be published), J. Comp. Chem.

Presenters

  • Rohit Goswami

    Science Institute, University of Iceland & Quansight Labs, TX

Authors

  • Rohit Goswami

    Science Institute, University of Iceland & Quansight Labs, TX