A case study in neural networks for scientific data: generating atomic structures

Invited

Abstract

Expertise in both the scientific domain of interest and deep learning techniques is essential to properly translate scientific problems into tasks amenable to deep learning. Scientific data is rich in context; for example, the laws of physics obey certain symmetries. Can a network learn this context from the data, or should we impose it as constraints on our networks or training procedures? Additionally, the data representations, neural network operations, and loss functions appropriate for scientific applications can be very different from those most prevalent in the deep learning literature; when is it appropriate to use existing methods, and when is it necessary to develop new ones?

We present examples of these challenges when applying deep learning techniques to the generation of atomic systems (new atomic arrangements that may be crystals, molecules, nanoclusters, polymers, proteins, etc.). We present a novel rotation-equivariant convolutional neural network -- or tensor field network -- that has the ability to articulate, recognize and differentiate local and global features in any orientation in complex atomic systems. We discuss strategies for generating hypothetical atomic structures using the concepts of geometric motifs (the recurring patterns of atoms in materials) and neural networks that can manipulate discrete geometry. We present the use of toy models to test the expressiveness and accuracy of tensor field network operations.
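The toy-model testing mentioned above can be illustrated with a minimal sketch (not the actual tensor field network implementation; the layer below is a hypothetical stand-in): a simple point operation that outputs one vector per atom, built from relative positions weighted by a rotation-invariant radial function, checked for rotation equivariance, i.e. f(Rx) = R f(x).

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a random matrix gives an orthogonal matrix;
    # flip a column if needed so the determinant is +1 (a proper rotation).
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def vector_output(points):
    # Toy equivariant operation: for each point, sum relative position
    # vectors to all other points, weighted by a radial (and therefore
    # rotation-invariant) function of the interpoint distance.
    diffs = points[None, :, :] - points[:, None, :]        # (N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1, keepdims=True)  # (N, N, 1)
    weights = np.exp(-dists**2)                            # radial filter
    return (weights * diffs).sum(axis=1)                   # (N, 3)

rng = np.random.default_rng(0)
points = rng.standard_normal((5, 3))   # a random 5-atom "toy structure"
R = random_rotation(rng)

# Equivariance check: rotating the input should rotate the output.
out_of_rotated_input = vector_output(points @ R.T)
rotated_output = vector_output(points) @ R.T
assert np.allclose(out_of_rotated_input, rotated_output)
```

Because the filter depends only on distances (which rotations preserve) and the output is assembled from relative position vectors (which rotate with the input), the equivariance test passes for any rotation, which is the kind of property such toy models are used to verify.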

Presenters

  • Tess Smidt

    Computational Research Division, Lawrence Berkeley National Laboratory

Authors

  • Tess Smidt

    Computational Research Division, Lawrence Berkeley National Laboratory