The Ensemble Peels the Onion: Emergent Learning via Sequential Error Mode Reduction

ORAL

Abstract

Machine learning methods typically use gradient descent – a global, computationally intensive algorithm – to optimally modify every parameter. Our learning meta-materials [1-3] perform machine learning differently: each element follows local update rules, and global learning emerges from these dynamics. This is a feature shared with the brain, albeit with different rules and dynamics. Here, we investigate emergent learning in a physically breadboarded nonlinear electronic learning meta-material, and show that polynomial modes of the error are reduced in order: first the mean, then the slope, then the curvature. This ordering persists regardless of the relative sizes of these modes. Our circuitry trains in seconds and performs learned tasks in microseconds, dissipating picojoules of energy across each element. Future versions have enormous potential to be faster and more efficient than state-of-the-art machine learning solutions, while providing additional benefits such as robustness to manufacturing defects.
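
As a concrete illustration of the mode decomposition, the sketch below projects a sampled error signal onto its first three polynomial (Legendre) modes, whose coefficients correspond to the mean, slope, and curvature of the error. This is a minimal sketch for intuition only, not the authors' analysis code; the sampled inputs, the example error profile, and the choice of Legendre polynomials are illustrative assumptions.

```python
import numpy as np

# Hypothetical snapshot of the network's output error, sampled at test
# inputs x on a normalized interval [-1, 1] during training.
x = np.linspace(-1.0, 1.0, 21)
error = 0.5 - 0.3 * x + 0.8 * (1.5 * x**2 - 0.5)  # assumed example profile

# Project onto the first three Legendre polynomials:
# P0(x) = 1 (mean), P1(x) = x (slope), P2(x) = (3x^2 - 1)/2 (curvature).
mean_mode, slope_mode, curvature_mode = np.polynomial.legendre.legfit(x, error, deg=2)

print(f"mean: {mean_mode:+.3f}  slope: {slope_mode:+.3f}  curvature: {curvature_mode:+.3f}")
```

Tracking these three coefficients over the course of training would make the reported ordering visible: the mean amplitude decays first, then the slope, then the curvature, independent of their initial relative sizes.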



[1] S. Dillavou et al., Phys. Rev. Applied (2022). [2] M. Stern et al., Phys. Rev. Research (2022). [3] J. F. Wycoff et al., J. Chem. Phys. (2022).

* UPenn MRSEC/DMR-1720530, MRSEC/DMR-2309043, DMR-2005749, Simons Foundation #327939, DOE DE-SC0020963, UPenn DDDI.

Presenters

  • Sam J Dillavou

    University of Pennsylvania

Authors

  • Sam J Dillavou

    University of Pennsylvania

  • Benjamin D Beyer

    University of Pennsylvania

  • Menachem Stern

    University of Pennsylvania

  • Marc Z Miskin

    University of Pennsylvania

  • Andrea J Liu

    University of Pennsylvania

  • Douglas J Durian

    University of Pennsylvania