Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling
POSTER
Abstract
Memory models play a pivotal role in elucidating the mechanisms through which biological and artificial neural networks store and retrieve information. Traditionally, these models assume that memories are pre-determined, fixed before inference, and stored within synaptic interactions. Yet neural networks can also dynamically store, within their activity, memories that become available only during inference. This capacity to bind and manipulate information as variables enhances the generalization capabilities of neural networks. Our research introduces and explores the concept of "variable memories." This approach extends conventional sequence memory models by enabling information to be bound directly in network activity. By adopting this novel memory perspective, we unveil the computational processes encoded in the learned weights of recurrent neural networks trained on simple algorithmic tasks -- a fundamental question in the mechanistic understanding of neural networks. Our results underscore the imperative to evolve memory models beyond the fixed memory assumption towards more dynamic and flexible memory systems to further our understanding of neural information processing.
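To make the distinction between weight-based and activity-based storage concrete, the following is only an illustrative sketch (not the authors' model): a small NumPy example of a recurrent network whose weights merely route information, so the stored "variable" lives entirely in the network's activity and can be presented and retrieved at inference time. The permutation-based construction and all names here are assumptions chosen for illustration.

```python
# Minimal sketch (illustrative, not the paper's implementation): a linear RNN
# whose recurrent weights only *move* information between activity slots, so
# the memory content itself is held in the activity, not in the weights.
import numpy as np

d = 4          # size of the stored variable
delay = 6      # number of recurrent steps before retrieval

# Recurrent weight: a block permutation that cycles `delay` slots of size d.
# After `delay` applications it acts as the identity, so whatever was written
# into the first slot reappears there unchanged.
n = d * delay
W = np.zeros((n, n))
for i in range(delay):
    j = (i + 1) % delay
    W[j * d:(j + 1) * d, i * d:(i + 1) * d] = np.eye(d)

rng = np.random.default_rng(0)
x = rng.normal(size=d)           # the "variable", presented only at inference time

h = np.zeros(n)
h[:d] = x                        # bind the input into the first activity slot
for _ in range(delay):
    h = W @ h                    # pure routing: the weights never encode x itself

retrieved = h[:d]
print(np.allclose(retrieved, x)) # True: the value was stored in activity alone
```

The weights here are fixed and contain no trace of `x`; swapping in a different input at inference time works without any change to the network, which is the flexibility that the fixed-memory assumption rules out.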
Publication: Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks. Submitted to The Twelfth International Conference on Learning Representations (ICLR), 2024.
Episodic Memory Theory of Recurrent Neural Networks: Insights into Long-Term Information Storage and Manipulation. Proceedings of the 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML) at the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA, 2023.
Presenters
- Arjun Karuvally, University of Massachusetts Amherst
Authors
- Arjun Karuvally, University of Massachusetts Amherst
- Hava T Siegelmann, University of Massachusetts Amherst