Precise Spatial Memory in Local Random Networks
ORAL
Abstract
Self-sustained, elevated neuronal activity that persists on time scales of ten seconds or longer is vital for working memory. The most prevalent models of persistent activity, attractor networks, have been criticized for their heavy reliance on fine-tuning of the synaptic architecture. Alternative frameworks exist, but many of them invoke fine-tuning implicitly. Here we present a model with local connectivity that, combined with a global regulation of the mean firing rate, produces localized, finely spaced discrete attractors that persist in time and effectively span a planar manifold. Because synaptic strengths are drawn randomly, the model remains minimally structured, requires no training or low-level fine-tuning to store memories, and may be useful for modeling biological phenomena such as visuospatial working memory in two dimensions.
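As a rough illustration of the mechanism sketched in the abstract, the code below simulates a rate network on a 2D grid with randomly drawn local excitatory weights and a global inhibitory term proportional to the mean firing rate; after a transient localized cue is removed, a bump of activity can persist at the cued location. This is not the authors' implementation: the grid size, connection radius, recurrent gain, and regulation strength (L, RADIUS, GAIN, G_INH) are all illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the authors' model): a rate
# network on a 2D grid with random local excitation and global
# regulation of the mean firing rate.
import numpy as np

rng = np.random.default_rng(0)

L = 30                     # neurons per grid side; N = L * L (assumed)
N = L * L
RADIUS = 2.0               # local connection radius (assumed)
GAIN = 1.3                 # total recurrent gain per neuron (assumed)
G_INH = 4.0                # strength of global mean-rate regulation (assumed)
TAU, DT = 10.0, 1.0        # time constant and Euler step, arbitrary units

# Positions on a 2D grid and all pairwise distances.
xy = np.array(np.unravel_index(np.arange(N), (L, L)), dtype=float).T
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

# Random local excitation: weights drawn uniformly at random, nonzero
# only between grid neighbours within RADIUS; each row scaled to GAIN.
W = np.where((dist > 0) & (dist <= RADIUS),
             rng.uniform(0.0, 1.0, (N, N)), 0.0)
W *= GAIN / W.sum(axis=1, keepdims=True)

def step(r, ext=0.0):
    """One Euler step of tau dr/dt = -r + f(W r - g_inh * <r> + ext)."""
    drive = W @ r - G_INH * r.mean() + ext
    return r + (DT / TAU) * (-r + np.clip(drive, 0.0, 1.0))

# Transient localized cue centred on the middle of the grid.
cue = np.exp(-dist[(L // 2) * L + L // 2] ** 2 / 8.0)

r = np.zeros(N)
for t in range(3000):
    r = step(r, cue if t < 300 else 0.0)   # cue on briefly, then removed

# With suitable parameters, a localized bump outlives the cue: elevated
# rates near the cued site and near-zero activity elsewhere.
print(f"peak rate long after cue removal: {r.max():.3f}")
print(f"fraction of strongly active units: {(r > 0.5 * r.max()).mean():.3f}")
```

Here the global subtraction of the mean rate plays the role of the mean-rate regulation described in the abstract, capping how far activity can spread and thereby keeping the bump localized; the local random weights supply the persistence.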
Presenters
- Joseph Natale, Emory University

Authors
- Joseph Natale, Emory University
- H G E Hentschel, Emory University
- Ilya Nemenman, Physics, Emory University