Investigating neuronal populations that are truly large – can we keep our models small?
ORAL
Abstract
Recent technological breakthroughs in large-scale neural recording have ushered in a new era in which we can simultaneously monitor the activity of thousands of neurons. To write down minimal models for the collective behavior of these large populations of cells, we seek theoretical approaches that help us simplify their rich dynamics. We focus on the neural activity underlying the behavior of mice running in a virtual environment, and model the complex activity of more than a thousand simultaneously active neurons in hippocampus. First, we show that we can reliably build minimal (maximum entropy) models for different subsets of neurons drawn from the whole population. These models of smaller networks tend to have more predictive power, and to behave more similarly to one another, when the participating cells are in spatial proximity. Next, we look at the correlation matrix of the system as a whole, and explore methods from random matrix theory that may allow us to recover estimates of the true eigenvalue spectrum of these correlations. Finally, we use different coarse-graining methods, in the spirit of the renormalization group, to uncover macroscopic features of the large network. We find hints that the behavior of the system is controlled by a non-trivial fixed point.
–
Presenters
-
Leenoy Meshulam
Princeton University
Authors
-
Leenoy Meshulam
Princeton University
-
Jeffrey Gauthier
Princeton University
-
Carlos Brody
Princeton University
-
David Tank
Princeton University
-
William Bialek
Department of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University; The Graduate Center, CUNY