A Comprehensive Large-Scale Model of the Primary Visual Cortex (V1)
ORAL
Abstract
The majority of mean-field models of the primary visual cortex (V1) are limited in their capacity to represent local features adequately, or are unable to reconcile key experimental observations within a single framework. In this work, we introduce a comprehensive retinotopic model of V1 that overcomes these limitations. Our model is based on ORGaNICs, a stochastic recurrent circuit framework that implements divisive normalization via recurrent amplification, and it incorporates long-range connections. Specifically, we simulate the membrane potentials and firing rates of simple and complex V1 neurons driven by the outputs of a steerable pyramid, thus capturing the spatial frequency, size, and orientation tuning of the neurons. Then, using the theory of stochastic LTI systems, we demonstrate that, for a grating stimulus, the circuit oscillates at the observed (gamma) frequency and accurately captures the contrast dependence of gamma activity, Fano factor, noise correlations, and LFP coherence (measured across neuronal populations at different spatial locations, with different orientation tuning and receptive field sizes). Finally, we predict these quantities for realistic stimuli: Gabor patches, plaids, and natural images.
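To illustrate the core idea of divisive normalization arising from recurrent dynamics, here is a minimal sketch. It is not the ORGaNICs circuit from this work (whose equations, noise model, and connectivity are not reproduced here); it only shows a toy dynamical system whose fixed point is the standard normalization equation, where each unit's squared drive is divided by the pooled drive plus a semi-saturation constant. The function names and parameters are illustrative assumptions.

```python
import numpy as np

def normalize(z, sigma=1.0):
    """Closed-form divisive normalization: r_j = z_j^2 / (sigma^2 + sum_k z_k^2)."""
    z2 = z.astype(float) ** 2
    return z2 / (sigma**2 + z2.sum())

def recurrent_normalize(z, sigma=1.0, tau=1.0, dt=0.01, steps=5000):
    """Toy recurrent dynamics (illustrative, not ORGaNICs):
    tau * dr/dt = -(sigma^2 + sum_k z_k^2) * r + z^2,
    integrated with forward Euler; its fixed point is normalize(z)."""
    z2 = z.astype(float) ** 2
    pool = sigma**2 + z2.sum()   # divisive pool acts as an effective leak
    r = np.zeros_like(z2)
    for _ in range(steps):
        r += (dt / tau) * (-pool * r + z2)
    return r
```

A useful property to check against such a model is contrast invariance of tuning: scaling the input `z` by a contrast factor changes the overall gain but leaves the shape of the response profile (the ratios across units) unchanged.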
* This work was supported by NIH grant 1R01EY035242-01.
Presenters
-
Shivang Rawat
New York University
Authors
-
Shivang Rawat
New York University
-
David J Heeger
New York University
-
Stefano Martiniani
New York University