Nonlinear classification of neural manifolds with context information: geometrical properties and storage capacity
ORAL
Abstract
Understanding how neural systems process information through high-dimensional representations is a fundamental challenge at the interface of theoretical neuroscience and machine learning. A commonly adopted approach to this problem is to analyze the statistical and geometrical attributes that link neural activity to task implementation in high-dimensional spaces. Here, we explore an analytically solvable classification model that derives its decision rules from a collection of input-dependent "expert" neurons, each associated with a distinct context through a half-space gating mechanism. This formulation makes it possible to consider non-linearly separable tasks. We investigate the interplay between the geometry of object representations and the correlations within the context functions. By examining these connections, we aim to elucidate how these properties influence the disentanglement of representations, which we measure through the storage capacity.
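Purely as an illustration of the kind of architecture the abstract describes (and not the authors' analytical model), the sketch below builds a toy classifier from context-gated "expert" readouts, where each expert is switched on by a half-space condition on the input. The dimensions, random labels, and perceptron-style training loop are assumptions introduced here for demonstration; the capacity results in the work itself are derived analytically.

```python
# Hypothetical sketch: a classifier assembled from expert readouts,
# each gated by a half-space condition on the input.
import numpy as np

rng = np.random.default_rng(0)

N, P, K = 100, 300, 4                           # input dimension, patterns, contexts (assumed)
X = rng.standard_normal((P, N)) / np.sqrt(N)    # random input patterns
y = rng.choice([-1, 1], size=P)                 # random binary labels

V = rng.standard_normal((K, N))  # fixed gating vectors: context k is active when V[k] @ x > 0
W = np.zeros((K, N))             # learnable expert readout weights

def predict(x):
    """Sum the expert readouts over contexts whose half-space gate is active."""
    gates = (V @ x > 0).astype(float)   # 0/1 half-space gating per context
    s = gates @ (W @ x)                 # gated sum of expert outputs
    return 1.0 if s >= 0 else -1.0

# Perceptron-style updates restricted to the currently active experts.
for epoch in range(200):
    errors = 0
    for mu in range(P):
        x, label = X[mu], y[mu]
        if predict(x) != label:
            active = V @ x > 0
            W[active] += label * x      # update only the gated-on experts
            errors += 1
    if errors == 0:
        break

# Fraction of patterns stored: a crude numerical proxy for storage capacity.
acc = np.mean([predict(X[mu]) == y[mu] for mu in range(P)])
print(f"fraction of patterns stored: {acc:.2f}")
```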
Presenters
- Francesca Mignacco, CUNY Graduate Center
Authors
- Francesca Mignacco, CUNY Graduate Center
- Chi-Ning Chou, Flatiron Institute
- SueYeon Chung, New York University