Diffusion RNN: Extracting Low-Dimensional Structures in Data as Quasi-stable Manifolds
ORAL
Abstract
A hallmark of intelligence is discerning simple patterns in complex, noisy environments. How can a disordered, autonomous neural system identify low-dimensional structures in noisy, high-dimensional data? We introduce the Diffusion RNN (Recurrent Neural Network), a neural-dynamical model that extracts hierarchies of low-dimensional manifolds from observed data. The model learns from data a reverse-time Langevin process that iteratively filters noise from a Gaussian cloud initialized at time T, revealing the learned data distribution at time 0. Unlike generative AI diffusion models, Diffusion RNNs are fully recurrent and can operate beyond time 0. When run in negative time, they produce nonlinear manifolds, such as 2D shells, 1D skeletons, and 0D fixed points, that summarize the higher-dimensional data. These structures could serve as coarse maps for navigating complex memory and concept landscapes. Our results also show that a single Diffusion RNN can adaptively manipulate these low-dimensional manifolds to interpolate between and extrapolate beyond multiple target distributions. Lastly, we link the hierarchy of learned manifolds to the separation of time scales in RNN dynamics and evaluate the robustness of the Diffusion RNN as a nonlinear dimensionality reduction method.
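The abstract does not spell out the Diffusion RNN architecture or training procedure; purely as an illustration of the reverse-time Langevin picture it describes, the minimal sketch below anneals a Gaussian cloud toward a synthetic ring-shaped distribution and then keeps integrating past time 0, where the cloud collapses onto the underlying 1D circle. The ring target, the approximate score function approx_score, and all schedule and step-size values are assumptions chosen for illustration, not details from the paper.

```python
# A minimal sketch, not the authors' implementation: annealed Langevin denoising
# toward a synthetic ring-shaped distribution, continued past time 0. The ring
# target, the approximate score, and all schedule/step-size values are assumptions.
import numpy as np

R = 1.0                                   # radius of the assumed 1-D data manifold (a circle)
rng = np.random.default_rng(0)
x = rng.normal(scale=1.5, size=(500, 2))  # Gaussian cloud at time T

def approx_score(x, sigma):
    """Approximate score of the ring density smoothed at noise level sigma."""
    r = np.linalg.norm(x, axis=1, keepdims=True)
    return (R - r) * x / (r * sigma**2 + 1e-8)

# Reverse-time phase (T -> 0): anneal the noise level downward while injecting noise.
eps = 0.05
for sigma in np.linspace(1.0, 0.05, 40):
    alpha = eps * sigma**2                # step size shrinks with the noise level
    for _ in range(10):
        z = rng.normal(size=x.shape)
        x = x + 0.5 * alpha * approx_score(x, sigma) + np.sqrt(alpha) * z

# "Negative time" phase (t < 0): keep the drift but stop injecting noise, so the
# cloud collapses onto the low-dimensional structure (here, the 1-D circle).
for _ in range(400):
    x = x + 0.01 * approx_score(x, sigma=0.2)

print("r.m.s. distance from the ring:",
      np.sqrt(np.mean((np.linalg.norm(x, axis=1) - R) ** 2)))
```

In this toy picture, the noise-free negative-time phase plays the role of running the recurrent dynamics beyond time 0, which is what exposes the low-dimensional structure summarizing the data.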
–
Publication: A paper of the same title, "Diffusion RNN: Extracting Low-Dimensional Structures in Data as Quasi-stable Manifolds," is in preparation.
Presenters
-
Toni J Liu
Cornell University
Authors
-
Toni J Liu
Cornell University
-
Jason Z Kim
Cornell University