Random input expansion improves classifier accuracy
ORAL
Abstract
We have discovered a surprising phenomenon in the problem of inference with noisy high-dimensional data: adding random input dimensions (completely uncorrelated with the data) can lower the generalization error of max-margin classifiers. We prove that, when applied appropriately, this input expansion yields solutions equivalent to those obtained by adding slack variables in support vector machine learning. We also consider two-layer networks and demonstrate that this phenomenon allows wide random neural networks with sparse activity to handle output noise more effectively than networks exactly matched to the structure of the teacher network. These findings have implications for the design of neural networks and for understanding the role of neurogenesis and short-lived synapses in biological neural networks.
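The setup can be illustrated with a small simulation. This is a minimal sketch of the idea described in the abstract, not a reproduction of the paper's experiments: the teacher model, dimensions, label-noise level, the use of a large-C linear SVM as a stand-in for a max-margin classifier, and the choice to zero out the extra coordinates at test time are all assumptions made here for illustration.

```python
# Sketch: train a (near) max-margin linear classifier on noisily labeled data,
# with and without random input dimensions appended to the training inputs.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_train, n_test, d, d_extra, label_noise = 200, 2000, 50, 500, 0.1

# Teacher: a random linear rule; labels are flipped with probability `label_noise`.
w_teacher = rng.standard_normal(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_teacher)
    flip = rng.random(n) < label_noise
    y[flip] *= -1
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def test_error(X_train, y_train, X_test, y_test):
    # Large C approximates a hard-margin (max-margin) linear SVM.
    clf = LinearSVC(C=1e6, max_iter=200000)
    clf.fit(X_train, y_train)
    return 1.0 - clf.score(X_test, y_test)

# Baseline: original inputs only.
err_plain = test_error(X_tr, y_tr, X_te, y_te)

# Expanded: append random dimensions (uncorrelated with the task) to the
# training inputs. At test time the extra coordinates are set to zero here,
# since they carry no information about the label (an illustrative choice).
X_tr_exp = np.hstack([X_tr, rng.standard_normal((n_train, d_extra))])
X_te_exp = np.hstack([X_te, np.zeros((n_test, d_extra))])
err_expanded = test_error(X_tr_exp, y_tr, X_te_exp, y_te)

print(f"test error without expansion: {err_plain:.3f}")
print(f"test error with random expansion: {err_expanded:.3f}")
```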
Presenters
- Julia Steinberg, Harvard University
Authors
- Julia Steinberg, Harvard University
- Madhu Advani, Harvard University
- Haim Sompolinsky, Harvard University