Margin learning in spiking neural networks
Invited
Abstract
Neurons in the brain receive inputs from thousands of afferents. The high dimensionality of neural input spaces is tightly linked to the ability of neurons to solve difficult classification tasks with simple decision surfaces. However, this advantage of high-dimensional neural representations comes at a price: learning is difficult in high-dimensional spaces. In particular, a neuron's ability to generalize from a limited number of training examples can be impaired by overfitting when the number of free parameters, i.e., synaptic efficacies, is large.
Constituting a major breakthrough in machine learning, learning in high-dimensional spaces has been greatly improved by margin techniques that maximize the minimal distance between the available training examples and the learned decision boundary. Maximal-margin techniques have been applied successfully in artificial neural networks with graded responses, where standard metrics apply. By contrast, the use of margin learning has been much less straightforward in networks of spiking neurons, whose constituent neuron models (like nerve cells in the brain) respond to inputs by eliciting trains of discrete all-or-nothing events (action potentials).
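For reference, the classical hard-margin objective (e.g., of support vector machines) can be stated in generic textbook notation, which is not the notation of this work: given training examples $x_i$ with labels $y_i \in \{-1, +1\}$,

\begin{aligned}
  \min_{w,\,b} \quad & \tfrac{1}{2}\lVert w \rVert^{2} \\
  \text{subject to} \quad & y_i \left( w^{\top} x_i + b \right) \ge 1, \qquad i = 1, \dots, N,
\end{aligned}

whose solution maximizes the geometric margin $2 / \lVert w \rVert$ around the separating hyperplane $w^{\top} x + b = 0$.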
Recently, we introduced the spike-threshold surface to define a continuous distance between the responses of spiking neurons. Here we extend this notion to capture the margins between responses of spiking neurons. We show that a family of gradient-based learning rules that operate on these margins strongly improves the learning capabilities of spiking neurons. We discuss their biologically plausible implementation through empirically observed synaptic learning rules. This work transfers powerful margin-based learning concepts from machine learning to neurobiological learning.
In collaboration with Timo Wunderlich
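To make the flavor of such margin-based gradient rules concrete, the following is a minimal Python sketch of a hinge-loss update on a peak-voltage margin for a tempotron-like neuron. All constants (TAU_M, TAU_S, THETA, MARGIN, LR) are illustrative assumptions, and the simple peak-voltage margin used here merely stands in for the spike-threshold-surface margin of the abstract, whose definition is not reproduced:

import numpy as np

# Illustrative sketch only: one hinge-loss ("margin") gradient step for a
# tempotron-like neuron that classifies an input spike pattern by whether
# its peak membrane voltage crosses the firing threshold.

TAU_M, TAU_S = 15.0, 3.75   # membrane / synaptic time constants in ms (assumed)
THETA = 1.0                 # firing threshold in arbitrary units (assumed)
MARGIN = 0.1                # desired voltage margin around threshold (assumed)
LR = 0.01                   # learning rate (assumed)

# Normalize the double-exponential PSP kernel so that it peaks at 1.
_T_PEAK = TAU_M * TAU_S / (TAU_M - TAU_S) * np.log(TAU_M / TAU_S)
_V0 = 1.0 / (np.exp(-_T_PEAK / TAU_M) - np.exp(-_T_PEAK / TAU_S))

def psp(t, t_spk):
    """Postsynaptic potential at times t elicited by a spike at t_spk."""
    s = np.maximum(t - t_spk, 0.0)  # kernel vanishes before the spike
    return _V0 * (np.exp(-s / TAU_M) - np.exp(-s / TAU_S))

def margin_step(w, spikes, label, t_grid):
    """One gradient step on the hinge loss of the peak-voltage margin.

    w      : (n_syn,) array of synaptic efficacies, updated in place
    spikes : list of n_syn arrays of afferent spike times
    label  : +1 if the neuron should fire, -1 if it should stay silent
    t_grid : time points at which the voltage trace is evaluated
    """
    # Summed PSP that each afferent contributes at every time point.
    contrib = np.stack([
        psp(t_grid[None, :], np.asarray(ts)[:, None]).sum(axis=0)
        if len(ts) else np.zeros_like(t_grid)
        for ts in spikes
    ])                                   # shape (n_syn, n_t)
    v = w @ contrib                      # membrane voltage trace
    k = int(np.argmax(v))                # time of the voltage maximum
    # Hinge loss pushes the peak voltage beyond THETA + MARGIN for positive
    # patterns and below THETA - MARGIN for negative ones.
    if label * (v[k] - THETA) < MARGIN:
        w += LR * label * contrib[:, k]  # (sub)gradient of the hinge loss
    return w

Note that a pattern whose peak voltage already lies beyond the margin leaves the weights untouched; this dead zone around the threshold is what distinguishes a margin rule from a plain perceptron-style update.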
Presenters
Robert Gütig
Mathematical modeling of neuronal learning, Charité Medical School Berlin
Authors
Robert Gütig
Mathematical modeling of neuronal learning, Charité Medical School Berlin