A Neuron as a Signal Processing Device
A neuron is a basic physiological and computational unit of the brain. While
much is known about the physiological properties of a neuron, its computational
role is poorly understood. Here we propose to view a neuron as a signal
processing device that represents the incoming streaming data matrix as a
sparse vector of synaptic weights scaled by an outgoing sparse activity vector.
Formally, a neuron minimizes a cost function comprising a cumulative squared
representation error and regularization terms. We derive an online algorithm that minimizes this cost function by alternating between minimization with respect to activity and minimization with respect to synaptic weights. The steps of this
algorithm reproduce well-known physiological properties of a neuron, such as
weighted summation and leaky integration of synaptic inputs, as well as an
Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework
makes several predictions, some of which can be verified with existing data, while others require further experiments. Such a framework should allow modeling the function of neuronal circuits without necessarily measuring all the microscopic biophysical parameters, and should facilitate the design of neuromorphic electronics.
Comment: 2013 Asilomar Conference on Signals, Systems and Computers, see
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=681029
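To make the alternating minimization concrete, here is a minimal Python sketch of a single rectifying neuron with an Oja-like, parameter-free weight update whose step size is the inverse cumulative squared activity. The zero-threshold rectification, random initialization, and omission of leaky integration are simplifying assumptions for illustration, not the paper's exact derivation.

import numpy as np

def online_neuron(X, threshold=0.0):
    """One neuron processing a stream of inputs X of shape (T, n).

    Alternates between (i) computing a sparse activity from the current
    weights and (ii) an Oja-like weight update whose learning rate is the
    inverse cumulative squared activity, so no rate parameter is tuned.
    """
    T, n = X.shape
    rng = np.random.default_rng(0)
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)                # random unit-norm initial weights (assumption)
    y_sq_cum = 1e-8                       # cumulative squared activity sets the step size
    activities = np.zeros(T)
    for t in range(T):
        x = X[t]
        y = max(w @ x - threshold, 0.0)   # weighted summation, then rectification -> sparse activity
        y_sq_cum += y ** 2
        w += (y / y_sq_cum) * (x - y * w) # Oja-like, parameter-free synaptic update
        activities[t] = y
    return w, activities

With the threshold at zero, the update behaves like Oja's rule, and w tends toward the leading principal direction of the input stream.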
Efficient Computation in Adaptive Artificial Spiking Neural Networks
Artificial Neural Networks (ANNs) are bio-inspired models of neural
computation that have proven highly effective. Still, ANNs lack a natural
notion of time, and neural units in ANNs exchange analog values in a
frame-based manner, a computationally and energetically inefficient form of
communication. This contrasts sharply with biological neurons that communicate
sparingly and efficiently using binary spikes. While artificial Spiking Neural
Networks (SNNs) can be constructed by replacing the units of an ANN with
spiking neurons, their current performance is far from that of deep ANNs on hard benchmarks, and these SNNs use much higher firing rates than their
biological counterparts, limiting their efficiency. Here we show how spiking
neurons that employ an efficient form of neural coding can be used to construct
SNNs that match high-performance ANNs and exceed the state of the art for SNNs on
important benchmarks, while requiring much lower average firing rates. For
this, we use spike-time coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be
captured in adapting spiking neuron models, for which we derive the effective
transfer function. Neural units in ANNs trained with this transfer function can
be substituted directly with adaptive spiking neurons, and the resulting
Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up
to an order of magnitude fewer spikes compared to previous SNNs. Adaptive
spike-time coding additionally allows for the dynamic control of neural coding
precision: we show how a simple model of arousal in AdSNNs further halves the
average required firing rate, and this notion naturally extends to other forms of attention. AdSNNs thus hold promise as a novel and efficient model for neural computation that naturally fits temporally continuous and asynchronous applications.
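To make the train-then-substitute step concrete, here is a hedged Python sketch. The function adaptive_transfer below is a generic thresholded, saturating stand-in, not the transfer function derived in the paper, and the workflow comments describe the substitution only at a high level.

import numpy as np

def adaptive_transfer(x):
    """Placeholder for the steady-state transfer function of an adapting
    spiking neuron: thresholded and saturating. Illustrative only; the
    paper derives the actual curve from the adaptation dynamics."""
    return np.tanh(np.maximum(x, 0.0))

# Train-then-substitute workflow (sketch):
# 1. Train a standard ANN whose hidden units apply adaptive_transfer(...)
#    in place of ReLU or sigmoid, using any ordinary optimizer.
# 2. For inference, replace each analog unit with an adaptive spiking
#    neuron whose time-averaged spike count for a constant input x matches
#    adaptive_transfer(x); the trained weights are copied over unchanged.
# 3. Lowering coding precision globally (the arousal model above) trades a
#    small accuracy loss for a further reduction in average firing rate.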
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an oversimplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g., speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e., recognition) of human kinematics.
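A minimal Python sketch of the recognizing-RNN idea, assuming a generative model of the form x_{t+1} = tanh(W x_t) plus noise, with linear observations y_t = C x_t. The fixed gain k stands in for the Bayesian, Kalman-like gain that the full derivation computes from the noise covariances; all names and the gain value are illustrative.

import numpy as np

def rrnn_decode(Y, W, C, k=0.1):
    """Filter a sequence of observations Y through an inverted RNN.

    Each step exchanges a top-down prediction message and a bottom-up
    prediction-error message, as in predictive coding.
    """
    x_hat = np.zeros(W.shape[0])          # no special initial condition needed
    states = []
    for y in Y:
        x_pred = np.tanh(W @ x_hat)       # prediction from the generative RNN
        err = y - C @ x_pred              # prediction error on the observation
        x_hat = x_pred + k * (C.T @ err)  # correct the hidden state with the error
        states.append(x_hat)
    return np.array(states)

Because every update starts from the model's own prediction and is corrected by the observed error, decoding is comparatively robust to the initial state and to noise, in line with the properties claimed above.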
Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks
Biological neurons communicate through a sparing exchange of pulses, or spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks while using so few spikes to communicate. Building on recent insights in
neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on
adaptive spiking neurons. These spiking neurons efficiently encode information
in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while
homeostatically optimizing their firing rate. In the proposed paradigm of
spiking neuron computation, neural adaptation is tightly coupled to synaptic
plasticity, to ensure that downstream neurons can correctly decode upstream
spiking neurons. We show that this type of network is inherently able to carry
out asynchronous and event-driven neural computation, while performing identically to corresponding artificial neural networks (ANNs). In particular, we show that these adaptive spiking neurons can serve as drop-in replacements for the ReLU units of standard feedforward ANNs. We demonstrate that this can also be successfully applied to a ReLU-based deep convolutional neural network for classifying the MNIST dataset. The ASNN thus outperforms current Spiking Neural Network (SNN) implementations, while responding up to an order of magnitude faster and using an order of magnitude fewer spikes.
Additionally, in a streaming setting where frames are continuously classified,
we show that the ASNN requires substantially fewer network updates than the corresponding ANN.
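A hedged Python sketch of the pulsed sigma-delta coding idea: the encoder emits a spike only when the analog signal departs from the decoder's leaky reconstruction by more than a threshold. The threshold is fixed here for simplicity, whereas in the ASNN it adapts homeostatically to regulate the firing rate; parameter values and names are illustrative.

import numpy as np

def sigma_delta_encode(signal, theta=0.1, tau=20.0):
    """Encode an analog signal as sparse spike events.

    signal: 1-D array of analog values over time.
    theta:  spike height and coding threshold (fixed here; adaptive in ASNNs).
    tau:    decay time constant of the reconstruction kernel, in samples.
    Returns a boolean spike train and the decoder-side reconstruction.
    """
    decay = np.exp(-1.0 / tau)
    s_hat = 0.0                        # decoder's running reconstruction
    spikes = np.zeros(len(signal), dtype=bool)
    recon = np.zeros(len(signal))
    for t, s in enumerate(signal):
        s_hat *= decay                 # reconstruction leaks between events
        if s - s_hat > theta:          # spike only when the coding error is large
            spikes[t] = True
            s_hat += theta             # each spike adds theta downstream
        recon[t] = s_hat
    return spikes, recon

For a slowly varying input, spikes occur only around changes in the signal, which is what makes frame-by-frame streaming classification cheap: parts of the network whose activations do not change emit no events.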