A Neuron as a Signal Processing Device
A neuron is a basic physiological and computational unit of the brain. While
much is known about the physiological properties of a neuron, its computational
role is poorly understood. Here we propose to view a neuron as a signal
processing device that represents the incoming streaming data matrix as a
sparse vector of synaptic weights scaled by an outgoing sparse activity vector.
Formally, a neuron minimizes a cost function comprising a cumulative squared
representation error and regularization terms. We derive an online algorithm
that minimizes this cost function by alternating between minimization with
respect to the activity and minimization with respect to the synaptic weights. The steps of this
algorithm reproduce well-known physiological properties of a neuron, such as
weighted summation and leaky integration of synaptic inputs, as well as an
Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework
makes several predictions, some of which can be verified with existing data
while others require further experiments. Such a framework should allow the
function of neuronal circuits to be modeled without measuring every microscopic
biophysical parameter, and should facilitate the design of neuromorphic
electronics.
Comment: 2013 Asilomar Conference on Signals, Systems and Computers, see
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=681029
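The abstract describes the algorithm only at a high level, so the following Python sketch illustrates one plausible reading of the alternating online minimization: an activity step (weighted summation of inputs followed by sparsification) and an Oja-like weight step whose learning rate is the inverse cumulative squared activity, reflecting the "parameter-free" description. The soft-threshold regularizer, the discrete-time activity update, and all names and sizes are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def soft_threshold(a, lam):
    """Sparsifying activity update (an assumed L1-type regularizer)."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def online_neuron(X, lam=0.05):
    """Alternating online minimization of a cumulative squared
    representation error plus regularization, one column x_t at a time.

    X   : (n_features, n_samples) streaming data matrix
    lam : sparsity level of the activity (illustrative)
    """
    n, T = X.shape
    w = np.random.randn(n)
    w /= np.linalg.norm(w)      # synaptic weight vector
    y_sq_cum = 1.0              # cumulative squared activity (small prior
                                # mass to stabilize the first steps)
    activities = np.zeros(T)
    for t in range(T):
        x = X[:, t]
        # Activity step: weighted summation of synaptic inputs, then
        # sparsification (the paper's leaky integration would be the
        # continuous-time analogue of this step).
        y = soft_threshold(w @ x, lam)
        # Weight step: Oja-like update with a parameter-free learning
        # rate 1 / (cumulative squared activity) -- no free step size.
        y_sq_cum += y * y
        w += (y * x - y * y * w) / y_sq_cum
        activities[t] = y
    return w, activities

# Usage: recover the dominant direction of synthetic streaming data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2000)) + np.outer(np.ones(10), rng.normal(size=2000))
w, y = online_neuron(X)
print(np.round(w, 2))
```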
Kickback cuts Backprop's red-tape: Biologically plausible credit assignment in neural networks
Error backpropagation is an extremely effective algorithm for assigning
credit in artificial neural networks. However, weight updates under Backprop
depend on lengthy recursive computations and require separate output and error
messages -- features that are not shared by biological neurons and that are
perhaps unnecessary. In this paper, we revisit Backprop and the credit assignment
problem. We first decompose Backprop into a collection of interacting learning
algorithms; provide regret bounds on the performance of these sub-algorithms;
and factorize Backprop's error signals. Using these results, we derive a new
credit assignment algorithm for nonparametric regression, Kickback, that is
significantly simpler than Backprop. Finally, we provide a sufficient condition
for Kickback to follow error gradients, and show that Kickback matches
Backprop's performance on real-world regression benchmarks.
Comment: 7 pages. To appear, AAAI-1
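Since the abstract only sketches Kickback, the following Python snippet illustrates the broadcast-error idea it builds on: every layer combines a single global scalar error with purely local quantities, instead of receiving a recursively backpropagated delta. The two-hidden-layer architecture, the net-outgoing-weight feedback term for the deep layer, and all names are illustrative assumptions, not the paper's exact Kickback rule, and the paper's sufficient condition for following error gradients is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

# Two-hidden-layer, scalar-output regression net; all sizes illustrative.
params = {
    "W1": rng.normal(scale=0.3, size=(16, 8)),   # input -> hidden 1
    "W2": rng.normal(scale=0.3, size=(8, 16)),   # hidden 1 -> hidden 2
    "w3": rng.normal(scale=0.3, size=8),         # hidden 2 -> output
}

def step(p, x, target, lr=1e-3):
    """One update driven by a single broadcast scalar error: no layer
    waits for a lengthy recursive backward computation."""
    W1, W2, w3 = p["W1"], p["W2"], p["w3"]
    h1 = relu(W1 @ x)
    h2 = relu(W2 @ h1)
    err = w3 @ h2 - target            # the one scalar error message
    # The top two updates coincide with the true gradient.
    d2 = err * w3 * (h2 > 0)
    # Deepest layer: global error times each unit's net outgoing weight,
    # with no recursion -- one plausible reading of the broadcast idea.
    d1 = err * W2.sum(axis=0) * (h1 > 0)
    w3 -= lr * err * h2               # in-place updates of shared arrays
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
    return err

# Usage: stream random inputs toward a random linear target and print
# the last error (no convergence claim is made for this toy rule).
w_true = rng.normal(size=8)
for _ in range(2000):
    x = rng.normal(size=8)
    e = step(params, x, w_true @ x)
print(float(e))
```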