Asynchronous spiking neurons, the natural key to exploit temporal sparsity
Inference of deep neural networks for stream-signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, because the processing is fully asynchronous, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
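The core idea of stateful, change-driven inference can be illustrated with a minimal sketch: a layer remembers its last input and output and recomputes only the contributions of inputs that actually changed. All names (`delta_layer`, `threshold`) are illustrative, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of event-driven ("stateful") inference: the layer keeps
# its previous activation and only propagates input changes above a threshold,
# so the number of multiply-accumulates scales with temporal sparsity.

def delta_layer(W, prev_in, new_in, prev_out, threshold=1e-3):
    """Propagate only significant input changes through a linear layer."""
    delta = new_in - prev_in
    changed = np.abs(delta) > threshold          # temporal-sparsity mask
    # Only the columns of W for changed inputs contribute to the update.
    out = prev_out + W[:, changed] @ delta[changed]
    ops = W.shape[0] * int(changed.sum())        # multiply-accumulates spent
    return out, ops

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
x0 = rng.standard_normal(128)
x1 = x0.copy()
x1[:4] += 1.0                                    # only 4 of 128 inputs change
y0 = W @ x0
y1, ops = delta_layer(W, x0, x1, y0)
assert np.allclose(y1, W @ x1)                   # exact, yet far fewer ops
print(ops, 64 * 128)
```

The update is exact for linear layers; with nonlinearities and quantized deltas the saving becomes approximate, which is part of the trade-off such engines manage.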
STDP-driven networks and the \emph{C. elegans} neuronal network
We study the dynamics of the structure of a formal neural network wherein the
strengths of the synapses are governed by spike-timing-dependent plasticity
(STDP). For properly chosen input signals, there exists a steady state with a
residual network. We compare the motif profile of such a network with that of a
real neural network of \emph{C. elegans} and identify robust qualitative
similarities. In particular, our extensive numerical simulations show that the
resulting STDP-driven network is robust under variations of the model
parameters.
Comment: 16 pages, 14 figures
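The pairwise STDP rule driving such weight dynamics can be sketched as an exponential window: causal spike pairs (pre before post) potentiate, anti-causal pairs depress. The constants below are illustrative defaults, not the paper's parameter choices.

```python
import numpy as np

# Minimal additive STDP window sketch. dt = t_post - t_pre in milliseconds;
# a_plus, a_minus, and tau are illustrative constants.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair."""
    if dt > 0:                                   # pre before post: potentiate
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)           # post before pre: depress

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)      # causal pairs strengthen
```

Under repeated stimulation this asymmetry prunes synapses that fire out of causal order, which is how a structured residual network can emerge from the initial connectivity.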
Hybrid language processing in the Spoken Language Translator
The paper presents an overview of the Spoken Language Translator (SLT)
system's hybrid language-processing architecture, focussing on the way in which
rule-based and statistical methods are combined to achieve robust and efficient
performance within a linguistically motivated framework. In general, we argue
that rules are desirable in order to encode domain-independent linguistic
constraints and achieve high-quality grammatical output, while corpus-derived
statistics are needed if systems are to be efficient and robust; further, that
hybrid architectures are superior from the point of view of portability to
architectures which only make use of one type of information. We address the
topics of ``multi-engine'' strategies for robust translation; robust bottom-up
parsing using pruning and grammar specialization; rational development of
linguistic rule-sets using balanced domain corpora; and efficient supervised
training by interactive disambiguation. All work described is fully implemented
in the current version of the SLT-2 system.
Comment: 4 pages, uses icassp97.sty; to appear in ICASSP-97; see http://www.cam.sri.com for related material
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-timed artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
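The "random mask over the connections" idea can be sketched as a DropConnect-style forward pass in which each synapse transmits with some probability; averaging many such passes recovers the deterministic output in expectation. Function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

# Sketch of stochastic synapses: every forward pass samples a fresh Bernoulli
# mask over the weight matrix, so repeated passes are Monte Carlo samples.

def stochastic_forward(W, x, p=0.5, rng=None):
    """Forward pass where each synapse transmits with probability p."""
    rng = rng or np.random.default_rng()
    mask = rng.random(W.shape) < p               # per-synapse Bernoulli gate
    return (mask * W) @ x / p                    # rescale so E[out] = W @ x

rng = np.random.default_rng(1)
W = rng.standard_normal((10, 20))
x = rng.standard_normal(20)
# The sample mean converges to the deterministic product, while individual
# samples remain noisy -- the noise that doubles as a regularizer.
samples = np.stack([stochastic_forward(W, x, rng=rng) for _ in range(2000)])
print(np.abs(samples.mean(axis=0) - W @ x).max())
```

The same mask sampling applied during training is what gives the DropConnect-like regularization mentioned in the abstract.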
Synthesis of neural networks for spatio-temporal spike pattern recognition and processing
The advent of large scale neural computational platforms has highlighted the
lack of algorithms for synthesis of neural structures to perform predefined
cognitive tasks. The Neural Engineering Framework offers one such synthesis,
but it is most effective for a spike rate representation of neural information,
and it requires a large number of neurons to implement simple functions. We
describe a neural network synthesis method that generates synaptic connectivity
for neurons which process time-encoded neural signals, and which makes very
sparse use of neurons. The method allows the user to specify, arbitrarily,
neuronal characteristics such as axonal and dendritic delays, and synaptic
transfer functions, and then solves for the optimal input-output relationship
using computed dendritic weights. The method may be used for batch or online
learning and has an extremely fast optimization process. We demonstrate its use
in generating a network to recognize speech which is sparsely encoded as spike
times.
Comment: In submission to Frontiers in Neuromorphic Engineering
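The "solves for the optimal input-output relationship using computed dendritic weights" step can be illustrated with a plain batch least-squares fit: filtered input spike traces are regressed onto a desired output trace. Everything here (variable names, the random traces) is a stand-in sketch, not the paper's actual synaptic kernels or delay handling.

```python
import numpy as np

# Illustrative sketch: fit dendritic weights w so that a weighted sum of
# postsynaptic-potential traces reproduces a target signal.

rng = np.random.default_rng(0)
T, N = 200, 15                       # time steps, input neurons
psp_traces = rng.standard_normal((T, N))   # stand-in for filtered spike trains
true_w = rng.standard_normal(N)
target = psp_traces @ true_w         # desired output trace

# One-shot "batch learning": solve min_w ||psp_traces @ w - target||^2.
w, *_ = np.linalg.lstsq(psp_traces, target, rcond=None)
print(np.allclose(w, true_w))        # noiseless full-rank case: exact recovery
```

A recursive variant of the same least-squares solve (updating w as traces stream in) would correspond to the online-learning mode the abstract mentions.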
On Statistical Aspects of Qjets
The process by which jet algorithms construct jets and subjets is inherently
ambiguous and equally well motivated algorithms often return very different
answers. The Qjets procedure was introduced by the authors to account for this
ambiguity by considering many reconstructions of a jet at once, allowing one to
assign a weight to each interpretation of the jet. Employing these weighted
interpretations leads to an improvement in the statistical stability of many
measurements. Here we explore in detail the statistical properties of these
sets of weighted measurements and demonstrate how they can be used to improve
the reach of jet-based studies.
Comment: 29 pages, 6 figures. References added, minor modification of the text. This version to appear in JHEP
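Assigning a weight to each interpretation of a jet means observables become weighted averages over reconstructions rather than single numbers. A minimal sketch, with made-up masses and weights purely for illustration:

```python
import numpy as np

# Sketch of weighted jet interpretations: instead of committing to one
# clustering, combine several reconstructions of the same jet by weight.

masses = np.array([82.1, 85.4, 79.8, 83.0])   # jet mass per interpretation
weights = np.array([0.4, 0.3, 0.2, 0.1])      # interpretation weights

w = weights / weights.sum()                   # normalize
mean = np.sum(w * masses)                     # weighted jet mass
var = np.sum(w * (masses - mean) ** 2)        # spread across interpretations
print(round(mean, 2), round(var, 2))
```

The per-jet spread is one handle on the statistical stability the abstract refers to: jets whose interpretations disagree wildly can be down-weighted or studied separately.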