Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power by increasing the number
of cores within a digital processor, neuromorphic engineers and scientists can
complement this approach by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones,
and from purely digital systems to mixed analog/digital systems that implement
more biologically plausible models of neurons and synapses, together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems.
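To make "biologically plausible models of neurons" concrete, here is a minimal Python sketch of the leaky integrate-and-fire (LIF) neuron, the kind of dynamics many of the surveyed mixed analog/digital platforms emulate in silicon. All parameter values and the Euler integration scheme are illustrative assumptions, not taken from any specific chip in the survey.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of dV/dt = (v_rest - V + I) / tau with a
    threshold-crossing spike and reset; returns the spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v + i_in)
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                # membrane potential resets after a spike
    return spike_times

# A constant suprathreshold input produces regular spiking.
print(len(simulate_lif(np.full(1000, 1.5))), "spikes in 1 s")
```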
Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP
Spiking neural networks (SNNs) are good candidates for building
ultra-energy-efficient hardware. However, the performance of these models
currently lags behind that of traditional methods. Introducing multi-layered
SNNs is a promising way to reduce this gap. In this paper we propose a new
threshold adaptation system that uses a target timestamp at which neurons
should fire. We show that our method leads to state-of-the-art classification rates on
the MNIST dataset (98.60%) and the Faces/Motorbikes dataset (99.46%) with an
unsupervised SNN followed by a linear SVM. We also investigate the sparsity
level of the network by testing different inhibition policies and STDP rules.
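As a rough illustration of the target-timestamp idea, the following Python sketch nudges a neuron's firing threshold according to whether it fired before or after its target time; the update rule and the learning rate are our assumptions, not the authors' exact equations.

```python
# Hedged sketch: adapt a neuron's threshold so its firing time drifts
# toward a target timestamp (our illustration, not the paper's exact rule).
def adapt_threshold(threshold, fire_time, target_time, lr=0.05):
    """Firing earlier than the target means the threshold is too low, so we
    raise it; firing later means it is too high, so we lower it. `lr` is an
    assumed hyperparameter controlling the adaptation speed."""
    if fire_time < target_time:
        return threshold + lr   # fired too early -> make firing harder
    if fire_time > target_time:
        return threshold - lr   # fired too late -> make firing easier
    return threshold            # on target -> leave unchanged

# Fires too early (0.8 < 1.0), so the threshold is raised to 1.05.
print(adapt_threshold(1.0, fire_time=0.8, target_time=1.0))
```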
The Microsoft 2017 Conversational Speech Recognition System
We describe the 2017 version of Microsoft's conversational speech recognition
system, in which we update our 2016 system with recent developments in
neural-network-based acoustic and language modeling to further advance the
state of the art on the Switchboard speech recognition task. The system adds a
CNN-BLSTM acoustic model to the set of model architectures we combined
previously, and includes character-based and dialog session aware LSTM language
models in rescoring. For system combination we adopt a two-stage approach,
whereby subsets of acoustic models are first combined at the senone/frame
level, followed by a word-level voting via confusion networks. We also added a
confusion network rescoring step after system combination. The resulting system
yields a 5.1% word error rate on the 2000 Switchboard evaluation set.
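The two-stage combination can be pictured with a small Python sketch: per-frame senone posteriors from several acoustic models are averaged first, and word-level hypotheses are then merged by a simple majority vote standing in for confusion network voting. The function names, the posterior averaging, and the position-wise vote are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np
from collections import Counter

def combine_frame_level(posteriors):
    """Stage 1: average per-frame senone posteriors across acoustic models;
    `posteriors` is a list of (num_frames, num_senones) arrays."""
    return np.mean(np.stack(posteriors), axis=0)

def vote_word_level(hypotheses):
    """Stage 2: take the majority word at each position across the combined
    subsystems' hypotheses (pre-aligned here for simplicity)."""
    return [Counter(words).most_common(1)[0][0] for words in zip(*hypotheses)]

print(vote_word_level([["so", "how", "are", "you"],
                       ["so", "how", "were", "you"],
                       ["oh", "how", "are", "you"]]))  # ['so', 'how', 'are', 'you']
```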
Fast ConvNets Using Group-wise Brain Damage
We revisit the idea of brain damage, i.e. the pruning of the coefficients of
a neural network, and suggest how brain damage can be modified and used to
speedup convolutional layers. The approach uses the fact that many efficient
implementations reduce generalized convolutions to matrix multiplications. The
suggested brain damage process prunes the convolutional kernel tensor in a
group-wise fashion by adding group-sparsity regularization to the standard
training process. After such group-wise pruning, convolutions can be reduced to
multiplications of thinned dense matrices, which leads to speedup. In the
comparison on AlexNet, the method achieves very competitive performance.
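A hedged sketch of the group-sparsity penalty, in PyTorch: each group collects the kernel-tensor entries that form one column of the unrolled (im2col) weight matrix, so groups driven to zero let the subsequent matrix multiplication drop whole columns. The exact grouping and the plain group-lasso form below are our reading of the abstract, not the paper's verbatim formulation.

```python
import torch

def group_lasso_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
    """conv_weight has shape (out_channels, in_channels, kH, kW).
    Each (input channel, kernel position) pair forms one group spanning all
    output channels, i.e. one column of the unrolled weight matrix; the
    penalty is the sum of the groups' L2 norms (group lasso)."""
    out_c = conv_weight.shape[0]
    columns = conv_weight.reshape(out_c, -1)  # (out_c, in_c * kH * kW)
    return columns.norm(dim=0).sum()          # sum of per-column L2 norms

# During training the penalty is added to the task loss, e.g.:
# loss = cross_entropy(outputs, targets) + lam * group_lasso_penalty(conv.weight)
```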