91 research outputs found
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used training algorithm, Contrastive Divergence (CD), are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate. Here, we present an event-driven variant of CD to train an RBM constructed with Integrate-and-Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
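As a rough illustration of the two-phase update that event-driven CD carries out with STDP, the sketch below uses a toy Bernoulli/rate approximation rather than leaky I&F dynamics or spike-timing windows: coincident activity during the data-driven phase potentiates a weight, and coincident activity during the free-running phase depresses it. The network sizes, learning rate `eta`, and phase lengths `T_data` and `T_model` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation) of the sign-switched,
# phase-based weight update behind event-driven CD, using Bernoulli "spikes"
# as a stand-in for I&F neurons sampling from a Boltzmann distribution.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 784, 100                      # e.g. MNIST pixels -> hidden units
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def sample_hidden(v_spikes, W):
    """Bernoulli 'spike' sample of hidden units given visible activity."""
    p = 1.0 / (1.0 + np.exp(-v_spikes @ W))
    return (rng.random(p.shape) < p).astype(float)

def sample_visible(h_spikes, W):
    p = 1.0 / (1.0 + np.exp(-h_spikes @ W.T))
    return (rng.random(p.shape) < p).astype(float)

def event_driven_cd_step(v_data, W, eta=1e-3, T_data=10, T_model=10):
    """One online update: coincidences accumulate with opposite signs in the
    clamped (data) phase and the free-running (model) phase."""
    dW = np.zeros_like(W)
    v = v_data.copy()
    for _ in range(T_data):                  # data phase: visible units clamped
        h = sample_hidden(v, W)
        dW += np.outer(v, h)                 # potentiation on coincident activity
    for _ in range(T_model):                 # model phase: network runs freely
        v = sample_visible(h, W)
        h = sample_hidden(v, W)
        dW -= np.outer(v, h)                 # depression on coincident activity
    W += eta * dW / (T_data + T_model)
    return W

# usage: one update on a random binary "digit"
v_example = (rng.random(n_vis) < 0.1).astype(float)
W = event_driven_cd_step(v_example, W)
```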
Developing Next-generation Brain Sensing Technologies - A Review.
Advances in sensing technology raise the possibility of creating neural interfaces that can more effectively restore or repair neural function and reveal fundamental properties of neural information processing. To realize the potential of these bioelectronic devices, it is necessary to understand the capabilities of emerging technologies and identify the best strategies to translate these technologies into products and therapies that will improve the lives of patients with neurological and other disorders. Here, we discuss emerging technologies for sensing brain activity, anticipated challenges for translation, and perspectives on how best to transition these technologies from academic research labs to useful products for neuroscience researchers and human patients.
Hybrid CMOS/memristor circuits
This is a brief review of recent work on prospective hybrid CMOS/memristor circuits. Such hybrids combine the flexibility, reliability and high functionality of the CMOS subsystem with the very high density of nanoscale thin-film resistance-switching devices operating on different physical principles. Simulation and initial experimental results demonstrate that the performance of CMOS/memristor circuits for several important applications is well beyond the scaling limits of the conventional VLSI paradigm.
Mixed-precision deep learning based on computational memory
Deep neural networks (DNNs) have revolutionized the field of artificial
intelligence and have achieved unprecedented success in cognitive tasks such as
image and speech recognition. Training of large DNNs, however, is
computationally intensive and this has motivated the search for novel computing
architectures targeting this application. A computational memory unit with
nanoscale resistive memory devices organized in crossbar arrays could store the
synaptic weights in their conductance states and perform the expensive weighted
summations in place in a non-von Neumann manner. However, updating the
conductance states in a reliable manner during the weight update process is a
fundamental challenge that limits the training accuracy of such an
implementation. Here, we propose a mixed-precision architecture that combines a
computational memory unit performing the weighted summations and imprecise
conductance updates with a digital processing unit that accumulates the weight
updates in high precision. A combined hardware/software training experiment of
a multilayer perceptron based on the proposed architecture using a phase-change
memory (PCM) array achieves 97.73% test accuracy on the task of classifying
handwritten digits (based on the MNIST dataset), within 0.6% of the software
baseline. The architecture is further evaluated using accurate behavioral
models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 173x improvement in the energy efficiency of the architecture when used to train a multilayer perceptron, compared with a dedicated, fully digital 32-bit implementation.
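The core idea of the mixed-precision scheme can be sketched as follows: gradient updates are accumulated in a high-precision digital variable and transferred to the imprecise memory devices only in whole multiples of the device programming granularity. The snippet below is an illustrative toy, not the paper's code; the granularity `eps`, the `write_noise` level, the learning rate, and the regression task are all assumptions.

```python
# Minimal sketch of mixed-precision training with computational memory:
# weighted summations use the (imprecise) device conductances, while a digital
# accumulator chi collects weight updates in high precision and flushes them
# to the devices as quantized, noisy programming pulses.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4
eps = 0.01           # smallest conductance change one programming pulse gives
write_noise = 0.3    # relative stochasticity of each pulse (device imprecision)

G = 0.1 * rng.standard_normal((n_in, n_out))   # device conductances (weights)
chi = np.zeros_like(G)                          # high-precision accumulator

def analog_matvec(x, G):
    """Weighted summation done 'in place' on the crossbar (modeled as exact)."""
    return x @ G

def program_devices(G, n_pulses):
    """Apply quantized, noisy conductance updates of n_pulses * eps per weight."""
    noise = 1.0 + write_noise * rng.standard_normal(G.shape)
    return G + n_pulses * eps * noise

for _ in range(1000):
    x = rng.standard_normal(n_in)
    target = np.tanh(x[:n_out])                 # toy regression target
    y = analog_matvec(x, G)
    grad = np.outer(x, y - target)              # gradient of 0.5*||y - target||^2
    chi -= 0.05 * grad                          # accumulate the update digitally
    n_pulses = np.trunc(chi / eps)              # whole multiples of granularity
    G = program_devices(G, n_pulses)            # imprecise device update
    chi -= n_pulses * eps                       # keep only the residual
```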
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper, we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Toward a formal theory for computing machines made out of whatever physics offers: extended version
The approaching limits of digital computing technologies have spurred research into neuromorphic and other unconventional approaches to computing. Here
we argue that if we want to systematically engineer computing systems that are
based on unconventional physical effects, we need guidance from a formal theory
that is different from the symbolic-algorithmic theory of today's computer
science textbooks. We propose a general strategy for developing such a theory,
and within that general view, a specific approach that we call "fluent
computing". In contrast to Turing, who modeled computing processes from a
top-down perspective as symbolic reasoning, we adopt the scientific paradigm of
physics and model physical computing systems bottom-up by formalizing what can
ultimately be measured in any physical substrate. This leads to an
understanding of computing as the structuring of processes, while classical
models of computing systems describe the processing of structures.