Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody computational principles operating in the nervous
system in microelectronic devices. In this domain it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, hence making attractor dynamics
suitable to implement robust associative memory. The initial network state is
dictated by the stimulus, and relaxation to the attractor state implements the
retrieval of the corresponding memorized prototypical pattern. In a previous
work we demonstrated that a neuromorphic recurrent network of spiking neurons
and suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: submitted to Scientific Reports
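The retrieval dynamics sketched in this abstract can be illustrated with a classical Hopfield-style recurrent network: prototypes are stored in the synaptic matrix by a Hebbian rule, a corrupted stimulus sets the initial state, and relaxation pulls the state into the nearest attractor. This is a minimal software caricature, not the spiking VLSI chip of the paper; the network size, number of patterns, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two prototype patterns (illustrative sizes, not the chip's)
# in a recurrent network via a Hebbian outer-product rule.
N = 64
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)          # no self-coupling

def relax(state, steps=20):
    """Synchronously update until the state settles in an attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# A corrupted stimulus dictates the initial network state...
noisy = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
noisy[flip] *= -1

# ...and relaxation to the attractor retrieves the stored prototype.
recovered = relax(noisy)
print(np.array_equal(recovered, patterns[0]))
```

The basin of attraction is what makes the retrieval robust: any initial state sufficiently close to a prototype relaxes to the same fixed point.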
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems which implement
more biologically realistic models of neurons and synapses together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE, review of recently proposed
neuromorphic computing platforms and systems
Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons
Networks of stochastic spiking neurons are interesting models in the area of
Theoretical Neuroscience, presenting both continuous and discontinuous phase
transitions. Here we study fully connected networks analytically, numerically
and by computational simulations. The neurons have dynamic gains that enable
the network to converge to a stationary slightly supercritical state
(self-organized supercriticality or SOSC) in the presence of the continuous
transition. We show that SOSC, which presents power laws for neuronal
avalanches plus some large events, is robust as a function of the main
parameter of the neuronal gain dynamics. We discuss the possible applications
of the idea of SOSC to biological phenomena like epilepsy and dragon king
avalanches. We also find that neuronal gains can produce collective
oscillations that coexist with neuronal avalanches, with frequencies
compatible with characteristic brain rhythms.
Comment: 16 pages, 16 figures divided into 7 figures in the article
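The self-organization mechanism described here can be sketched with a mean-field simulation: each neuron fires stochastically with a probability set by its gain times the network input, a spike depresses the neuron's gain, and the gain slowly recovers otherwise. The specific update rules, coupling strength, and timescale below are illustrative assumptions rather than the exact model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully connected network with uniform coupling (illustrative values).
N = 500
W = 1.5 / N          # synaptic weight per connection
h = 1e-3             # tiny external drive so activity cannot die out
tau = 500.0          # gain recovery timescale (the "main parameter")
u = 0.5              # fractional gain drop after a spike
gain = np.ones(N)
spikes = rng.random(N) < 0.1
activity = []

for t in range(2000):
    drive = W * spikes.sum() + h           # mean-field input to every neuron
    p = np.clip(gain * drive, 0.0, 1.0)    # linear-saturating firing probability
    spikes = rng.random(N) < p
    # A spike depresses the neuron's gain by a fraction u; otherwise the
    # gain slowly recovers, steering the network toward a stationary,
    # slightly supercritical state (SOSC).
    gain = gain * np.where(spikes, 1.0 - u, 1.0 + 1.0 / tau)
    activity.append(int(spikes.sum()))
```

Recording `activity` over long runs is how one would then look for power-law avalanche statistics and the collective oscillations the abstract mentions.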
Robust short-term memory without synaptic learning
Short-term memory in the brain cannot in general be explained the way
long-term memory can -- as a gradual modification of synaptic weights -- since
it takes place too quickly. Theories based on some form of cellular
bistability, however, do not seem able to account for the fact that noisy
neurons can collectively store information in a robust manner. We show how a
sufficiently clustered network of simple model neurons can be instantly induced
into metastable states capable of retaining information for a short time (a few
seconds). The mechanism is robust to different network topologies and kinds of
neural model. This could constitute a viable means available to the brain for
sensory and/or short-term memory with no need of synaptic learning. Relevant
phenomena described by neurobiology and psychology, such as local
synchronization of synaptic inputs and power-law statistics of forgetting
avalanches, emerge naturally from this mechanism, and we suggest possible
experiments to test its viability in more biological settings.
Comment: 20 pages, 9 figures. Amended to include section on spiking neurons,
with general rewrite
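The cluster-selective persistence this abstract relies on can be caricatured with deterministic threshold neurons on a clustered graph: dense within-cluster wiring lets a stimulated cluster sustain its own activity, while sparse between-cluster wiring keeps the trace from spreading. The cluster sizes, threshold, and wiring density are illustrative assumptions, and this noiseless sketch shows only the stability side; in the paper's setting, noise makes such states metastable, with finite lifetimes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Clustered topology (illustrative parameters, not the paper's).
n_clusters, size, theta = 5, 20, 8
N = n_clusters * size
A = np.zeros((N, N), dtype=int)
for c in range(n_clusters):              # dense within-cluster wiring
    idx = slice(c * size, (c + 1) * size)
    A[idx, idx] = 1
mask = rng.random((N, N)) < 0.02         # sparse between-cluster wiring
A[mask & (A == 0)] = 1
np.fill_diagonal(A, 0)

# A stimulus instantly ignites one cluster, with no synaptic learning.
state = np.zeros(N, dtype=int)
state[:size] = 1

for _ in range(50):
    # A neuron fires iff at least theta of its inputs are active.
    state = (A @ state >= theta).astype(int)

print(state[:size].sum(), state[size:].sum())
```

The stimulated cluster keeps itself above threshold through its dense internal wiring, so the information persists without any change to the weights, which is the essence of the proposed short-term memory mechanism.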