Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. While Information and Communication Technologies
continue to address the need for increased computational power by increasing
the number of cores within a digital processor, neuromorphic engineers and
scientists can complement this approach by building processor architectures
in which memory is distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a
suite of adaptation and learning mechanisms analogous to those found in
biological nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.

Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems.
Chimeras in Leaky Integrate-and-Fire Neural Networks: Effects of Reflecting Connectivities
The effects of non-local and reflecting connectivity are investigated in
coupled Leaky Integrate-and-Fire (LIF) elements, which emulate the exchange
of electrical signals between neurons.
of electrical signals between neurons. Earlier investigations have demonstrated
that non-local and hierarchical network connectivity often induces complex
synchronization patterns and chimera states in systems of coupled oscillators.
In the LIF system we show that if the elements are non-locally linked with
positive diffusive coupling in a ring architecture the system splits into a
number of alternating domains. Half of these domains contain elements whose
potential stays near the threshold; these are interrupted by active domains
in which the elements perform regular LIF oscillations. The active domains
move around the ring with a constant velocity that depends on the system
parameters. The idea of introducing reflecting non-local coupling in LIF
networks originates from signal exchange between neurons residing in the two
hemispheres in the brain. We show evidence that this connectivity induces novel
complex spatial and temporal structures: for relatively extensive ranges of
parameter values the system splits into two coexisting domains, one in which
all elements stay near threshold and one in which incoherent states develop
with a multileveled mean-phase-velocity distribution.

Comment: 12 pages, 12 figures
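As a concrete illustration of the non-locally coupled setup, the sketch below
simulates LIF elements on a ring with positive diffusive coupling to the 2R
nearest neighbours on each side. All sizes and parameter values are
illustrative assumptions rather than the paper's; the reflecting variant would
instead couple each element i to the neighbourhood of its mirror image N-1-i,
a one-line change to the index arithmetic.

```python
import numpy as np

# Minimal sketch of N non-locally coupled LIF elements on a ring
# (illustrative parameters, not the paper's values).
N, R = 256, 64                        # network size, coupling range per side
mu, u_th, u_reset = 1.5, 0.98, 0.0    # drive, firing threshold, reset value
sigma = 0.7                           # positive diffusive coupling strength
dt, steps = 1e-3, 50_000

rng = np.random.default_rng(0)
u = rng.uniform(0.0, u_th, N)         # random initial potentials
offsets = np.concatenate([np.arange(-R, 0), np.arange(1, R + 1)])

for _ in range(steps):
    # mean potential over the 2R non-local neighbours of each element
    neigh = np.mean([np.roll(u, k) for k in offsets], axis=0)
    # leaky integration with diffusive coupling, then fire-and-reset
    u += dt * (mu - u + sigma * (neigh - u))
    u[u >= u_th] = u_reset

# After a transient, a plot of u along the ring shows alternating
# near-threshold (subthreshold) and actively oscillating domains.
```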
Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking
This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then undertake a verification of these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. This paper also presents a discussion of a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
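As a toy illustration of the verification idea (and only that: the paper's
actual models are timed automata checked with Uppaal, which are not reproduced
here), the sketch below abstracts a pre/post-synaptic pair as a finite
automaton over discretised activation levels and checks a reachability
property by exhaustively exploring the state space, the essence of
explicit-state model checking. All names and numbers are hypothetical.

```python
# Hypothetical abstraction of a pre/post-synaptic pair as a finite
# automaton; a reachability property is verified by exhaustive
# exploration of the composed state space.

LEVELS = range(4)   # discretised activation levels 0..3
THRESHOLD = 3       # level at which a unit fires

def step(pre, post):
    """One synchronised transition of the composed automaton."""
    fired = pre >= THRESHOLD
    new_pre = 0 if fired else pre + 1                  # ramp up, reset on fire
    new_post = min(post + 3, max(LEVELS)) if fired else max(post - 1, 0)
    return new_pre, new_post

# Reachability check: can the post-synaptic unit ever reach threshold?
seen, frontier = set(), {(0, 0)}
while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    frontier.add(step(*state))

print("post-synaptic unit can fire:",
      any(post >= THRESHOLD for _, post in seen))
```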
Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons
Striatal projection neurons form a sparsely-connected inhibitory network, and
this arrangement may be essential for the appropriate temporal organization of
behavior. Here we show that a simplified, sparse inhibitory network of
Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal
population activity, as observed in brain slices [Carrillo-Reid et al., J.
Neurophysiology 99 (2008) 1435-1450]. In particular we develop a new metric to
determine the conditions under which sparse inhibitory networks form
anti-correlated cell assemblies with time-varying activity of individual cells.
We find that under these conditions the network displays an input-specific
sequence of cell-assembly switching that effectively discriminates between similar
inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9
(2013) e1002954] that GABAergic connections between striatal projection neurons
allow stimulus-selective, temporally-extended sequential activation of cell
assemblies. Furthermore, our results help to show how altered intrastriatal
GABAergic signaling may produce aberrant network-level information processing
in disorders such as Parkinson's and Huntington's diseases.

Comment: 22 pages, 9 figures
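The sketch below gives a minimal numerical illustration of the kind of network
described here: sparse random inhibition among LIF units with heterogeneous
tonic drive, followed by a generic rate-correlation measure of anti-correlated
activity. All parameter values are illustrative assumptions, not the paper's,
and the correlation fraction shown is a simple stand-in rather than the new
metric the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse inhibitory LIF network (illustrative parameters): each neuron
# inhibits a small random subset of the others.
N, p = 100, 0.05                      # network size, connection probability
W = -0.4 * (rng.random((N, N)) < p)   # sparse inhibitory weights
np.fill_diagonal(W, 0.0)

dt, tau, v_th = 1.0, 20.0, 1.0        # time step (ms), membrane tau, threshold
drive = 1.05 + 0.1 * rng.random(N)    # heterogeneous tonic excitation
v = rng.random(N)
spikes = np.zeros((5000, N))

for t in range(5000):
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = 0.0                             # reset after a spike
    v += (dt / tau) * (drive - v) + W @ fired  # leak, drive, inhibition

# Anti-correlated cell assemblies show up as a substantial fraction of
# negative pairwise correlations between binned firing rates.
rates = spikes.reshape(50, 100, N).mean(axis=1)   # 50 bins of 100 steps
C = np.corrcoef(rates.T)
print("fraction of negatively correlated pairs:",
      (C[np.triu_indices(N, 1)] < 0).mean())
```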
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
Neocortical neurons have thousands of excitatory synapses. It is a mystery
how neurons integrate the input from so many synapses and what kind of
large-scale network behavior this enables. It has been previously proposed that
non-linear properties of dendrites enable neurons to recognize multiple
patterns. In this paper we extend this idea by showing that a neuron with
several thousand synapses arranged along active dendrites can learn to
accurately and robustly recognize hundreds of unique patterns of cellular
activity, even in the presence of large amounts of noise and pattern variation.
We then propose a neuron model where some of the patterns recognized by a
neuron lead to action potentials and define the classic receptive field of the
neuron, whereas the majority of the patterns recognized by a neuron act as
predictions by slightly depolarizing the neuron without immediately generating
an action potential. We then present a network model based on neurons with
these properties and show that the network learns a robust model of time-based
sequences. Given the similarity of excitatory neurons throughout the neocortex
and the importance of sequence memory in inference and behavior, we propose
that this form of sequence memory is a universal property of neocortical
tissue. We further propose that cellular layers in the neocortex implement
variations of the same sequence memory algorithm to achieve different aspects
of inference and behavior. The neuron and network models we introduce are
robust over a wide range of parameters as long as the network uses a sparse
distributed code of cellular activations. The sequence capacity of the network
scales linearly with the number of synapses on each neuron. Thus neurons need
thousands of synapses to learn the many temporal patterns in sensory stimuli
and motor sequences.

Comment: Submitted for publication
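The dendritic pattern-recognition claim can be illustrated with a small sketch
in the spirit of the model (sizes and thresholds here are illustrative
assumptions, not the paper's): each dendritic segment stores a small sample of
synapses from one sparse activity pattern and matches when enough of its
synapses see active cells, which keeps recognition robust to noise because
chance overlap between sparse patterns is tiny.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: a sparse distributed code over a large population.
n_cells = 2048           # population size
n_active = 40            # cells active per sparse pattern (~2%)
n_syn = 25               # synapses stored per dendritic segment
theta = 15               # matching synapses needed for a dendritic spike
n_patterns = 500         # patterns learned, one per segment

patterns = [rng.choice(n_cells, n_active, replace=False)
            for _ in range(n_patterns)]
# Each segment subsamples n_syn of the n_active cells in its pattern.
segments = [rng.choice(p, n_syn, replace=False) for p in patterns]

def recognized(active_cells, segment):
    """A segment matches if >= theta of its synapses see active cells."""
    return len(set(segment) & set(active_cells)) >= theta

# Present pattern 123 with 25% of its active cells replaced by noise.
probe = patterns[123].copy()
swap = rng.choice(n_active, n_active // 4, replace=False)
probe[swap] = rng.choice(n_cells, n_active // 4, replace=False)

hits = [i for i, seg in enumerate(segments) if recognized(probe, seg)]
print("segments matching the noisy pattern:", hits)
```

With these numbers the expected overlap on the correct segment is about
25 x 0.75, roughly 19 synapses, comfortably above the threshold of 15, while
chance overlap with an unrelated sparse pattern is about 25 x 40/2048, roughly
0.5, so the probe is typically recognized only by segment 123. This echoes the
abstract's claims of noise robustness and of capacity scaling with synapse
count.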