A bio-inspired bistable recurrent cell allows for long-lasting memory
Recurrent neural networks (RNNs) provide state-of-the-art performance in a
wide variety of tasks that require memory. This performance can often be
achieved thanks to gated recurrent cells such as gated recurrent units (GRUs)
and long short-term memory (LSTM). Standard gated cells share a layer-wide internal
state to store information at the network level, and long-term memory is shaped
by network-wide recurrent connection weights. Biological neurons, on the other
hand, are capable of holding information at the cellular level for an arbitrarily
long amount of time through a process called bistability. Through bistability,
cells can stabilize to different stable states depending on their own past
state and inputs, which permits durable storage of past information in the
neuron state.
neuron state. In this work, we take inspiration from biological neuron
bistability to embed RNNs with long-lasting memory at the cellular level. This
leads to the introduction of a new bistable, biologically-inspired recurrent
cell that is shown to strongly improve RNN performance on time series that
require very long memory, despite using only cellular connections (all
recurrent connections are from neurons to themselves, i.e. a neuron's state is
not influenced by the states of other neurons). Furthermore, equipping this cell
with recurrent neuromodulation permits linking it to standard GRU cells,
taking a step towards the biological plausibility of GRUs.
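For illustration, here is a minimal NumPy sketch of a recurrent cell in the spirit of the one described above: recurrence is purely cellular (each neuron sees only its own past state), and a feedback gain above 1 puts a neuron in a bistable regime. The variable names and exact parameterization are assumptions made for this sketch, not necessarily the paper's formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BistableCell:
    """Recurrent cell with purely cellular recurrence: the recurrent
    weights wa, wc are per-neuron scalars (a diagonal matrix), so no
    neuron is influenced by the state of another neuron."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_in)
        self.Ua = rng.uniform(-s, s, (n_hidden, n_in))  # feedforward weights
        self.Uc = rng.uniform(-s, s, (n_hidden, n_in))
        self.U = rng.uniform(-s, s, (n_hidden, n_in))
        self.wa = rng.uniform(-s, s, n_hidden)          # cellular recurrence
        self.wc = rng.uniform(-s, s, n_hidden)

    def step(self, x, h):
        # a lies in (0, 2); a > 1 makes the cellular feedback bistable,
        # letting a neuron latch information indefinitely
        a = 1.0 + np.tanh(self.Ua @ x + self.wa * h)
        c = sigmoid(self.Uc @ x + self.wc * h)  # per-neuron update gate
        return c * h + (1.0 - c) * np.tanh(self.U @ x + a * h)
```

The bistability comes from the map h ↦ tanh(a·h): for a > 1 it has two nonzero stable fixed points, so a neuron can settle onto one of them and hold it until its inputs push it out.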
Spike-based computation using classical recurrent neural networks
Spiking neural networks are a type of artificial neural networks in which
communication between neurons is only made of events, also called spikes. This
property allows neural networks to make asynchronous and sparse computations
and therefore to drastically decrease energy consumption when run on
specialized hardware. However, training such networks is known to be difficult,
mainly due to the non-differentiability of the spike activation, which prevents
the use of classical backpropagation. This is because state-of-the-art spiking
neural networks are usually derived from biologically-inspired neuron models,
to which machine learning methods are then applied for training. Nowadays, research
on spiking neural networks focuses on the design of training algorithms
whose goal is to obtain networks that compete with their non-spiking counterparts on
specific tasks. In this paper, we attempt the symmetrical approach: we modify
the dynamics of a well-known, easily trainable type of recurrent neural network
to make it event-based. This new RNN cell, called the Spiking Recurrent Cell,
therefore communicates using events, i.e. spikes, while being completely
differentiable. Vanilla backpropagation can thus be used to train any network
made of such RNN cells. We show that this new network can achieve performance
comparable to that of other types of spiking networks on the MNIST benchmark and its
variants, Fashion-MNIST and Neuromorphic-MNIST. Moreover, we show that
this new cell makes the training of deep spiking networks achievable.
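The abstract does not give the cell's equations, so the following is only an illustrative sketch of the general idea, under assumptions of my own: a leaky integrator whose output passes through a steep but smooth threshold, producing near-binary "events" while remaining differentiable end to end, so vanilla backpropagation applies.

```python
import numpy as np

def smooth_spike(v, beta=20.0):
    """Steep sigmoid: output is close to 0 or 1 (no spike / spike)
    but has a well-defined gradient everywhere."""
    return 1.0 / (1.0 + np.exp(-beta * v))

class EventCell:
    def __init__(self, n_in, n_hidden, leak=0.9, theta=1.0, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_in + n_hidden)
        self.Wx = rng.uniform(-s, s, (n_hidden, n_in))
        self.Wh = rng.uniform(-s, s, (n_hidden, n_hidden))
        self.leak, self.theta = leak, theta

    def step(self, x, v, spikes):
        # leaky membrane potential driven by inputs and previous events
        v = self.leak * v + self.Wx @ x + self.Wh @ spikes
        out = smooth_spike(v - self.theta)
        v = v - self.theta * out  # soft reset after an event
        return v, out
```

Because cells communicate only through the near-binary `out` vector, computation is event-like and sparse, yet every operation in the graph is smooth.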
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Mechanisms of Induction and Maintenance of Spike-Timing Dependent Plasticity in Biophysical Synapse Models
We review biophysical models of synaptic plasticity, with a focus on spike-timing dependent plasticity (STDP). The common property of the discussed models is that synaptic changes depend on the dynamics of the intracellular calcium concentration, which itself depends on pre- and postsynaptic activity. We start by discussing simple models in which plasticity changes are based directly on calcium amplitude and dynamics. We then consider models in which dynamic intracellular signaling cascades form the link between the calcium dynamics and the plasticity changes. Both the induction of STDP (through the ability of pre- and postsynaptic spikes to evoke changes in the state of the synapse) and the maintenance of the evoked changes (through bistability) are discussed.
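To make the common structure of these models concrete, here is a minimal Python sketch of a calcium-threshold rule with a bistable efficacy variable: pre- and postsynaptic spikes drive calcium transients, calcium above a depression or potentiation threshold pushes the efficacy down or up (induction), and a double-well drift keeps the result at one of two stable states (maintenance). All constants are illustrative assumptions, not values from any particular reviewed model.

```python
dt, tau_ca, tau_w = 1e-3, 0.02, 0.2       # seconds
c_pre, c_post = 0.6, 1.2                  # calcium jump per pre-/post-spike
theta_d, theta_p = 1.0, 1.5               # depression / potentiation thresholds
gamma_d, gamma_p = 0.5, 5.0

def final_efficacy(pre_times, post_times, T, w=0.5):
    pre = {round(t / dt) for t in pre_times}    # spike times -> step indices
    post = {round(t / dt) for t in post_times}
    ca = 0.0
    for i in range(int(T / dt)):
        ca -= dt * ca / tau_ca                  # calcium decay
        if i in pre:
            ca += c_pre
        if i in post:
            ca += c_post
        drift = -w * (1 - w) * (0.5 - w)        # double well: 0 and 1 stable
        drift += gamma_p * (1 - w) * (ca > theta_p)   # induction: potentiation
        drift -= gamma_d * w * (ca > theta_d)         # induction: depression
        w = min(1.0, max(0.0, w + dt * drift / tau_w))
    return w

# causal pre->post pairing: the calcium transients overlap and cross
# theta_p, so the efficacy settles near the potentiated state (w close to 1)
print(final_efficacy(pre_times=[0.010], post_times=[0.015], T=5.0))
```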
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Behavioural robustness and the distributed mechanisms hypothesis
A current challenge in neuroscience and systems biology is to better understand properties that allow organisms to exhibit and sustain appropriate behaviours despite the effects of perturbations (behavioural robustness). There are still significant theoretical difficulties in this endeavour, mainly due to the context-dependent nature of the problem. Biological robustness, in general, is considered in the literature as a property that emerges from the internal structure of organisms, rather than being a dynamical phenomenon involving agent-internal controls, the organism body, and the environment. Our hypothesis is that the capacity for behavioural robustness is rooted in dynamical processes that are distributed between agent ‘brain’, body, and environment, rather than warranted exclusively by organisms’ internal mechanisms. Distribution is operationally defined here based on perturbation analyses.
Evolutionary Robotics (ER) techniques are used here to construct four computational models to study behavioural robustness from a systemic perspective. Dynamical systems theory provides the conceptual framework for these investigations. The first model evolves situated agents in a goal-seeking scenario in the presence of neural noise perturbations. Results suggest that evolution implicitly selects neural systems that are noise-resistant during coupling behaviour by concentrating search in regions of the fitness landscape that retain functionality for goal approaching. The second model evolves situated, dynamically limited agents exhibiting minimal cognitive behaviour (a categorization task). Results indicate a small but significant tendency toward better performance under most types of perturbations by agents showing further cognitive-behavioural dependency on their environments. The third model evolves experience-dependent robust behaviour in embodied, one-legged walking agents. Evidence suggests that robustness is rooted in both internal and external dynamics, but robust motion always emerges from the system-in-coupling. The fourth model implements a historically dependent, mobile-object tracking task under sensorimotor perturbations. Results indicate two different modes of distribution: one in which inner controls necessarily depend on a set of specific environmental factors to exhibit behaviour, making these controls more vulnerable to perturbations on that set, and another in which these factors are equally sufficient for behaviour. Vulnerability to perturbations depends on the particular distribution.
In contrast to most existing approaches to the study of robustness, this thesis argues that behavioural robustness is better understood in the context of agent-environment dynamical couplings, not in terms of internal mechanisms alone. Such couplings, however, are not always the full determinants of robustness. Challenges and limitations of our approach are also identified for future studies.
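As a concrete sketch of the kind of perturbation analysis these models rely on, the snippet below applies additive neural noise to a continuous-time recurrent neural network (CTRNN), a controller commonly evolved in ER work; the equation is the standard CTRNN form, but its use here and all parameters are illustrative assumptions.

```python
import numpy as np

def ctrnn_step(y, inputs, W, tau, dt=0.01, noise_std=0.0, rng=None):
    """One Euler step of tau * dy/dt = -y + W @ sigma(y) + inputs,
    with optional additive 'neural noise' used as the perturbation."""
    rates = 1.0 / (1.0 + np.exp(-y))          # neuron firing rates
    y = y + dt * (-y + W @ rates + inputs) / tau
    if noise_std > 0.0:
        rng = rng or np.random.default_rng(0)
        y = y + rng.normal(0.0, noise_std, size=y.shape)
    return y
```

Robustness would then be scored by re-running an evolved agent's behavioural trial with noise_std > 0 and comparing task performance with and without the perturbation.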
Networks of spiking neurons and plastic synapses: implementation and control
The brain is an incredible system with a computational power that goes far beyond that
of our standard computers. It consists of a network of 10^11 neurons connected by about 10^14
synapses: a massively parallel architecture that suggests the brain performs computation
according to completely new strategies, which we are far from understanding.
To study the nervous system, a reasonable starting point is to model its basic units,
neurons and synapses, extract their key features, and try to put them together in simple,
controllable networks. The research group I have been working in focuses its attention on
network dynamics and chooses to model neurons and synapses at a functional level: in
this work I consider networks of integrate-and-fire neurons connected through synapses that
are plastic and bistable. A synapse is said to be plastic when, according to some kind of
internal dynamics, it is able to change the “strength”, or efficacy, of the connection between
the pre- and post-synaptic neurons. The adjective bistable refers to the number of stable
efficacy states that a synapse can have; we consider synapses with two stable states:
potentiated (high efficacy) or depressed (low efficacy). The considered synaptic model is
also endowed with a new stop-learning mechanism, particularly relevant when dealing with
highly correlated patterns.
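A minimal sketch may help fix ideas. Below, an internal variable X drifts toward one of two stable efficacy states, pre-synaptic spikes push it up or down depending on the post-synaptic depolarization, and a stop-learning gate freezes plasticity when a post-synaptic activity readout lies outside an intermediate range. All names, thresholds, and amplitudes are illustrative assumptions, not the actual circuit dynamics.

```python
def synapse_update(X, pre_spike, v_post, act_post,
                   a=0.1, b=0.1, drift=0.02,
                   theta_v=0.8, act_lo=0.2, act_hi=0.9, dt=1e-3):
    """One update of a bistable synapse with stop-learning.
    X lies in [0, 1]; the synapse is potentiated when X > 0.5,
    depressed otherwise."""
    # stop-learning gate: plastic only for intermediate post-synaptic
    # activity, which protects memories of highly correlated patterns
    if pre_spike and act_lo < act_post < act_hi:
        X += a if v_post > theta_v else -b
    # bistable drift toward the nearest stable state (0 or 1)
    X += dt * (drift if X > 0.5 else -drift)
    return min(1.0, max(0.0, X))
```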
The ability of this kind of system to reproduce in simulation behaviors observed in
biological networks motivates an attempt to implement the studied network in hardware.
This thesis is situated at this point: the goal of this work is to design, control and
test hybrid analog-digital, biologically inspired hardware systems that behave in agreement
with theoretical and simulation predictions. This class of devices typically goes under
the name of neuromorphic VLSI (Very-Large-Scale Integration). Neuromorphic engineering
was born from the idea of designing bio-mimetic devices; it represents a useful research
strategy that contributes to inspiring new models, stimulates theoretical research, and
proposes an effective way of implementing stand-alone, power-efficient devices.
In this work I present two chips, a prototype and a larger device, that are a step towards
endowing VLSI neuromorphic systems with autonomous learning capabilities adequate for
stimuli with non-trivial statistics. The main novel features of these
chips are the implemented type of synaptic plasticity and the configurability of the synaptic
connectivity. The reported experimental results demonstrate that the circuits behave in
agreement with theoretical predictions, and show the advantages of stop-learning synaptic
plasticity when highly correlated patterns have to be learnt. The high degree of flexibility
of these chips in the definition of the synaptic connectivity is relevant in the perspective of
using such devices as building blocks of parallel, distributed multi-chip architectures that
will allow scaling the network up to systems with interesting computational abilities,
capable of interacting with real-world stimuli.
Simulation and Design of Biological and Biologically-Motivated Computing Systems
In the life sciences, there is a great need to understand biological systems for
therapeutics, synthetic biology, and biomedical applications. However, the complex behaviors
and dynamics of biological systems are hard to understand and design. Meanwhile,
the design of traditional computer architectures faces challenges from
power consumption, device reliability, and process variations. In recent years, the
convergence of computer science, computer engineering and life science has enabled
new applications targeting the challenges from both engineering and biological fields.
On one hand, computer modeling and simulation provides quantitative analysis and
predictions of functions and behaviors of biological systems, and further facilitates
the design of synthetic biological systems. On the other hand, bio-inspired devices
and systems are designed for real-world applications by mimicking biological functions
and behaviors. This dissertation develops techniques for modeling and analyzing the
dynamic behaviors of biologically realistic genetic circuits and brain models,
and for the design of brain-inspired computing systems. The stability of genetic memory
circuits is studied to understand their function, in view of potential applications in
synthetic biology. Based on electrical-equivalent models of biochemical reactions,
simulation techniques widely used for electronic systems are applied to provide quantitative
analysis capabilities. In particular, system-theoretical techniques are used
to study the dynamic behaviors of genetic memory circuits, where the notion of a
stability boundary is employed to characterize the bistability of such circuits. To
facilitate the simulation-based studies of physiological and pathological behaviors in
brain disorders, we construct large-scale brain models with detailed cellular mechanisms.
By developing dedicated numerical techniques for brain simulation, the simulation speed is
greatly improved, such that dynamic simulation of large thalamocortical models with more
than one million multi-compartment neurons and hundreds of millions of synapses becomes
feasible on commodity computer servers. Simulation of such a large model produces
biologically meaningful results, demonstrating the emergence of sigma and delta waves in
the early and deep stages of sleep and suggesting the underlying cellular mechanisms that
may be responsible for the generation of absence seizures. Brain-inspired computing
paradigms may offer promising solutions to many challenges facing the mainstream
von Neumann computer architecture.
To this end, we develop a biologically inspired learning system amenable to VLSI
implementation. The proposed solution consists of a digitized liquid state machine
(LSM) and a spike-based learning rule, providing a fully biologically inspired learning
paradigm. The key design parameters of this liquid state machine are optimized
to maximize the learning performance while considering hardware implementation
cost. When applied to the recognition of isolated spoken words from the TI46 speech corpus,
the performance of the proposed LSM rivals that of several existing state-of-the-art
techniques, including the Hidden Markov Model based recognizer Sphinx-4.
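As an illustration of the bistability analyzed for genetic memory circuits, the classic two-gene toggle switch (two mutually repressing genes) settles into one of two stable expression states depending on its initial condition; the parameters below are illustrative, not taken from the dissertation.

```python
def toggle_switch(u, v, alpha=10.0, beta=2.0, T=50.0, dt=0.01):
    """Integrate du/dt = alpha/(1 + v^beta) - u,
               dv/dt = alpha/(1 + u^beta) - v  (mutual repression)."""
    for _ in range(int(T / dt)):
        du = alpha / (1.0 + v**beta) - u
        dv = alpha / (1.0 + u**beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

print(toggle_switch(5.0, 0.1))  # settles at high u, low v: first memory state
print(toggle_switch(0.1, 5.0))  # settles at low u, high v: second memory state
```

A stability-boundary analysis of the kind described above would locate the separatrix between the two basins of attraction.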