Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as drop-in replacements for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules, which match neural
and synaptic dynamics to the device capabilities, can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike Time Dependent
Plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation in
memristor-based hardware.
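To make the contrast concrete, here is a minimal Python sketch (our own illustration with hypothetical constants, not the chapter's actual rules) of a classical pair-based STDP kernel alongside a "three-factor" update in which a pre/post eligibility term is gated by a modulatory signal:

```python
import math

# Illustrative constants only; real rules are fit to device/neural dynamics.

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP weight change; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)   # pre before post: potentiate
    return -a_minus * math.exp(dt_ms / tau_ms)      # post before pre: depress

def three_factor_dw(pre_trace, post_trace, modulator, lr=0.01):
    """Multifactor rule: the pre x post eligibility only becomes a weight
    change when gated by a third, modulatory factor (e.g. reward)."""
    return lr * modulator * pre_trace * post_trace
```

The third factor is what lets such rules exploit device-level dynamics: updates can be held back, scaled, or applied stochastically rather than fired on every spike pair.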
Toward a dynamical systems analysis of neuromodulation
This work presents some first steps toward a more thorough understanding of the control systems employed in evolutionary robotics. In order to choose an appropriate architecture or to construct an effective novel control system we need insights into what makes control systems successful, robust, evolvable, etc. Here we present analysis intended to shed light on this type of question as it applies to a novel class of artificial neural networks that include a neuromodulatory mechanism: GasNets.

We begin by instantiating a particular GasNet subcircuit responsible for tuneable pattern generation and thought to underpin the attractive property of “temporal adaptivity”. Rather than work within the GasNet formalism, we develop an extension of the well-known FitzHugh-Nagumo equations. The continuous nature of our model allows us to conduct a thorough dynamical systems analysis and to draw parallels between this subcircuit and beating/bursting phenomena reported in the neuroscience literature.

We then proceed to explore the effects of different types of parameter modulation on the system dynamics. We conclude that while there are key differences between the gain modulation used in the GasNet and alternative schemes (including threshold modulation of more traditional synaptic input), both approaches are able to produce tuneable pattern generation. While it appears, at least in this study, that the GasNet’s gain modulation may not be crucial to pattern generation, we go on to suggest some possible advantages it could confer.
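As a rough illustration of the kind of model under analysis, the following sketch integrates the standard FitzHugh-Nagumo equations by forward Euler, with a hypothetical multiplicative gain on the fast variable loosely standing in for GasNet-style gain modulation (parameter values are textbook defaults, not the paper's):

```python
def fhn_step(v, w, I, dt=0.05, eps=0.08, a=0.7, b=0.8, gain=1.0):
    # FitzHugh-Nagumo with an illustrative multiplicative "gain" on the
    # fast variable; gain=1.0 recovers the standard equations.
    dv = gain * (v - v ** 3 / 3.0 - w) + I
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

def simulate(steps=4000, I=0.5, gain=1.0):
    v, w, trace = -1.0, 1.0, []
    for _ in range(steps):
        v, w = fhn_step(v, w, I, gain=gain)
        trace.append(v)
    return trace
```

With I = 0.5 the system sits in its oscillatory (limit-cycle) regime, producing the repetitive spiking that makes it a convenient proxy for tuneable pattern generation; sweeping `gain` or `I` shifts the system between quiescent and beating behaviour.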
Unsupervised learning of overlapping image components using divisive input modulation
This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we first investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; second, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with that of its predecessors, including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task used to assess performance.
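The link to nonnegative matrix factorisation can be illustrated with a toy inference loop, a generic KL-NMF-style multiplicative update with divisive feedback, written in the spirit of (but not identical to) the paper's DIM algorithm; all names and values are ours:

```python
def dim_infer(x, W, steps=50, eps=1e-6):
    # Divisive feedback: the input is divided elementwise by the current
    # reconstruction, and each component's activation is rescaled by how
    # well it explains the residual (multiplicative NMF-style update).
    n = len(W)                      # number of components (basis vectors)
    m = len(x)                      # number of pixels
    y = [1.0 / n] * n               # uniform initial activations
    for _ in range(steps):
        recon = [sum(W[j][i] * y[j] for j in range(n)) for i in range(m)]
        e = [x[i] / (eps + recon[i]) for i in range(m)]      # divisive error
        y = [y[j] * sum(W[j][i] * e[i] for i in range(m)) /
             max(sum(W[j]), eps) for j in range(n)]
    return y
```

When components do not overlap the loop simply recovers each component's coefficient; its interest lies in the overlapping case, where the divisive error term forces components to compete for credit for each pixel.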
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
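The EPANN idea can be made concrete with a deliberately tiny sketch (our own construction, not from the paper): a hill-climbing "evolution" over the coefficients of a generalized Hebbian plasticity rule, scored by how well lifetime learning under that rule solves a trivial one-weight task:

```python
import random

def lifetime_fitness(coeffs, trials=20):
    # Score a plasticity rule: from random initial weights, let the rule
    # adapt online; reward ending near the target mapping (w -> 1).
    A, B, C, D = coeffs
    err = 0.0
    for _ in range(trials):
        w = random.uniform(-1.0, 1.0)
        for _ in range(30):                       # one "lifetime" of experience
            x = random.choice([0.0, 1.0])
            y = w * x
            # error-gated generalized Hebbian update (illustrative form)
            w += 0.1 * (x - y) * (A * x * y + B * x + C * y + D)
            w = max(-10.0, min(10.0, w))          # keep dynamics bounded
        err += abs(w - 1.0)
    return -err / trials

def evolve(gens=200, sigma=0.2):
    # Minimal (1+1) evolutionary loop over the rule's four coefficients.
    best = [random.uniform(-1.0, 1.0) for _ in range(4)]
    best_f = lifetime_fitness(best)
    for _ in range(gens):
        cand = [c + random.gauss(0.0, sigma) for c in best]
        f = lifetime_fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f
```

Note what is evolved is not the weights but the learning rule itself; a good solution here rediscovers something like the delta rule (large B, small A, C, D), which is the flavour of "autonomously discovered adaptive algorithms" the survey describes.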
Presynaptic modulation as fast synaptic switching: state-dependent modulation of task performance
Neuromodulatory receptors in presynaptic position have the ability to
suppress synaptic transmission for seconds to minutes when fully engaged. This
effectively alters the synaptic strength of a connection. Much work on
neuromodulation has rested on the assumption that these effects are uniform at
every neuron. However, there is considerable evidence to suggest that
presynaptic regulation may in effect be synapse-specific. This would define a
second "weight modulation" matrix, which reflects presynaptic receptor efficacy
at a given site. Here we explore functional consequences of this hypothesis. By
analyzing and comparing the weight matrices of networks trained on different
aspects of a task, we identify the potential for a low complexity "modulation
matrix", which allows switching between differently trained subtasks while
retaining general performance characteristics for the task. This means that a
given network can adapt itself to different task demands by regulating its
release of neuromodulators. Specifically, we suggest that (a) a network can
provide optimized responses for related classification tasks without the need
to train entirely separate networks and (b) a network can blend a "memory mode"
which aims at reproducing memorized patterns and a "novelty mode" which aims to
facilitate classification of new patterns. We relate this work to the known
effects of neuromodulators on brain-state-dependent processing.
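A minimal sketch of the idea (our own illustration, with made-up numbers): one base weight matrix serves multiple subtasks, and an elementwise modulation matrix, standing in for presynaptic receptor efficacy at each site, selects which connections are effectively expressed:

```python
def modulated_output(x, W, M, theta=0.5):
    # Effective strength of each synapse = base weight * presynaptic
    # modulation at that site (M entries in [0, 1]; 1 = no suppression).
    out = []
    for w_row, m_row in zip(W, M):
        drive = sum(w * m * xi for w, m, xi in zip(w_row, m_row, x))
        out.append(1.0 if drive > theta else 0.0)   # simple threshold unit
    return out
```

Switching the network between subtasks then amounts to swapping M, i.e. changing neuromodulator release, rather than retraining W.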
Flexible couplings: diffusing neuromodulators and adaptive robotics
Recent years have seen the discovery of freely diffusing gaseous neurotransmitters, such as nitric oxide (NO), in biological nervous systems. A type of artificial neural network (ANN) inspired by such gaseous signaling, the GasNet, has previously been shown to be more evolvable than traditional ANNs when used as an artificial nervous system in an evolutionary robotics setting, where evolvability means consistent speed to very good solutions; here, appropriate sensorimotor behavior-generating systems. We present two new versions of the GasNet, which take further inspiration from the properties of neuronal gaseous signaling. The plexus model is inspired by the extraordinary NO-producing cortical plexus structure of neural fibers and the properties of the diffusing NO signal it generates. The receptor model is inspired by the mediating action of neurotransmitter receptors. Both models are shown to significantly further improve evolvability. We describe a series of analyses suggesting that the reasons for the increase in evolvability are related to the flexible loose coupling of distinct signaling mechanisms, one "chemical" and one "electrical".
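By way of illustration only (our own toy kinetics, not the paper's model), the diffusing-gas and receptor ideas might be sketched as a distance-dependent modulator concentration acting on a node through a per-node receptor type:

```python
import math

def gas_concentration(emitter_xy, node_xy, t, radius=2.0):
    # Gaussian fall-off with distance from the emitting node, building
    # toward a steady state over time (loose, illustrative kinetics).
    d2 = (emitter_xy[0] - node_xy[0]) ** 2 + (emitter_xy[1] - node_xy[1]) ** 2
    return math.exp(-d2 / (2.0 * radius ** 2)) * (1.0 - math.exp(-t))

def modulated_gain(base_gain, conc, receptor):
    # Receptor model in miniature: the same gas can raise or lower a
    # node's transfer-function gain, or do nothing if no receptor is present.
    if receptor == "up":
        return base_gain * (1.0 + conc)
    if receptor == "down":
        return base_gain / (1.0 + conc)
    return base_gain
```

The "loose coupling" the analyses point to lives in exactly this split: the slow chemical channel reshapes the gains that the fast electrical channel then computes with.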
Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
Spiking neural networks (SNN) are artificial computational models that have
been inspired by the brain's ability to naturally encode and process
information in the time domain. The added temporal dimension is believed to
render them more computationally efficient than the conventional artificial
neural networks, though their full computational capabilities are yet to be
explored. Recently, computational memory architectures based on non-volatile
memory crossbar arrays have shown great promise to implement parallel
computations in artificial and spiking neural networks. In this work, we
experimentally demonstrate, for the first time, the feasibility of realizing
high-performance event-driven in-situ supervised learning systems using
nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize
audio signals of alphabets encoded using spikes in the time domain and to
generate spike trains at precise time instances to represent the pixel
intensities of their corresponding images. Moreover, with a statistical model
capturing the experimental behavior of the devices, we investigate
architectural and systems-level solutions for improving the training and
inference performance of our computational memory-based system. Combining the
computational potential of supervised SNNs with the parallel compute power of
computational memory, this work paves the way for the next generation of
efficient brain-inspired systems.
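The crossbar idea can be sketched in a few lines (illustrative only, not the paper's device model): column currents implement a matrix-vector product via Ohm's and Kirchhoff's laws, while programming pulses change a phase-change device's conductance only incrementally and stochastically:

```python
import random

def crossbar_mvm(G, V):
    # One-step analog matrix-vector product: applying voltages V to the
    # rows of a conductance matrix G yields column currents
    # I_j = sum_i G[i][j] * V[i].
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(G))) for j in range(cols)]

def program_pulse(g, target, p_success=0.8, step=0.1, g_max=1.0):
    # Stochastic incremental update: each pulse nudges the conductance
    # toward the target only with some probability (device variability),
    # and the result stays within the device's physical range.
    if random.random() < p_success:
        g += step if target > g else -step
    return max(0.0, min(g_max, g))
```

The statistical-model part of the work corresponds to characterising `p_success`, `step`, and their variability from measured devices, then asking what training procedures still converge under those statistics.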
A geographically distributed bio-hybrid neural network with memristive plasticity
Throughout evolution the brain has mastered the art of processing real-world
inputs through networks of interlinked spiking neurons. Synapses have emerged
as key elements that, owing to their plasticity, are merging neuron-to-neuron
signalling with memory storage and computation. Electronics has made important
steps in emulating neurons through neuromorphic circuits and synapses with
nanoscale memristors, yet novel applications that interlink them in
heterogeneous bio-inspired and bio-hybrid architectures are just beginning to
materialise. Memristive technologies used in brain-inspired architectures for
computing, or for sensing the spiking activity of biological neurons, are only
recent examples; however, interlinking brain and electronic neurons through
plasticity-driven synaptic elements has remained so far in the realm of the
imagination. Here, we demonstrate a bio-hybrid neural network (bNN) where
memristors work as "synaptors" between rat neural circuits and VLSI neurons.
The two fundamental synaptors, from artificial-to-biological (ABsyn) and from
biological-to-artificial (BAsyn), are interconnected over the Internet. The
bNN extends across Europe, collapsing spatial boundaries existing in natural
brain networks and laying the foundations of a new geographically distributed
and evolving architecture: the Internet of Neuro-electronics (IoN).
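In rough outline (our own toy code, not the paper's interface), a plasticity-driven synaptor relays spikes across domains by weighting each presynaptic event with the memristor's conductance:

```python
def synaptor_step(pre_spike, g, post, dg=0.02, g_max=1.0):
    # A presynaptic spike (from either the biological or the VLSI side)
    # is injected into the receiving neuron weighted by the memristor
    # conductance g; conductance potentiates slightly on use, playing the
    # role of plasticity (illustrative rule, not the device's actual one).
    if pre_spike:
        post["v"] += g
        g = min(g_max, g + dg)
    if post["v"] >= post["thresh"]:          # integrate-and-fire readout
        post["v"] = 0.0
        return g, True                       # postsynaptic spike emitted
    return g, False
```

Chaining an ABsyn and a BAsyn of this form over a network link is all the architecture needs in principle; the engineering substance of the paper is making the real devices and live neural cultures play these two roles.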
A roadmap to integrate astrocytes into Systems Neuroscience.
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to studying neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
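As one concrete instance of the proposed decoding analyses (a deliberately minimal stand-in for the machine-learning pipelines the article has in mind; all data and labels below are invented), a nearest-centroid decoder can test whether astrocytic Ca2+ population vectors carry information about brain state:

```python
def nearest_centroid_decode(train, labels, x):
    # Classify a Ca2+ activity vector by the closest class centroid;
    # above-chance accuracy would indicate the population encodes the label.
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)

    def centroid(vs):
        return [sum(coord) / len(vs) for coord in zip(*vs)]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    cents = {lab: centroid(vs) for lab, vs in groups.items()}
    return min(cents, key=lambda lab: dist2(cents[lab], x))
```

In practice one would cross-validate such a decoder against shuffled labels; the point here is only the shape of the analysis, mapping population Ca2+ vectors to behavioral or state variables.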