Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli
generally requires decorrelated baseline neural activity. Such network dynamics
are known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch between, or even combine, these properties in a task-dependent
manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on the task. We argue
that this task-dependent tuning, which we call "dynamic adaptive computation",
represents a central organizing principle of cortical networks, and we discuss
first experimental evidence.
Comment: 6 pages + references, 2 figures
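The interpolation described here can be caricatured in a few lines of code. Below is a minimal sketch (ours, not the paper's model; the external drive h, the simulation length, and the lag-1 regression estimator are illustrative assumptions) of a branching network in which the effective synaptic strength m sets the integration time: small m gives fast, decorrelated dynamics, m close to 1 approaches criticality, and the reverberating regime lies in between.

```python
# Sketch of a branching network: a[t+1] ~ Poisson(m * a[t]) + Poisson(h),
# where m is the effective synaptic strength and h an external drive.
# The integration (autocorrelation) time is tau = -1 / ln(m), diverging as m -> 1.
import numpy as np

rng = np.random.default_rng(0)

def simulate_activity(m, h=10.0, steps=50000):
    a = np.empty(steps)
    a[0] = h / (1.0 - m)                      # start near the stationary mean
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    return a

for m in (0.90, 0.98, 0.999):                 # irregular -> reverberating -> near-critical
    a = simulate_activity(m)
    m_hat = np.polyfit(a[:-1], a[1:], 1)[0]   # estimate m from the lag-1 regression
    print(f"m = {m}: m_hat = {m_hat:.3f}, integration time ~ {-1/np.log(m_hat):.0f} steps")
```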
Toward bio-inspired information processing with networks of nano-scale switching elements
Unconventional computing explores multi-scale platforms connecting
molecular-scale devices into networks for the development of scalable
neuromorphic architectures, often based on new materials and components with
new functionalities. We review some work investigating the functionalities of
locally connected networks of different types of switching elements as
computational substrates. In particular, we discuss reservoir computing with
networks of nonlinear nanoscale components. In typical neuromorphic paradigms,
the network's synaptic weights are adjusted through a training/learning
process. In reservoir computing, by contrast, the nonlinear network acts as a dynamical
system mixing and spreading the input signals over a large state space, and
only a readout layer is trained. We illustrate the most important concepts with
a few examples, featuring memristor networks with time-dependent and
history-dependent resistances.
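To make the readout-only training concrete, here is a minimal software echo state network sketch (the network sizes, spectral radius, delayed-recall task, and ridge regularization are our assumptions, standing in for a physical memristor network): a fixed random recurrent network mixes the input into a large state space, and only the linear readout is fitted.

```python
# Reservoir computing sketch: untrained nonlinear dynamics + trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
N, T, delay = 200, 2000, 5

W_in = rng.uniform(-0.5, 0.5, size=N)              # fixed input weights
W = rng.normal(0, 1, size=(N, N))                  # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius below 1

u = rng.uniform(-1, 1, size=T)                     # random input signal
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):                                 # reservoir is never trained
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# train only the readout: recall the input from `delay` steps ago (ridge regression)
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ w_out
print("recall correlation:", np.corrcoef(pred, y)[0, 1])
```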
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem, as the primary computational
bottleneck for neural networks is the vector-matrix multiplication in which
inputs are multiplied by the network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information,
as opposed to the non-spiking, rate-coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for instantiating
neural algorithms.
Comment: 8 pages, 4 figures. Preprint of 2017 IJCNN
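A software caricature of the non-binary synaptic response idea follows (the double-exponential kernel, its time constants, and the LIF parameters are illustrative assumptions; the STPU realizes arbitrary kernels in hardware): each input spike injects a temporally extended current rather than an instantaneous weighted increment.

```python
# Each input spike contributes a current shaped by a synaptic kernel,
# mimicking neurotransmitter release and diffusion, before driving a neuron.
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-4, 2000                      # 0.1 ms resolution, 200 ms total

# double-exponential synaptic response (rise and decay), normalized to peak 1
tau_rise, tau_decay, w = 1e-3, 5e-3, 1.5
t = np.arange(0, 50e-3, dt)
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()

spikes = (rng.random(steps) < 0.01).astype(float)   # ~100 Hz Poisson-like input
i_syn = w * np.convolve(spikes, kernel)[:steps]     # temporally shaped current

# leaky integrate-and-fire neuron driven by the shaped current
tau_m, v_th, v, n_out = 20e-3, 1.0, 0.0, 0
for i in i_syn:
    v += (dt / tau_m) * (-v + i)
    if v >= v_th:                                   # threshold crossing: spike and reset
        n_out += 1
        v = 0.0
print("input spikes:", int(spikes.sum()), "-> output spikes:", n_out)
```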
Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely connected networks of
integrate-and-fire neurons for ultra-short term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that, in terms of information buffering, the network performs best at
a moderate, non-zero noise level. Analogous to the phenomenon of stochastic
resonance, performance decreases at both higher and lower noise levels. The
optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation of the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
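The non-monotonic dependence on noise can be reproduced with a far simpler system than the paper's spiking network. Below is a minimal stochastic-resonance sketch (a single threshold unit driven by a subthreshold sinusoid; all parameters are our illustrative assumptions), in which signal recovery peaks at a moderate noise level.

```python
# A weak, subthreshold signal passes through a hard threshold only when noise
# occasionally pushes it over; recovery is best at moderate, non-zero noise.
import numpy as np

rng = np.random.default_rng(3)
T = 20000
signal = 0.8 * np.sin(2 * np.pi * np.arange(T) / 200)   # subthreshold (peak 0.8)
threshold = 1.0

for sigma in (0.1, 0.5, 5.0):                           # low / moderate / high noise
    out = (signal + sigma * rng.normal(size=T) > threshold).astype(float)
    r = np.corrcoef(signal, out)[0, 1]                  # signal-output correlation
    print(f"noise sigma = {sigma}: signal-output correlation = {r:.3f}")
```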
Spatio-temporal Learning with Arrays of Analog Nanosynapses
Emerging nanodevices such as resistive memories are being considered for
hardware realizations of a variety of artificial neural networks (ANNs),
including highly promising online variants of the learning approaches known as
reservoir computing (RC) and the extreme learning machine (ELM). We propose an
RC/ELM inspired learning system built with nanosynapses that performs both
on-chip projection and regression operations. To address time-dynamic tasks,
the hidden neurons of our system perform spatio-temporal integration and can be
further enhanced with variable sampling or multiple activation windows. We
detail the system and show its use in conjunction with a highly analog
nanosynapse device on a standard task with intrinsic timing dynamics: the TI-46
battery of spoken digits. The system achieves nearly perfect (99%) accuracy at
sufficient hidden layer size, which compares favorably with software results.
In addition, the model is extended to a larger dataset, the MNIST database of
handwritten digits. By translating the database into the time domain and using
variable integration windows, up to 95% classification accuracy is achieved.
Beyond its intrinsically low-power programming style, the proposed
architecture learns very quickly and can easily be converted into a spiking
system with negligible loss in performance, all features that confer
significant energy efficiency.
Comment: 6 pages, 3 figures. Presented at 2017 IEEE/ACM Symposium on Nanoscale
Architectures (NANOARCH).
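As a software caricature of the projection-plus-regression scheme with spatio-temporally integrating hidden neurons (the sizes, leak rate, and synthetic sequence task are our assumptions; in the paper the projection and integration are realized by physical nanosynapses):

```python
# ELM/RC-style system: a fixed random projection feeds leaky integrator hidden
# units, the input arrives as a time series, and only the readout is trained.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden, n_out, n_samples, T = 28, 400, 10, 2000, 28
leak = 0.2                                           # sets the integration window

# stand-in data: each sample is a (T, n_in) sequence, e.g. an image row by row,
# with labels given by a fixed random linear rule so the task is learnable
X = rng.normal(size=(n_samples, T, n_in))
labels = (X.reshape(n_samples, -1) @ rng.normal(size=(T * n_in, n_out))).argmax(1)

W_proj = rng.normal(size=(n_in, n_hidden)) / np.sqrt(n_in)   # fixed, never trained

def integrate(seq):
    h = np.zeros(n_hidden)
    for x_t in seq:                                  # leaky spatio-temporal integration
        h = (1 - leak) * h + leak * np.tanh(x_t @ W_proj)
    return h

H = np.array([integrate(s) for s in X])
Y = np.eye(n_out)[labels]                            # one-hot targets
W_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ Y)  # readout only
print("training accuracy on the toy task:", np.mean((H @ W_out).argmax(1) == labels))
```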