Hierarchical Temporal Representation in Linear Reservoir Computing
Recently, studies on deep Reservoir Computing (RC) have highlighted the role of layering in deep recurrent neural networks (RNNs). In this paper, the use of linear recurrent units allows us to provide further evidence of the intrinsic hierarchical temporal representation in deep RNNs, by means of frequency analysis applied to the state signals. The potential of our approach is assessed on the class of Multiple Superimposed Oscillator tasks. Furthermore, our investigation provides useful insights that open a discussion on the main aspects characterizing the deep learning framework in the temporal domain.

Comment: This is a pre-print of the paper submitted to the 27th Italian Workshop on Neural Networks, WIRN 201
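The following is a minimal, illustrative sketch (not the authors' code) of the kind of analysis the abstract describes: a stack of linear reservoir layers driven by a Multiple Superimposed Oscillator (MSO) style input, followed by a frequency analysis of the state signals. Layer sizes, spectral radii, the input frequencies, and the spectral-centroid summary are all assumptions made for illustration.

```python
# Sketch: deep reservoir with linear recurrent units, plus frequency analysis
# of the state signals. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def linear_reservoir(n_units, spectral_radius, input_dim):
    """Random recurrent and input weights, rescaled to a target spectral radius."""
    W = rng.standard_normal((n_units, n_units))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.1, 0.1, size=(n_units, input_dim))
    return W, W_in

def run_layer(inputs, W, W_in):
    """Linear state update x(t) = W x(t-1) + W_in u(t); no nonlinearity."""
    x = np.zeros(W.shape[0])
    states = np.empty((len(inputs), W.shape[0]))
    for step, u_t in enumerate(inputs):
        x = W @ x + W_in @ np.atleast_1d(u_t)
        states[step] = x
    return states

# MSO-style input: a sum of sinusoids with incommensurate frequencies
# (the exact frequencies here are illustrative, not the task definition).
T = 2000
t = np.arange(T)
u = sum(np.sin(f * t) for f in (0.2, 0.311, 0.42, 0.51))

# Three stacked layers: layer 1 is driven by the external input, each further
# layer is driven by the state signal of the previous one.
layers, signal = [], u[:, None]
for _ in range(3):
    W, W_in = linear_reservoir(100, 0.9, signal.shape[1])
    signal = run_layer(signal, W, W_in)
    layers.append(signal)

# Frequency analysis of the state signals, layer by layer: the spectral
# centroid gives a crude summary of how fast each layer's representation is.
for depth, states in enumerate(layers, start=1):
    spectrum = np.abs(np.fft.rfft(states, axis=0)).mean(axis=1)
    freqs = np.fft.rfftfreq(T)
    centroid = (freqs * spectrum).sum() / spectrum.sum()
    print(f"layer {depth}: spectral centroid = {centroid:.4f}")
```

Under these assumptions, comparing the spectral content of the states layer by layer is one simple way to probe whether deeper layers concentrate on slower components, which is the kind of hierarchical temporal differentiation the paper investigates.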
Resting-state fMRI using passband balanced steady-state free precession
OBJECTIVE: Resting-state functional MRI (rsfMRI) has been increasingly used for understanding brain functional architecture. To date, most rsfMRI studies have exploited blood oxygenation level-dependent (BOLD) contrast using gradient-echo (GE) echo planar imaging (EPI), which can suffer from image distortion and signal dropout due to magnetic susceptibility and the inherently long echo time. In this study, the feasibility of passband balanced steady-state free precession (bSSFP) imaging for distortion-free and high-resolution rsfMRI was investigated.
METHODS: rsfMRI was performed in humans at 3 T and in rats at 7 T using bSSFP with short repetition times (TR = 4 ms and 2.5 ms, respectively), in comparison with conventional GE-EPI. Resting-state networks (RSNs) were detected using independent component analysis.
RESULTS AND SIGNIFICANCE: RSNs derived from bSSFP images were shown to be spatially and spectrally comparable to those derived from GE-EPI images, with considerable intra- and inter-subject reproducibility. High-resolution bSSFP images corresponded well to the anatomical images, with RSNs exquisitely co-localized to the gray matter. Furthermore, RSNs in areas of severe susceptibility, such as the human anterior prefrontal cortex and the rat piriform cortex, were shown to be accessible. These findings demonstrate for the first time that the passband bSSFP approach can be a promising alternative to GE-EPI for rsfMRI: it offers distortion-free, high-resolution RSNs and is potentially well suited for high-field studies.
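As a small illustration of the analysis step mentioned above (RSN detection by independent component analysis), the sketch below separates synthetic "network" time courses from a mixed time-by-voxel matrix using scikit-learn's FastICA. The data sizes, signal shapes, and noise level are invented for the example; a real rsfMRI pipeline would also involve preprocessing (motion correction, filtering) and typically group-level spatial ICA.

```python
# Toy ICA example: recover latent "network" time courses from noisy mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

n_timepoints, n_voxels, n_components = 300, 500, 3

# Three latent "network" time courses with distinct waveforms.
t = np.linspace(0, 1, n_timepoints)
sources = np.stack([
    np.sin(2 * np.pi * 9 * t),
    np.sign(np.sin(2 * np.pi * 5 * t)),
    np.cos(2 * np.pi * 3 * t),
], axis=1)                                          # (n_timepoints, n_components)

# Each network has a random spatial map; voxel signals are a noisy mixture.
spatial_maps = rng.standard_normal((n_components, n_voxels))
data = sources @ spatial_maps + 0.2 * rng.standard_normal((n_timepoints, n_voxels))

# ICA unmixes the data into independent components: estimated time courses
# from fit_transform, and the corresponding spatial patterns from mixing_.
ica = FastICA(n_components=n_components, random_state=0)
time_courses = ica.fit_transform(data)              # (n_timepoints, n_components)
recovered_maps = ica.mixing_.T                      # (n_components, n_voxels)

# Components come back in arbitrary order, sign and scale; correlate with the
# ground-truth time courses to check recovery.
corr = np.corrcoef(sources.T, time_courses.T)[:n_components, n_components:]
print(np.round(np.abs(corr), 2))
```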
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli generally requires decorrelated baseline neural activity. Such network dynamics are known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch between, or even combine, the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state through small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to their processing requirements. In addition, we propose that neuromodulation enables fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input, depending on the task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", represents a central organizing principle of cortical networks, and we discuss first experimental evidence.

Comment: 6 pages + references, 2 figure
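A minimal sketch of the quantitative picture behind the "reverberating regime" idea (parameter values are assumptions, and this is not the paper's code): population activity is modeled as a driven branching process whose effective synaptic strength m sets the intrinsic integration time tau = -1/ln(m), so that small changes in m near 1 interpolate between short (asynchronous-irregular-like) and long (near-critical) timescales.

```python
# Driven branching process: a(t+1) ~ Poisson(m * a(t) + h).
# The effective synaptic strength m controls the integration time.
import numpy as np

rng = np.random.default_rng(0)

def branching_process(m, h=10.0, steps=20000):
    """Simulate population activity; m is the effective synaptic strength."""
    a = np.empty(steps)
    a[0] = h / (1.0 - m)                  # start near the stationary mean
    for t in range(steps - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    return a

def autocorrelation_time(a, max_lag=200):
    """Estimate the exponential decay time of the autocorrelation function."""
    a = a - a.mean()
    lags = np.arange(1, max_lag)
    ac = np.array([np.corrcoef(a[:-k], a[k:])[0, 1] for k in lags])
    valid = ac > 0.05                     # only clearly positive correlations
    slope = np.polyfit(lags[valid], np.log(ac[valid]), 1)[0]
    return -1.0 / slope

for m in (0.90, 0.98, 0.995):             # AI-like, reverberating, near-critical
    a = branching_process(m)
    print(f"m = {m}: measured tau ~ {autocorrelation_time(a):.1f} steps, "
          f"analytic tau = {-1.0 / np.log(m):.1f} steps")
```

The analytic tau diverges as m approaches 1, which is the sense in which small changes in effective synaptic strength can tune integration time over a wide range.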
Analog readout for optical reservoir computers
Reservoir computing is a new, powerful and flexible machine learning technique that is easily implemented in hardware. Recently, by using a time-multiplexed architecture, hardware reservoir computers have reached performance comparable to digital implementations. Operating speeds allowing for real-time information processing have been reached using optoelectronic systems. At present, the main performance bottleneck is the readout layer, which relies on slow digital post-processing. We have designed an analog readout suitable for time-multiplexed optoelectronic reservoir computers, capable of working in real time. The readout has been built and tested experimentally on a standard benchmark task. Its performance is better than that of non-reservoir methods, with ample room for further improvement. The present work thereby overcomes one of the major limitations for the future development of hardware reservoir computers.

Comment: to appear in NIPS 201
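For context, the sketch below is a software emulation (all parameters are assumptions; it is not the experimental optoelectronic setup) of what a time-multiplexed reservoir and its readout compute: a single nonlinear node chopped into virtual nodes by an input mask, and a readout that is simply a trained weighted sum of the virtual-node states. In the paper that weighted sum is realised in analog hardware rather than by digital post-processing; here it is trained offline with ridge regression on the NARMA10 benchmark.

```python
# Emulated time-multiplexed reservoir with a linear readout on NARMA10.
import numpy as np

rng = np.random.default_rng(0)

N, T = 50, 3000                           # virtual nodes, time steps
mask = rng.choice([-0.1, 0.1], size=N)    # input mask defining the virtual nodes

# NARMA10, a standard reservoir-computing benchmark task.
u = rng.uniform(0, 0.5, size=T)
y = np.zeros(T)
for t in range(9, T - 1):
    y[t + 1] = (0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum()
                + 1.5 * u[t] * u[t - 9] + 0.1)

# Simplified virtual-node update: each node mixes its previous value with the
# masked input and passes it through a sin^2 (optoelectronic-like) nonlinearity.
states = np.zeros((T, N))
for t in range(1, T):
    states[t] = np.sin(0.8 * states[t - 1] + mask * u[t]) ** 2

# Linear readout trained offline by ridge regression; an analog readout would
# realise the same weighted sum of virtual-node states in hardware.
X = np.hstack([states, np.ones((T, 1))])
train, test = slice(100, 2000), slice(2000, T)
w = np.linalg.solve(X[train].T @ X[train] + 1e-6 * np.eye(N + 1),
                    X[train].T @ y[train])
pred = X @ w
nmse = np.mean((pred[test] - y[test]) ** 2) / np.var(y[test])
print(f"NARMA10 test NMSE ~ {nmse:.3f}")
```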
Bidirectional deep-readout echo state networks
We propose a deep architecture for the classification of multivariate time series. By means of a recurrent, untrained reservoir, we generate a vectorial representation that embeds the temporal relationships in the data. To improve the memorization capability, we implement a bidirectional reservoir, whose last state also captures past dependencies in the input. We apply dimensionality reduction to the final reservoir states to obtain compressed, fixed-size representations of the time series. These are subsequently fed into a deep feedforward network trained to perform the final classification. We test our architecture on benchmark datasets and on a real-world use case of blood sample classification. Results show that our method performs better than a standard echo state network and, at the same time, achieves results comparable to those of a fully-trained recurrent network, but with a faster training
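A minimal sketch of this kind of pipeline (architecture, hyperparameters, and toy data are all assumptions, not the authors' settings): a fixed echo state reservoir is run over each series and over its time reversal, the two final states are concatenated, compressed with PCA, and classified by a feedforward network.

```python
# Bidirectional reservoir embedding + PCA + feedforward classifier (toy data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_res, n_feat, T = 300, 3, 100

# Fixed, untrained reservoir weights (echo state property via spectral radius).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_feat))

def last_state(series):
    """Run the reservoir over the series and return its final state."""
    x = np.zeros(n_res)
    for u_t in series:
        x = np.tanh(W @ x + W_in @ u_t)
    return x

def bidirectional_embedding(series):
    # Concatenate the final states of the forward and time-reversed passes.
    return np.concatenate([last_state(series), last_state(series[::-1])])

# Toy dataset: two classes of noisy multivariate sinusoids with different
# frequencies (a stand-in for a real benchmark such as blood-sample series).
def make_series(freq):
    t = np.arange(T)
    clean = np.stack([np.sin(freq * t + p) for p in (0, 1, 2)], axis=1)
    return clean + 0.1 * rng.standard_normal((T, n_feat))

X = np.array([bidirectional_embedding(make_series(f))
              for f in ([0.05] * 100 + [0.08] * 100)])
labels = np.array([0] * 100 + [1] * 100)

# Dimensionality reduction of the reservoir states, then a deep readout.
idx = rng.permutation(len(labels))
train, test = idx[:150], idx[150:]
pca = PCA(n_components=20).fit(X[train])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(pca.transform(X[train]), labels[train])
print("held-out accuracy:", clf.score(pca.transform(X[test]), labels[test]))
```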