Deep Chronnectome Learning via Full Bidirectional Long Short-Term Memory Networks for MCI Diagnosis
Brain functional connectivity (FC) extracted from resting-state fMRI
(RS-fMRI) has become a popular approach for disease diagnosis, where
discriminating subjects with mild cognitive impairment (MCI) from normal
controls (NC) is still one of the most challenging problems. Dynamic functional
connectivity (dFC), consisting of time-varying spatiotemporal dynamics, may
characterize "chronnectome" diagnostic information for improving MCI
classification. However, most of the current dFC studies are based on detecting
discrete major brain status via spatial clustering, which ignores rich
spatiotemporal dynamics contained in such chronnectome. We propose Deep
Chronnectome Learning for exhaustively mining the comprehensive information,
especially the hidden higher-level features, i.e., the dFC time series that may
add critical diagnostic power for MCI classification. To this end, we devise a
new Fully-connected Bidirectional Long Short-Term Memory Network (Full-BiLSTM)
to effectively learn the periodic brain status changes using both past and
future information for each brief time segment and then fuse them to form the
final output. We applied our method to a rigorously built, large-scale
multi-site database (164 scans from NCs and 330 from MCIs, which can be
further augmented 25-fold). Our method outperforms other state-of-the-art
approaches with an accuracy of 73.6% under solid cross-validation. We also made
extensive comparisons among multiple LSTM variants. The results suggest that
our method is highly feasible and holds promise for diagnosing other brain
disorders as well.
Comment: The paper has been accepted by MICCAI201
Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contributes in a fundamental manner to brain function. Progress
has been slow in understanding and exploiting the computational power of
recurrent dynamics for two main reasons: nonlinear recurrent networks often
exhibit chaotic behavior, and most known learning rules do not work robustly
in recurrent networks. Here we address both of these problems by
demonstrating how random recurrent networks (RRNs) that initially exhibit
chaotic dynamics can be tuned through a supervised learning rule to generate
locally stable neural patterns of activity that are both complex and robust to
noise. The outcome is a novel neural network regime that exhibits both
transiently stable and chaotic trajectories. We further show that the recurrent
learning rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability in response to stimulus onset.
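The approach described above (tuning an initially chaotic random network so that it reliably replays a complex trajectory) can be sketched with recursive least squares applied to the recurrent weights. The toy below is an assumed, minimal rendering of that idea, not the paper's actual rule or parameters: it records the untrained network's own "innate" trajectory, then trains the recurrent matrix so a perturbed copy of the network tracks that trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, T = 80, 1.5, 0.1, 200  # units, gain (g > 1 tends to chaos), step, steps
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

def run(W, x0):
    # Simulate the rate network x' = -x + W tanh(x) with Euler steps.
    x, rates = x0.copy(), []
    for _ in range(T):
        r = np.tanh(x)
        x = x + dt * (-x + W @ r)
        rates.append(r)
    return np.array(rates)  # (T, N)

x0 = rng.standard_normal(N)
target = run(W, x0)  # "innate" trajectory recorded from the untrained network

# Recursive least squares on the recurrent weights: nudge every row of W so
# that a noise-perturbed run reproduces the recorded target rates.
P = np.eye(N)  # shared inverse-correlation estimate (all rows see the same r)
for epoch in range(5):
    x = x0 + 0.01 * rng.standard_normal(N)  # perturbed initial condition
    for t in range(T):
        r = np.tanh(x)
        x = x + dt * (-x + W @ r)
        # Error against the next recorded rate (clamped at the last step).
        err = np.tanh(x) - target[min(t + 1, T - 1)]
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        W -= np.outer(err, k)

# Drift of a freshly perturbed run from the innate trajectory after training.
drift = np.abs(run(W, x0 + 0.01 * rng.standard_normal(N)) - target).mean()
print(drift)
```

All sizes, the number of training epochs, and the perturbation scale are illustrative; a faithful reproduction would follow the paper's own rule and network configuration.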