Early Prediction of Alzheimer's Disease Dementia Based on Baseline Hippocampal MRI and 1-Year Follow-Up Cognitive Measures Using Deep Recurrent Neural Networks
Multi-modal biological, imaging, and neuropsychological markers have
demonstrated promising performance for distinguishing Alzheimer's disease (AD)
patients from cognitively normal elders. However, it remains difficult to
predict early when and which individuals with mild cognitive impairment (MCI)
will convert to AD dementia. Informed by pattern classification studies which have
demonstrated that pattern classifiers built on longitudinal data could achieve
better classification performance than those built on cross-sectional data, we
develop a deep learning model based on recurrent neural networks (RNNs) to
learn informative representations and temporal dynamics of longitudinal
cognitive measures of individual subjects and combine them with baseline
hippocampal MRI for building a prognostic model of AD dementia progression.
Experimental results on a large cohort of MCI subjects have demonstrated that
the deep learning model could learn informative measures from longitudinal data
for characterizing the progression of MCI subjects to AD dementia, and that the
prognostic model could predict AD progression early with high accuracy.
Comment: Accepted by ISBI 201
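The abstract's core architecture pairs an RNN over longitudinal cognitive measures with baseline hippocampal MRI features. Below is a minimal PyTorch sketch of that idea, not the authors' code; the module names, feature dimensions, and single-logit conversion head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LongitudinalPrognosisNet(nn.Module):
    """Hypothetical model: RNN summary of cognitive visits + baseline MRI features."""
    def __init__(self, n_cognitive=10, n_mri_features=64, hidden=32):
        super().__init__()
        # LSTM over the sequence of visits (baseline + follow-ups)
        self.rnn = nn.LSTM(n_cognitive, hidden, batch_first=True)
        # Classifier on the concatenated [RNN summary, MRI features]
        self.head = nn.Sequential(
            nn.Linear(hidden + n_mri_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # logit for MCI-to-AD conversion
        )

    def forward(self, cognitive_seq, mri_features):
        # cognitive_seq: (batch, n_visits, n_cognitive)
        # mri_features:  (batch, n_mri_features), e.g. from a hippocampal CNN
        _, (h_n, _) = self.rnn(cognitive_seq)
        fused = torch.cat([h_n[-1], mri_features], dim=1)
        return self.head(fused)

model = LongitudinalPrognosisNet()
logits = model(torch.randn(4, 3, 10), torch.randn(4, 64))  # 4 subjects, 3 visits
```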
Non-rigid image registration using fully convolutional networks with deep self-supervision
We propose a novel non-rigid image registration algorithm that is built upon
fully convolutional networks (FCNs) to optimize and learn spatial
transformations between pairs of images to be registered. Unlike most
existing deep learning based image registration methods, which learn spatial
transformations from training data with known correspondences, our method
directly estimates spatial transformations between pairs of images by
maximizing an image-wise similarity metric between fixed and deformed moving
images, similar to conventional image registration algorithms.
At the same time, our method also learns FCNs that encode the spatial
transformations at the same spatial resolution as the images to be registered,
rather than learning coarse-grained spatial transformation information. The
image registration is implemented in a multi-resolution image registration
framework to jointly optimize and learn spatial transformations and FCNs at
different resolutions with deep self-supervision through typical feedforward
and backpropagation computation. Since our method simultaneously optimizes and
learns spatial transformations for the image registration, it can be directly
used to register a pair of images. Moreover, registering a set of images also
serves as a training procedure for the FCNs, so the trained FCNs can register
new images by a single feedforward computation without any optimization. The
proposed method has been evaluated for registering 3D structural brain
magnetic resonance (MR) images and obtained better performance than
state-of-the-art image registration algorithms.
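The self-supervised objective described here can be sketched as follows: a predicted displacement field warps the moving image, and an image-wise similarity plus a smoothness penalty is optimized directly. The sketch below uses 2D images and plain MSE for brevity; the paper registers 3D MR images, and its similarity metric and regularization may differ.

```python
import torch
import torch.nn.functional as F

def warp(moving, displacement):
    """Resample `moving` (B, 1, H, W) with a displacement field (B, 2, H, W)
    expressed in normalized [-1, 1] coordinates."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
    grid = identity + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(fixed, moving, displacement, smooth_weight=0.1):
    warped = warp(moving, displacement)
    similarity = F.mse_loss(warped, fixed)      # image-wise similarity term
    # Penalize spatial gradients of the field to favor smooth transformations
    dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]
    dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]
    return similarity + smooth_weight * (dx.abs().mean() + dy.abs().mean())
```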
Feature-Fused Context-Encoding Network for Neuroanatomy Segmentation
Automatic segmentation of fine-grained brain structures remains a challenging
task. Current segmentation methods mainly utilize 2D and 3D deep neural
networks. The 2D networks take image slices as input to produce coarse
segmentation in less processing time, whereas the 3D networks take whole image
volumes as input to generate finely detailed segmentation at a higher
computational cost. In order to obtain accurate fine-grained segmentation efficiently, in
this paper, we propose an end-to-end Feature-Fused Context-Encoding Network for
brain structure segmentation from MR (magnetic resonance) images. Our model is
implemented based on a 2D convolutional backbone, which integrates a 2D
encoding module to acquire planar image features and a spatial encoding module
to extract spatial context information. A global context encoding module is
further introduced to capture global context semantics from the fused 2D
encoding and spatial features. The proposed network aims to fully leverage the
global anatomical prior knowledge learned from context semantics, which is
represented by a structure-aware attention factor to recalibrate the outputs of
the network. In this way, the network is guaranteed to be aware of
class-dependent feature maps, which facilitates the segmentation. We evaluate
our model on the 2012 Brain Multi-Atlas Labelling Challenge dataset for
segmentation of 134 fine-grained structures. In addition, we validate our
network on segmentation of 27 coarse structures. Experimental results have
demonstrated that our model can achieve improved performance compared with
state-of-the-art approaches.
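The recalibration step described in the abstract resembles squeeze-and-excitation style gating: a global context vector is pooled from the fused features and mapped to attention factors that rescale the feature maps. The sketch below is an illustrative stand-in for the context-encoding module, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ContextEncodingRecalibration(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global context semantics
        self.fc = nn.Sequential(
            nn.Linear(channels, channels),
            nn.Sigmoid(),                     # attention factor in (0, 1)
        )

    def forward(self, fused_features):
        # fused_features: (B, C, H, W) from the 2D + spatial encoding modules
        B, C, _, _ = fused_features.shape
        context = self.pool(fused_features).view(B, C)
        attention = self.fc(context).view(B, C, 1, 1)
        return fused_features * attention     # recalibrated feature maps

module = ContextEncodingRecalibration(channels=64)
out = module(torch.randn(2, 64, 128, 128))
```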
Identification of multi-scale hierarchical brain functional networks using deep matrix factorization
We present a deep semi-nonnegative matrix factorization method for
identifying subject-specific functional networks (FNs) at multiple spatial
scales, with a hierarchical organization, from resting-state fMRI data. The
method jointly detects FNs at multiple scales within a hierarchical
factorization framework, enhanced by group sparsity regularization that helps
identify subject-specific FNs without losing inter-subject comparability. The proposed method has been
validated for predicting subject-specific functional activations based on
functional connectivity measures of the hierarchical multi-scale FNs of the
same subjects. Experimental results have demonstrated that our method could
obtain subject-specific multi-scale hierarchical FNs, and that their functional
connectivity measures across different scales could better predict
subject-specific functional activations than those obtained by alternative
techniques.
Comment: Accepted by MICCAI 201
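Conceptually, a two-level factorization of this kind can be written as X ≈ U1 U2 V2 with nonnegative FN loadings. The toy sketch below optimizes such a deep semi-NMF with a simple sparsity term; all sizes are arbitrary, and the group sparsity regularization across subjects used by the paper is omitted.

```python
import torch

T, voxels, k1, k2 = 200, 1000, 50, 17   # illustrative sizes
X = torch.randn(T, voxels)              # rsfMRI data: time points x voxels
U1 = torch.randn(T, k1, requires_grad=True)       # mixing matrix, level 1
U2 = torch.randn(k1, k2, requires_grad=True)      # mixing matrix, level 2
V2 = torch.rand(k2, voxels, requires_grad=True)   # coarse-scale FN loadings

opt = torch.optim.Adam([U1, U2, V2], lr=1e-2)
for step in range(500):
    V2_pos = torch.relu(V2)                       # enforce nonnegativity
    recon = U1 @ (U2 @ V2_pos)                    # deep factorization X ~ U1 U2 V2
    loss = ((X - recon) ** 2).mean() + 1e-3 * V2_pos.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Finer-scale FNs at the intermediate level can be read off as U2 @ V2_pos,
# giving a hierarchy of networks across the two scales.
```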
A deep learning model for early prediction of Alzheimer's disease dementia based on hippocampal MRI
Introduction: It is challenging at baseline to predict when and which
individuals who meet criteria for mild cognitive impairment (MCI) will
ultimately progress to Alzheimer's disease (AD) dementia. Methods: A deep
learning method is developed and validated based on MRI scans of 2146 subjects
(803 for training and 1343 for validation) to predict MCI subjects' progression
to AD dementia in a time-to-event analysis setting. Results: The deep learning
time-to-event model predicted individual subjects' progression to AD dementia
with a concordance index (C-index) of 0.762 on 439 ADNI testing MCI subjects
with follow-up duration from 6 to 78 months (quartiles: [24, 42, 54]) and a
C-index of 0.781 on 40 AIBL testing MCI subjects with follow-up duration from
18 to 54 months (quartiles: [18, 36, 54]). The predicted progression risk also
clustered individual subjects into subgroups with significant differences in
their progression time to AD dementia (p<0.0002). Improved performance for
predicting progression to AD dementia (C-index=0.864) was obtained when the
deep learning based progression risk was combined with baseline clinical
measures. Conclusion: Our method provides a cost-effective and accurate means
of prognosis and could potentially facilitate enrollment in clinical trials of
individuals likely to progress within a specific temporal period.
Comment: Accepted for publication in Alzheimer's & Dementia
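For reference, the C-index reported above measures how often predicted risks correctly order pairs of subjects by observed progression time under right-censoring. A generic implementation (not the authors' evaluation code) looks like this:

```python
import numpy as np

def concordance_index(times, risks, events):
    """times: observed follow-up; risks: predicted risk; events: 1 if progressed."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i progressed before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([24, 42, 54, 36]); events = np.array([1, 1, 0, 1])
risks = np.array([0.9, 0.4, 0.2, 0.6])
print(concordance_index(times, risks, events))  # 1.0: risks order times perfectly
```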
Optimal Task Scheduling in Communication-Constrained Mobile Edge Computing Systems for Wireless Virtual Reality
Mobile edge computing (MEC) is expected to be an effective solution to
deliver 360-degree virtual reality (VR) videos over wireless networks. In
contrast to the previous computation-constrained MEC framework, which reduces
computation-resource consumption at the mobile VR device by increasing
communication-resource consumption, in this paper we develop a
communication-constrained MEC framework that reduces communication-resource
consumption by increasing computation-resource consumption and exploiting the
caching resources at the mobile VR device. Specifically, according to the task
modularization, the MEC server can deliver only the components that have not
been stored in the VR device, and the VR device then uses the received
components together with the corresponding cached components to construct the
task, resulting in low communication-resource consumption but high delay.
Alternatively, the MEC server can compute the task by itself to reduce the
delay; however, this consumes more communication resources because the entire
task must be delivered.
Therefore, we propose a task scheduling strategy to decide which computation
model the MEC server should operate, in order to minimize the
communication-resource consumption under the delay constraint. Finally, we
discuss the tradeoffs among communications, computing, and caching in the
proposed system.
Comment: Submitted to APCC 201
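The scheduling decision can be illustrated with a toy model: for each task, pick the mode that spends fewer communication resources while still meeting the delay constraint. The cost and delay expressions below are simplifying assumptions for illustration, not the paper's formulation.

```python
def schedule_task(missing_bits, full_bits, rate, local_compute_delay, delay_max):
    # Mode A: deliver only non-cached components, then compute on the device
    delay_a, comm_a = missing_bits / rate + local_compute_delay, missing_bits
    # Mode B: MEC server computes the task and delivers the full result
    delay_b, comm_b = full_bits / rate, full_bits
    feasible = [(comm, mode) for comm, delay, mode in
                [(comm_a, delay_a, "device"), (comm_b, delay_b, "server")]
                if delay <= delay_max]
    if not feasible:
        raise ValueError("delay constraint cannot be met")
    return min(feasible)[1]   # minimize communication-resource consumption

# Example: caching makes device-side computing cheaper in bits but slower.
print(schedule_task(missing_bits=2e6, full_bits=10e6, rate=1e6,
                    local_compute_delay=3.0, delay_max=6.0))  # -> 'device'
```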
Ordinal Distribution Regression for Gait-based Age Estimation
Computer vision researchers prefer to estimate age from face images because
facial features provide useful information. However, estimating age from face
images becomes challenging when people are distant from the camera or occluded.
A person's gait is a unique biometric feature that can be perceived efficiently
even at a distance. Thus, gait can be used to predict age when face images are
not available. However, existing gait-based classification or regression
methods ignore the ordinal relationship of different ages, which is an
important clue for age estimation. This paper proposes an ordinal distribution
regression with a global and local convolutional neural network for gait-based
age estimation. Specifically, we decompose gait-based age regression into a
series of binary classifications to incorporate the ordinal age information.
Then, an ordinal distribution loss is proposed to consider the inner
relationships among these classifications by penalizing the distribution
discrepancy between the estimated value and the ground truth. In addition, our
neural network comprises one global and three local sub-networks, and is thus
capable of learning the global structure as well as local details from the
head, body, and feet. Experimental results indicate that the proposed approach
outperforms state-of-the-art gait-based age estimation methods on the OULP-Age
dataset.
Comment: Accepted by the journal "SCIENCE CHINA Information Sciences"
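The ordinal decomposition described above can be sketched concretely: age regression becomes a stack of binary "is age > k?" classifications whose decisions are summed back into an age estimate. The loss and feature extractor below are simplified stand-ins for the paper's global and local CNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 90  # maximum age considered; one binary classifier per threshold k

def ordinal_targets(ages):
    # age 25 -> [1]*25 + [0]*(K-25): entry k answers "is age > k?"
    thresholds = torch.arange(K).unsqueeze(0)
    return (ages.unsqueeze(1) > thresholds).float()

classifier = nn.Linear(128, K)         # stand-in for the global+local CNN
features = torch.randn(8, 128)         # hypothetical gait features
ages = torch.randint(10, 80, (8,))

loss = F.binary_cross_entropy_with_logits(classifier(features),
                                          ordinal_targets(ages))
# Predicted age = number of thresholds the subject is estimated to exceed
predicted_age = (torch.sigmoid(classifier(features)) > 0.5).sum(dim=1)
```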
Unsupervised deep learning for individualized brain functional network identification
A novel unsupervised deep learning method is developed to identify
individual-specific, large-scale brain functional networks (FNs) from
resting-state fMRI (rsfMRI) in an end-to-end learning fashion. Our method
leverages deep Encoder-Decoder networks and conventional brain decomposition
models to identify individual-specific FNs in an unsupervised learning
framework and facilitate fast inference for new individuals with one forward
pass of the deep network. Particularly, convolutional neural networks (CNNs)
with an Encoder-Decoder architecture are adopted to identify
individual-specific FNs from rsfMRI data by optimizing their data fitting and
sparsity regularization terms that are commonly used in brain decomposition
models. Moreover, a time-invariant representation learning module is designed
to learn features invariant to the temporal order of the rsfMRI time points.
The proposed method has been validated based on a large rsfMRI dataset and
experimental results have demonstrated that our method could obtain
individual-specific FNs which are consistent with well-established FNs and are
informative for predicting brain age, indicating that the identified
individual-specific FNs truly capture the underlying variability of
individualized functional neuroanatomy.
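A minimal sketch of such an unsupervised objective, under loose assumptions about the architecture: an encoder maps each voxel's time series to nonnegative FN loadings, and the data-fitting and sparsity terms of brain decomposition models drive the training. The time-invariant representation module is omitted for brevity, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

T, voxels, n_fns = 200, 1000, 17
X = torch.randn(T, voxels)                     # one subject's rsfMRI data

# Encoder maps each voxel's time series to nonnegative FN loadings
encoder = nn.Sequential(nn.Linear(T, 64), nn.ReLU(),
                        nn.Linear(64, n_fns), nn.Softplus())
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    V = encoder(X.t()).t()                     # FN loadings: (n_fns, voxels)
    U = X @ torch.linalg.pinv(V)               # least-squares time courses
    # Data-fitting term plus a sparsity penalty on the FN loadings
    loss = ((X - U @ V) ** 2).mean() + 1e-3 * V.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, FNs for a new subject come from one forward pass of the encoder.
```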
MDReg-Net: Multi-resolution diffeomorphic image registration using fully convolutional networks with deep self-supervision
We present a diffeomorphic image registration algorithm to learn spatial
transformations between pairs of images to be registered using fully
convolutional networks (FCNs) under a self-supervised learning setting. The
network is trained to estimate diffeomorphic spatial transformations between
pairs of images by maximizing an image-wise similarity metric between fixed and
warped moving images, similar to conventional image registration algorithms. It
is implemented in a multi-resolution image registration framework to optimize
and learn spatial transformations at different image resolutions jointly and
incrementally with deep self-supervision in order to better handle large
deformation between images. A spatial Gaussian smoothing kernel is integrated
with the FCNs to yield sufficiently smooth deformation fields to achieve
diffeomorphic image registration. Particularly, spatial transformations learned
at coarser resolutions are utilized to warp the moving image, which is
subsequently used for learning incremental transformations at finer
resolutions. This procedure proceeds recursively to the full image resolution
and the accumulated transformations serve as the final transformation to warp
the moving image at the finest resolution. Experimental results for
registering high-resolution 3D structural brain magnetic resonance (MR) images
have demonstrated that image registration networks trained by our method
obtain robust, diffeomorphic image registration results within seconds, with
improved accuracy compared with state-of-the-art image registration
algorithms.
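The coarse-to-fine scheme can be sketched as follows: the field from a coarser level is upsampled, used to warp the moving image, and combined with the incremental field learned at the next level, with Gaussian smoothing applied to each increment. The `warp` argument is assumed to behave as in the earlier registration sketch, `nets` stands in for the per-level FCNs, and the additive combination of fields is a simplification of true field composition.

```python
import torch
import torch.nn.functional as F

def gaussian_smooth(field, kernel):
    # Depthwise convolution with a Gaussian kernel, one group per component
    pad = kernel.shape[-1] // 2
    weight = kernel.expand(field.shape[1], 1, *kernel.shape)
    return F.conv2d(field, weight, padding=pad, groups=field.shape[1])

def multires_register(fixed_pyramid, moving_pyramid, nets, kernel, warp):
    """Coarse-to-fine registration; pyramids are ordered coarse to fine.
    Fields are in normalized coordinates, so upsampling needs no rescaling."""
    total = None
    for fixed, moving, net in zip(fixed_pyramid, moving_pyramid, nets):
        if total is not None:
            total = F.interpolate(total, size=fixed.shape[2:],
                                  mode="bilinear", align_corners=True)
            moving = warp(moving, total)       # warp with accumulated field
        increment = gaussian_smooth(net(fixed, moving), kernel)
        total = increment if total is None else total + increment
    return total   # final field warps the moving image at full resolution
```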
ACEnet: Anatomical Context-Encoding Network for Neuroanatomy Segmentation
Segmentation of brain structures from magnetic resonance (MR) scans plays an
important role in the quantification of brain morphology. Since 3D deep
learning models suffer from high computational cost, 2D deep learning methods
are favored for their computational efficiency. However, existing 2D deep
learning methods are not equipped to effectively capture 3D spatial contextual
information that is needed to achieve accurate brain structure segmentation. In
order to overcome this limitation, we develop an Anatomical Context-Encoding
Network (ACEnet) that incorporates 3D spatial and anatomical contexts in 2D
convolutional neural networks (CNNs) for efficient and accurate segmentation
of brain structures from MR scans. ACEnet consists of 1) an anatomical context
encoding module that incorporates anatomical information in 2D CNNs and 2) a
spatial context encoding module that integrates 3D image information in 2D
CNNs. In
addition, a skull stripping module is adopted to guide the 2D CNNs to attend to
the brain. Extensive experiments on three benchmark datasets have demonstrated
that our method outperforms state-of-the-art alternative methods for brain
structure segmentation in terms of both computational efficiency and
segmentation accuracy. Source code of this study is available at
https://github.com/ymli39/ACEnet-for-Neuroanatomy-Segmentation
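One common way to give a 2D network the 3D spatial context the abstract refers to is to feed a slab of neighboring slices as input channels while predicting labels for the center slice. The sketch below is a generic illustration of that idea, with hypothetical layer sizes; see the linked repository for the authors' actual code.

```python
import torch
import torch.nn as nn

class SlabInput2DNet(nn.Module):
    def __init__(self, n_slices=7, n_classes=134):
        super().__init__()
        # 2D backbone whose input channels are the stacked neighboring slices
        self.backbone = nn.Sequential(
            nn.Conv2d(n_slices, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, n_classes, 1)   # labels for the center slice

    def forward(self, slab):                       # slab: (B, n_slices, H, W)
        return self.head(self.backbone(slab))

slab = torch.randn(1, 7, 256, 256)                 # 7 neighboring MR slices
logits = SlabInput2DNet()(slab)                    # (1, 134, 256, 256)
```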