A Survey on Deep Learning for Neuroimaging-based Brain Disorder Analysis
Deep learning has recently been applied to the analysis of neuroimages, such as
structural magnetic resonance imaging (MRI), functional MRI, and positron
emission tomography (PET), and has achieved significant performance
improvements over traditional machine learning in computer-aided diagnosis of
brain disorders. This paper reviews the applications of deep learning methods
for neuroimaging-based brain disorder analysis. We first provide a
comprehensive overview of deep learning techniques and popular network
architectures, by introducing various types of deep neural networks and recent
developments. We then review deep learning methods for computer-aided analysis
of four representative brain disorders: Alzheimer's disease, Parkinson's
disease, autism spectrum disorder, and schizophrenia. The first two are
neurodegenerative disorders, while the last two are a neurodevelopmental and a
psychiatric disorder, respectively. More importantly, we discuss the
limitations of existing studies and present possible future directions.
Comment: 30 pages, 7 figures
Learning Neural Markers of Schizophrenia Disorder Using Recurrent Neural Networks
Smart systems that can accurately diagnose patients with mental disorders and
identify effective treatments based on brain functional imaging data have
broad applicability and are attracting much attention. Most previous machine
learning studies use hand-designed features, such as functional connectivity,
which may discard potentially useful information in the spatial relationships
between brain regions and in the temporal profile of the signal in
each region. Here we propose a new method based on recurrent-convolutional
neural networks to automatically learn useful representations from segments of
4-D fMRI recordings. Our goal is to exploit both spatial and temporal
information in the functional MRI movie (at the whole-brain voxel level) for
identifying patients with schizophrenia. Comment: To be published as a workshop
paper at NIPS 2017 Machine Learning for Health (ML4H).
Using Human Brain Activity to Guide Machine Learning
Machine learning is a field of computer science that builds algorithms that
learn. In many cases, machine learning algorithms are used to recreate a human
ability like adding a caption to a photo, driving a car, or playing a game.
While the human brain has long served as a source of inspiration for machine
learning, little effort has been made to directly use data collected from
working brains as a guide for machine learning algorithms. Here we demonstrate
a new paradigm of "neurally-weighted" machine learning, which takes fMRI
measurements of human brain activity from subjects viewing images, and infuses
these data into the training process of an object recognition learning
algorithm to make it more consistent with the human brain. After training,
these neurally-weighted classifiers are able to classify images without
requiring any additional neural data. We show that our neural-weighting
approach can lead to large performance gains when used with traditional machine
vision features, as well as to significant improvements with already
high-performing convolutional neural network features. The effectiveness of
this approach points to a path forward for a new class of hybrid machine
learning algorithms which take both inspiration and direct constraints from
neuronal data. Comment: Supplemental material can be downloaded here:
http://www.wjscheirer.com/misc/activity_weights/fong-et-al-supplementary.pd
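The weighting idea in the abstract above can be sketched in a few lines: a per-sample weight, derived in the paper's setting from fMRI responses to each training image, scales that sample's contribution to the classification loss. The following is a minimal illustrative sketch, not the authors' implementation; the logistic loss and the choice of weights are assumptions.

```python
import numpy as np

def neurally_weighted_loss(scores, labels, neural_weights):
    """Weighted logistic loss: samples paired with stronger (hypothetical)
    fMRI-derived activity weights contribute more to training.
    A sketch only; the paper's exact weighting scheme may differ."""
    probs = 1.0 / (1.0 + np.exp(-scores))              # sigmoid of raw scores
    log_lik = labels * np.log(probs) + (1 - labels) * np.log(1 - probs)
    return -np.sum(neural_weights * log_lik) / np.sum(neural_weights)

# With uniform weights this reduces to the ordinary mean logistic loss.
scores = np.array([0.0, 0.0])
labels = np.array([1.0, 0.0])
weights = np.array([1.0, 1.0])
loss = neurally_weighted_loss(scores, labels, weights)  # log(2), about 0.693
```

At training time, a gradient-based learner would minimize this loss instead of the unweighted one; no neural data is needed at test time, matching the abstract's claim.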
Ensemble learning with 3D convolutional neural networks for connectome-based prediction
The specificity and sensitivity of resting state functional MRI (rs-fMRI)
measurements depend on pre-processing choices, such as the parcellation scheme
used to define regions of interest (ROIs). In this study, we critically
evaluate the effect of brain parcellations on machine learning models applied
to rs-fMRI data. Our experiments reveal a remarkable trend: On average, models
with stochastic parcellations consistently perform as well as models with
widely used atlases at the same spatial scale. We thus propose an ensemble
learning strategy to combine the predictions from models trained on
connectivity data extracted using different (e.g., stochastic) parcellations.
We further present an implementation of our ensemble learning strategy with a
novel 3D Convolutional Neural Network (CNN) approach. The proposed CNN approach
takes advantage of the full-resolution 3D spatial structure of rs-fMRI data and
fits non-linear predictive models. Our ensemble CNN framework overcomes the
limitations of traditional machine learning models for connectomes that often
rely on region-based summary statistics and/or linear models. We showcase our
approach on a classification problem (autism patients versus healthy controls)
and a regression problem (prediction of a subject's age), and report promising
results. Comment: 45 pages, 9 figures, 4 supplementary figures (to appear in
NeuroImage)
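The ensemble strategy described above, combining predictions from models trained on different parcellations, can be sketched as simple probability averaging. This is an illustrative sketch, not the paper's code; `ensemble_predict` and the toy probabilities are invented for the example.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average class probabilities from models trained on different
    parcellations and return the consensus label per subject."""
    stacked = np.stack(prob_list)       # (n_models, n_subjects, n_classes)
    mean_prob = stacked.mean(axis=0)    # consensus probability per subject
    return mean_prob.argmax(axis=1)     # predicted class per subject

# Two hypothetical models disagree on subject 0; averaging resolves it.
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.4, 0.6], [0.1, 0.9]])
preds = ensemble_predict([p1, p2])      # predictions: [0, 1]
```

Decision-level averaging like this is one of the simplest fusion strategies; the paper's framework feeds full-resolution 3D data to CNNs before combining.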
ASD-DiagNet: A hybrid learning approach for detection of Autism Spectrum Disorder using fMRI data
Mental disorders such as Autism Spectrum Disorders (ASD) are heterogeneous
disorders that are notoriously difficult to diagnose, especially in children.
The current psychiatric diagnostic process is based purely on the behavioural
observation of symptomology (DSM-5/ICD-10) and may be prone to over-prescribing
of drugs due to misdiagnosis. In order to move the field in a more
quantitative direction, we need advanced and scalable machine learning
infrastructure that will allow us to identify reliable biomarkers of mental
health disorders. In this paper, we propose a framework called ASD-DiagNet for
classifying subjects with ASD from healthy subjects by using only fMRI data. We
designed and implemented a joint learning procedure using an autoencoder and a
single layer perceptron which results in improved quality of extracted features
and optimized parameters for the model. Further, we designed and implemented a
data augmentation strategy, based on linear interpolation on available feature
vectors, that allows us to produce synthetic datasets needed for training of
machine learning models. The proposed approach is evaluated on a public dataset
provided by Autism Brain Imaging Data Exchange including 1035 subjects coming
from 17 different brain imaging centers. Our machine learning model
outperforms other state-of-the-art methods on 13 imaging centers, with an
increase in classification accuracy of up to 20% and a maximum accuracy of
80%. The machine learning technique presented in this paper, in addition to
yielding better quality, offers a substantial advantage in execution time (40
minutes versus 6 hours for other methods). The implemented code is available
under the GPL license on our lab's GitHub (https://github.com/pcdslab/ASD-DiagNet).
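The data augmentation idea above, synthesizing feature vectors by linear interpolation, can be sketched as follows. This is a hedged sketch of the general technique, not the ASD-DiagNet implementation; the same-class pairing rule and the uniform mixing weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(features, labels, n_new):
    """Create synthetic samples by linearly interpolating between pairs of
    feature vectors that share the same class label."""
    synth_x, synth_y = [], []
    for _ in range(n_new):
        label = rng.choice(labels)                    # pick a class
        idx = np.flatnonzero(labels == label)         # members of that class
        i, j = rng.choice(idx, size=2, replace=True)  # pick a pair
        alpha = rng.uniform(0.0, 1.0)                 # interpolation weight
        synth_x.append(alpha * features[i] + (1 - alpha) * features[j])
        synth_y.append(label)
    return np.array(synth_x), np.array(synth_y)

# Toy 2-D features: class 0 near the origin, class 1 further out.
x = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [6.0, 6.0]])
y = np.array([0, 0, 1, 1])
new_x, new_y = augment(x, y, n_new=5)
```

Each synthetic point lies on the segment between two same-class samples, so class geometry is preserved while the training set grows.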
Classification of EEG-Based Brain Connectivity Networks in Schizophrenia Using a Multi-Domain Connectome Convolutional Neural Network
We exploit altered patterns in brain functional connectivity as features for
automatic discriminative analysis of neuropsychiatric patients. Deep learning
methods have been introduced to functional network classification only very
recently for fMRI, and the proposed architectures essentially focused on a
single type of connectivity measure. We propose a deep convolutional neural
network (CNN) framework for classification of electroencephalogram
(EEG)-derived brain connectome in schizophrenia (SZ). To capture complementary
aspects of disrupted connectivity in SZ, we explore combination of various
connectivity features consisting of time and frequency-domain metrics of
effective connectivity based on vector autoregressive model and partial
directed coherence, and complex network measures of network topology. We design
a novel multi-domain connectome CNN (MDC-CNN) based on a parallel ensemble of
1D and 2D CNNs to integrate the features from various domains and dimensions
using different fusion strategies. Hierarchical latent representations learned
by the multiple convolutional layers from EEG connectivity reveal apparent
group differences between SZ and healthy controls (HC). Results on a large
resting-state EEG dataset show that the proposed CNNs significantly outperform
traditional support vector machine classifiers. The MDC-CNN with combined
connectivity features further improves performance over single-domain CNNs
using individual features, achieving remarkable accuracy with decision-level
fusion. By integrating information from diverse brain connectivity
descriptors, the proposed MDC-CNN is able to accurately discriminate SZ from
HC. The new framework is potentially useful for developing diagnostic tools
for SZ and other disorders. Comment: 15 pages, 9 figures
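As a concrete example of the "complex network measures of network topology" mentioned above, node strength (weighted degree) is one of the simplest descriptors computable from a connectivity matrix. A minimal sketch, not the paper's feature pipeline:

```python
import numpy as np

def node_strength(connectivity):
    """Node strength (weighted degree): the sum of connection weights per
    node, ignoring self-connections on the diagonal."""
    c = connectivity.copy()
    np.fill_diagonal(c, 0.0)   # remove self-connections
    return c.sum(axis=1)

# Toy symmetric 3-node connectome (e.g., coherence between EEG channels).
C = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
strengths = node_strength(C)   # per-node strengths: [1.0, 1.3, 0.7]
```

Vectors of such measures, alongside the time- and frequency-domain connectivity metrics, are the kind of multi-domain features the MDC-CNN fuses.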
Classification of Alzheimer's Disease using fMRI Data and Deep Learning Convolutional Neural Networks
Over the past decade, machine learning techniques, especially predictive
modeling and pattern recognition, have become important tools in the
biomedical sciences, from drug delivery systems to medical imaging, helping
researchers gain a deeper understanding of complex medical problems. Deep
learning is a powerful machine learning approach that performs classification
while extracting high-level features. In this paper, we used a convolutional
neural network to distinguish Alzheimer's brains from normal healthy brains.
The importance of classifying this kind of medical data lies in potentially
developing a predictive model or system to recognize the disease in affected
subjects or to estimate its stage. Classification of clinical data such as
Alzheimer's disease has always been challenging, and the most problematic part
has always been selecting the most discriminative features. Using a
Convolutional Neural Network (CNN) with the well-known LeNet-5 architecture,
we successfully classified functional MRI data of Alzheimer's subjects from
normal controls, reaching an accuracy of 96.85% on test data. This experiment
suggests that the shift- and scale-invariant features extracted by a CNN,
followed by deep learning classification, constitute a powerful method for
distinguishing clinical data from healthy data in fMRI. This approach also
enables us to extend our methodology to predict more complicated systems.
3D Inception-based CNN with sMRI and MD-DTI data fusion for Alzheimer's Disease diagnostics
In the last decade, computer-aided early diagnostics of Alzheimer's Disease
(AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the
subject of extensive research. Some recent studies have shown promising results
in the AD and MCI determination using structural and functional Magnetic
Resonance Imaging (sMRI, fMRI), Positron Emission Tomography (PET) and
Diffusion Tensor Imaging (DTI) modalities. Furthermore, fusion of imaging
modalities in a supervised machine learning framework has shown promising
direction of research.
In this paper we first review major trends in automatic classification
methods such as feature extraction based methods as well as deep learning
approaches in medical image analysis applied to the field of Alzheimer's
Disease diagnostics. Then we propose our own design of a 3D Inception-based
Convolutional Neural Network (CNN) for Alzheimer's Disease diagnostics. The
network is designed with an emphasis on the interior resource utilization and
uses sMRI and DTI modalities fusion on hippocampal ROI. The comparison with the
conventional AlexNet-based network using data from the Alzheimer's Disease
Neuroimaging Initiative (ADNI) dataset (http://adni.loni.usc.edu) demonstrates
significantly better performance of the proposed 3D Inception-based CNN.
Comment: arXiv admin note: substantial text overlap with arXiv:1801.0596
Robust Spatial Filtering with Graph Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have recently led to incredible
breakthroughs on a variety of pattern recognition problems. Banks of finite
impulse response filters are learned on a hierarchy of layers, each
contributing more abstract information than the previous layer. The simplicity
and elegance of the convolutional filtering process make CNNs a natural fit
for structured data such as images, video, or audio, where vertices are
homogeneous in the number, location, and strength of their neighbors. The
vast majority of classification problems, for example in the pharmaceutical,
homeland security, and financial domains are unstructured. As these problems
are formulated into unstructured graphs, the heterogeneity of these problems,
such as number of vertices, number of connections per vertex, and edge
strength, cannot be tackled with standard convolutional techniques. We propose
a novel neural learning framework that is capable of handling both homogeneous
and heterogeneous data, while retaining the benefits of traditional CNN
successes.
Recently, researchers have proposed variations of CNNs that can handle graph
data. In an effort to create learnable filter banks of graphs, these methods
either induce constraints on the data or require preprocessing. As opposed to
spectral methods, our framework, which we term Graph-CNNs, defines filters as
polynomials of functions of the graph adjacency matrix. Graph-CNNs can handle
both heterogeneous and homogeneous graph data, including graphs having entirely
different vertex or edge sets. We perform experiments to validate the
applicability of Graph-CNNs to a variety of structured and unstructured
classification problems and demonstrate state-of-the-art results on document
and molecule classification problems.
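The "filters as polynomials of functions of the graph adjacency matrix" idea above can be illustrated with a polynomial filter y = sum_k h_k A^k x applied to a graph signal. A minimal sketch with hand-picked taps (a trained Graph-CNN would learn them):

```python
import numpy as np

def graph_filter(adjacency, signal, taps):
    """Apply a polynomial graph filter: y = sum_k taps[k] * A^k x.
    Each power of A mixes in information from one hop further away."""
    out = np.zeros_like(signal, dtype=float)
    power = np.eye(adjacency.shape[0])   # A^0 = identity
    for h in taps:
        out += h * (power @ signal)
        power = power @ adjacency        # advance to the next power of A
    return out

# 3-node path graph: 0 -- 1 -- 2, with a unit impulse on node 0.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0])
y = graph_filter(A, x, taps=[0.5, 0.5])  # mix each node with its neighbors
# y = [0.5, 0.5, 0.0]: half the impulse stays, half spreads to node 1.
```

Because the filter is defined purely in terms of A, the same taps apply to graphs with entirely different vertex or edge sets, which is the property the abstract highlights.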
Radiological images and machine learning: trends, perspectives, and prospects
The application of machine learning to radiological images is an increasingly
active research area that is expected to grow in the next five to ten years.
Recent advances in machine learning have the potential to recognize and
classify complex patterns from different radiological imaging modalities such
as x-rays, computed tomography, magnetic resonance imaging and positron
emission tomography imaging. In many applications, machine learning based
systems have shown comparable performance to human decision-making. The
applications of machine learning are the key ingredients of future clinical
decision making and monitoring systems. This review covers the fundamental
concepts behind various machine learning techniques and their applications in
several radiological imaging areas, such as medical image segmentation, brain
function studies and neurological disease diagnosis, as well as computer-aided
systems, image registration, and content-based image retrieval systems.
In parallel, we briefly discuss current challenges and future directions
regarding the application of machine learning in radiological imaging. By
giving insight into how to take advantage of machine-learning-powered
applications, we expect that clinicians will be able to prevent and diagnose
diseases more accurately and efficiently. Comment: 13 figures