Learning to Discover Sparse Graphical Models
We consider structure discovery of undirected graphical models from
observational data. Inferring likely structures from few examples is a complex
task often requiring the formulation of priors and sophisticated inference
procedures. Popular methods rely on estimating a penalized maximum likelihood
of the precision matrix. However, in these approaches structure recovery is an
indirect consequence of the data-fit term, the penalty can be difficult to
adapt for domain-specific knowledge, and the inference is computationally
demanding. By contrast, it may be easier to generate training samples of data
that arise from graphs with the desired structure properties. We propose here
to leverage this latter source of information as training data to learn a
function, parametrized by a neural network, that maps empirical covariance
matrices to estimated graph structures. Learning this function brings two
benefits: it implicitly models the desired structure or sparsity properties to
form suitable priors, and it can be tailored to the specific problem of edge
structure discovery, rather than maximizing data likelihood. Applying this
framework, we find our learnable graph-discovery method trained on synthetic
data generalizes well: identifying relevant edges in both synthetic and real
data, completely unknown at training time. We find that on genetics, brain
imaging, and simulation data we obtain performance generally superior to
analytical methods.
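The training-data generation step described above — sampling graphs with the desired structural properties and pairing each empirical covariance with its ground-truth edge set — can be sketched roughly as follows. This is a minimal NumPy illustration under assumed parameters (graph size, edge probability, and the diagonal-loading constant are arbitrary choices), not the authors' implementation:

```python
import numpy as np

def sample_training_pair(p=10, edge_prob=0.15, n_samples=200, rng=None):
    """Sample one (empirical covariance, adjacency) training pair
    from a random sparse Gaussian graphical model."""
    rng = rng or np.random.default_rng()
    # Random sparse symmetric adjacency with no self-loops.
    upper = rng.random((p, p)) < edge_prob
    adj = np.triu(upper, k=1)
    adj = adj | adj.T
    # Precision matrix supported on the graph; diagonal loading by the
    # largest absolute row sum guarantees positive definiteness.
    prec = adj * rng.uniform(-1.0, 1.0, size=(p, p))
    prec = (prec + prec.T) / 2
    prec += np.eye(p) * (np.abs(prec).sum(axis=1).max() + 1.0)
    # Draw observations from N(0, prec^{-1}) and form the empirical
    # covariance -- the input the learned function would see.
    cov = np.linalg.inv(prec)
    x = rng.multivariate_normal(np.zeros(p), cov, size=n_samples)
    emp_cov = (x.T @ x) / n_samples
    return emp_cov, adj.astype(int)

emp_cov, adj = sample_training_pair(rng=np.random.default_rng(0))
```

A network trained on many such pairs learns to map `emp_cov` to `adj` directly, with the structural prior baked into the sampling distribution rather than into a penalty term.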
Application of Deep Learning to Brain Connectivity Classification in Large MRI Datasets
The use of machine learning for whole-brain classification of magnetic resonance imaging (MRI) data is of clear interest, both for understanding phenotypic differences in brain structure and function and for diagnostic applications. Developments in deep learning over the past decade have revolutionized photographic image and speech recognition, bringing promise of doing the same for other fields of science. However, there are many practical and theoretical challenges in translating such methods to the unique context of MRIs of the brain. This thesis presents a theoretical underpinning for whole-brain classification of extremely large datasets of multi-site MRIs, covering machine learning model architecture, dataset curation, machine learning visualization methods, encoding of MRI data, and feature extraction. To replicate the large sample sizes typically required by deep learning models, a dataset of over 50,000 functional and structural MRIs was amassed from nine different databases, and analyses were conducted on three covariates commonly found across these collections: sex, resting state/task, and autism spectrum disorder. I find that deep learning is not only a method with promise for clinical application in the future, but also a powerful statistical tool for analyzing complex, nonlinear relationships in brain data where conventional statistics may fail. However, results also depend on dataset imbalances, confounds such as motion and head size, the chosen encoding of the MRI data, variability across machine learning models, and the chosen method of visualizing the machine learning results.
In this thesis, I present the following methodological innovations: (1) a method of balancing datasets as a means of regressing out measurable confounding factors; (2) a means of removing spatial biases from deep learning visualization methods; (3) methods of encoding functional and structural datasets as connectivity matrices; (4) the use of ensemble models and convolutional neural network architectures to improve classification accuracy and consistency; (5) adaptation of deep learning visualization methods to study the brain connections utilized in the classification process. Additionally, I discuss interpretations, limitations, and future directions of this research.
Gates Cambridge Scholarship
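One of the encodings mentioned above — representing functional data as a connectivity matrix (innovation 3) — is commonly implemented as the Pearson correlation between ROI time series, with the upper triangle vectorized as a feature vector for a classifier. A minimal sketch of that common construction, not the thesis's actual pipeline (ROI count and time-series length are illustrative placeholders):

```python
import numpy as np

def connectivity_features(timeseries):
    """Encode ROI time series (n_timepoints x n_rois) as a
    functional-connectivity representation: the Pearson correlation
    matrix between ROIs, plus its vectorized upper triangle."""
    conn = np.corrcoef(timeseries, rowvar=False)   # n_rois x n_rois
    iu = np.triu_indices_from(conn, k=1)           # drop the diagonal
    return conn, conn[iu]

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 16))  # 120 timepoints, 16 ROIs (illustrative)
conn, feats = connectivity_features(ts)
```

The symmetric matrix `conn` can feed a convolutional model directly, while the flattened vector `feats` suits conventional classifiers.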
Role of deep learning in infant brain MRI analysis
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges: inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely small dataset sizes, class imbalance problems, and the lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, and why generative models seem to be a particularly strong contender for addressing them.
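Of the open issues listed above, class imbalance in tissue segmentation is often mitigated with inverse-frequency class weights in the training loss, so that rare tissue classes are not drowned out by abundant ones. A minimal sketch of that common weighting scheme (a standard remedy offered for illustration, not a method from the reviewed papers):

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights inversely proportional to voxel
    frequency, scaled so that perfectly balanced classes all get
    weight 1.0; rare classes contribute more to the loss."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    counts = np.maximum(counts, 1.0)            # avoid division by zero
    return counts.sum() / (n_classes * counts)

labels = np.array([0] * 90 + [1] * 9 + [2] * 1)  # imbalanced toy label map
w = inverse_frequency_weights(labels, 3)
```

The resulting vector `w` can be passed as per-class weights to a weighted cross-entropy loss in any deep learning framework.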