Normative Modeling using Multimodal Variational Autoencoders to Identify Abnormal Brain Structural Patterns in Alzheimer Disease
Normative modelling is an emerging method for understanding the underlying
heterogeneity within brain disorders like Alzheimer Disease (AD) by quantifying
how each patient deviates from the expected normative pattern that has been
learned from a healthy control distribution. Since AD is a multifactorial
disease with more than one biological pathway, multimodal magnetic resonance
imaging (MRI) data can provide complementary information about the
disease heterogeneity. However, existing deep learning-based normative models
on multimodal MRI data use unimodal autoencoders with a single encoder and
decoder that may fail to capture the relationship between brain measurements
extracted from different MRI modalities. In this work, we propose a multi-modal
variational autoencoder (mmVAE)-based normative modelling framework that can
capture the joint distribution between different modalities to identify
abnormal brain structural patterns in AD. Our multi-modal framework takes as
input Freesurfer processed brain region volumes from T1-weighted (cortical and
subcortical) and T2-weighted (hippocampal) scans of cognitively normal
participants to learn the morphological characteristics of the healthy brain.
The estimated normative model is then applied on Alzheimer Disease (AD)
patients to quantify the deviation in brain volumes and identify the abnormal
brain structural patterns due to the effect of the different AD stages. Our
experimental results show that modeling joint distribution between the multiple
MRI modalities generates deviation maps that are more sensitive to disease
staging within AD, have a better correlation with patient cognition and result
in a higher number of brain regions with statistically significant deviations
compared to a unimodal baseline model with all modalities concatenated as a
single input.
Comment: Medical Imaging Meets NeurIPS workshop in NeurIPS 202
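Stripped of the VAE machinery, the deviation maps this abstract describes reduce to z-scores against a normative distribution estimated from healthy controls only. A minimal numpy sketch of that core idea, using synthetic regional volumes (all names, shapes and thresholds are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows are participants, columns are regional brain volumes.
n_regions = 5
healthy = rng.normal(loc=1.0, scale=0.1, size=(100, n_regions))   # control cohort
patients = rng.normal(loc=0.8, scale=0.15, size=(20, n_regions))  # AD cohort

# Normative model in its simplest form: per-region mean and spread
# estimated from healthy controls only.
mu = healthy.mean(axis=0)
sigma = healthy.std(axis=0, ddof=1)

# Deviation (z-score) maps: how far each patient's regional volume
# lies from the expected healthy pattern.
z = (patients - mu) / sigma

# Regions whose mean absolute deviation exceeds a threshold would be
# flagged as showing abnormal structural patterns.
abnormal = np.abs(z).mean(axis=0) > 1.0
print(abnormal)
```

The paper's contribution is to replace the per-region mean and variance with a latent normative distribution learned jointly across modalities; the z-scoring step above is the shared final stage.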
Learning Disentangled Representations in the Imaging Domain
Disentangled representation learning has been proposed as an approach to
learning general representations even in the absence of, or with limited,
supervision. A good general representation can be fine-tuned for new target
tasks using modest amounts of data, or used directly in unseen domains
achieving remarkable performance in the corresponding task. This alleviation of
the data and annotation requirements offers tantalising prospects for
applications in computer vision and healthcare. In this tutorial paper, we
motivate the need for disentangled representations, present key theory, and
detail practical building blocks and criteria for learning such
representations. We discuss applications in medical imaging and computer vision
emphasising choices made in exemplar key works. We conclude by presenting
remaining challenges and opportunities.
Comment: Submitted. This paper follows a tutorial style but also surveys a considerable (more than 200 citations) number of works.
Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions.
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which requires the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, a comprehensive checklist that summarises common practices for data harmonisation studies is proposed to guide researchers to report their research findings more effectively. Last but not least, flowcharts presenting possible ways for methodology and metric selection are proposed, and the limitations of different methods have been surveyed for future research.
Mixture polarization in inter-rater agreement analysis: a Bayesian nonparametric index
In several observational contexts where different raters evaluate a set of
items, it is common to assume that all raters draw their scores from the same
underlying distribution. However, many scientific works have evidenced
the relevance of individual variability in different types of rating tasks. To
address this issue, the intra-class correlation coefficient (ICC) has been used
as a measure of variability among raters within the Hierarchical Linear Models
approach. A common distributional assumption in this setting is to specify
hierarchical effects as independent and identically distributed draws from a
normal distribution with mean fixed to zero and unknown variance. The present work
aims to overcome this strong assumption in the inter-rater agreement estimation
by placing a Dirichlet Process Mixture over the hierarchical effects' prior
distribution. A new nonparametric index is proposed to quantify raters'
polarization in the presence of group heterogeneity. The model is applied to
a set of simulated experiments and real-world data. Possible future directions
are discussed.
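The Dirichlet Process Mixture prior over rater effects can be illustrated with the truncated stick-breaking construction: instead of a single normal, rater effects are drawn from a random mixture that can group raters into latent clusters. A minimal numpy sketch (concentration, truncation level and group structure are hypothetical choices for illustration, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking construction of Dirichlet process weights."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

alpha = 2.0    # concentration: larger -> more clusters expected a priori
n_atoms = 50   # truncation level for the infinite mixture
weights = stick_breaking(alpha, n_atoms, rng)
atoms = rng.normal(0.0, 1.0, size=n_atoms)  # cluster-specific effect means

# Each rater's effect comes from one latent cluster, e.g. lenient vs
# severe scorers, rather than a single zero-mean normal.
n_raters = 10
cluster = rng.choice(n_atoms, size=n_raters, p=weights / weights.sum())
rater_effects = rng.normal(atoms[cluster], 0.1)
print(rater_effects.round(2))
```

The clustering of `rater_effects` around a few shared atoms is what makes a polarization index definable: heterogeneity among groups of raters becomes an explicit part of the prior rather than noise around zero.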
Functional data analytics for wearable device and neuroscience data
This thesis uses methods from functional data analysis (FDA) to solve problems from three scientific areas of study. While the areas of application are quite distinct, the common thread of functional data analysis ties them together. The first chapter describes interactive open-source software for explaining and disseminating results of functional data analyses. Chapters two and three use curve alignment, or registration, to solve common problems in accelerometry and neuroimaging, respectively. The final chapter introduces a novel regression method for modeling functional outcomes that are trajectories over time.

The first chapter of this thesis details a software package for interactively visualizing functional data analyses. The software is designed to work for a wide range of datasets and several types of analyses. This chapter describes that software and provides an overview of FDA in different contexts.

The second chapter introduces a framework for curve alignment, or registration, of exponential family functional data. The approach distinguishes itself from previous registration methods in its ability to handle dense binary observations with computational efficiency. Motivation comes from the Baltimore Longitudinal Study of Aging, in which accelerometer data provides valuable insights into the timing of sedentary behavior.

The third chapter takes lessons learned about curve registration from the second chapter and uses them to develop methods in an entirely new context: large multisite brain imaging studies. Scanner effects in multisite imaging studies are non-biological variability due to technical differences across sites and scanner hardware. This method identifies and removes scanner effects by registering cumulative distribution functions of image intensity values.

In the final chapter the focus shifts from curve registration to regression. Described within this chapter is an entirely new nonlinear regression framework that draws from both functional data analysis and systems of ordinary differential equations. This model is motivated by the neurobiology of skilled movement, and was developed to capture the relationship between neural activity and arm movement in mice.
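The third chapter's idea of removing scanner effects by registering intensity CDFs can be approximated with quantile mapping: align the empirical distribution of one site to a reference site. A minimal numpy sketch under synthetic data (the shift-and-scale "scanner effect" and all parameters are invented stand-ins, not the thesis's actual model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical intensities from two scanners: site B is shifted and
# scaled relative to reference site A (a crude stand-in for a scanner effect).
site_a = rng.normal(100.0, 15.0, size=5000)
site_b = 1.2 * rng.normal(100.0, 15.0, size=5000) + 10.0

def match_to_reference(x, reference, n_quantiles=101):
    """Map x onto the reference distribution by aligning empirical CDFs
    (quantile mapping), in the spirit of registering intensity CDFs."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    x_q = np.quantile(x, qs)
    ref_q = np.quantile(reference, qs)
    return np.interp(x, x_q, ref_q)

site_b_harmonised = match_to_reference(site_b, site_a)

# After mapping, site B's distribution should sit close to site A's.
print(site_a.mean().round(1), site_b_harmonised.mean().round(1))
```

Quantile mapping is distribution-level only: it removes the marginal scanner effect while leaving the rank order of intensities within each image untouched.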
Evaluating the harmonisation potential of diverse cohort datasets
Data discovery, the ability to find datasets relevant to an analysis, increases scientific opportunity, improves rigour and accelerates activity. Rapid growth in the depth, breadth, quantity and availability of data provides unprecedented opportunities and challenges for data discovery. A potential tool for increasing the efficiency of data discovery, particularly across multiple datasets, is data harmonisation.

A set of 124 variables, identified as being of broad interest to neurodegeneration, were harmonised using the C-Surv data model. Harmonisation strategies used were simple calibration, algorithmic transformation and standardisation to the Z-distribution. Widely used data conventions, optimised for inclusiveness rather than aetiological precision, were used as harmonisation rules. The harmonisation scheme was applied to data from four diverse population cohorts.

Of the 120 variables that were found in the datasets, correspondence between the harmonised data schema and cohort-specific data models was complete or close for 111 (93%). For the remainder, harmonisation was possible with a marginal loss of granularity.

Although harmonisation is not an exact science, sufficient comparability across datasets was achieved to enable data discovery with relatively little loss of informativeness. This provides a basis for further work extending harmonisation to a larger variable list, applying the harmonisation to further datasets, and incentivising the development of data discovery tools.
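Of the harmonisation strategies named in this abstract, standardisation to the Z-distribution is the simplest to make concrete: centre and scale each cohort's variable so values become comparable across studies. A minimal numpy sketch with hypothetical cohort scores (names and scales are invented for illustration, not drawn from the C-Surv variable list):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cohorts measuring the "same" variable on different scales,
# e.g. a cognitive score reported on different ranges across studies.
cohort_scores = {
    "cohort_1": rng.normal(50.0, 10.0, size=200),  # nominally 0-100 scale
    "cohort_2": rng.normal(25.0, 5.0, size=150),   # nominally 0-50 scale
}

# Standardisation to the Z-distribution: centre and scale within each
# cohort so that harmonised values are in comparable units.
harmonised = {
    name: (x - x.mean()) / x.std(ddof=1)
    for name, x in cohort_scores.items()
}

for name, z in harmonised.items():
    print(name, round(z.mean(), 6), round(z.std(ddof=1), 6))
```

This trades absolute interpretability (raw score units) for cross-cohort comparability, which matches the abstract's point that conventions were optimised for inclusiveness rather than aetiological precision.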