103 research outputs found

    SC-VAE: Sparse Coding-based Variational Autoencoder

    Learning rich data representations from unlabeled data is a key challenge towards applying deep learning algorithms in downstream supervised tasks. Several variants of variational autoencoders (VAEs) have been proposed to learn compact data representations by encoding high-dimensional data in a lower-dimensional space. Two main classes of VAE methods may be distinguished depending on the characteristics of the meta-priors that are enforced in the representation learning step. The first class of methods derives a continuous encoding by assuming a static prior distribution in the latent space. The second class instead learns a discrete latent representation using vector quantization (VQ) along with a codebook. However, both classes of methods suffer from certain challenges, which may lead to suboptimal image reconstruction results: the first class suffers from posterior collapse, whereas the second suffers from codebook collapse. To address these challenges, we introduce a new VAE variant, termed SC-VAE (sparse coding-based VAE), which integrates sparse coding within the variational autoencoder framework. Instead of learning a continuous or discrete latent representation, the proposed method learns a sparse data representation that consists of a linear combination of a small number of learned atoms. The sparse coding problem is solved using a learnable version of the iterative shrinkage thresholding algorithm (ISTA). Experiments on two image datasets demonstrate that our model achieves improved image reconstruction results compared to state-of-the-art methods. Moreover, the learned sparse code vectors allow us to perform downstream tasks such as coarse image segmentation through clustering of image patches. Comment: 15 pages, 11 figures, and 3 tables
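    As a rough illustration of the sparse coding step described above, the following is a minimal NumPy sketch of the classical ISTA iteration for a single data vector; the dictionary, sparsity weight, and iteration count are placeholders, and SC-VAE's learnable variant would replace such fixed quantities with trained parameters.

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(x, D, lam=0.1, n_iter=100):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 with plain ISTA.

    x : (d,) data vector, D : (d, k) dictionary of k atoms.
    Returns a sparse code z expressing x as a combination of a few atoms.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Toy usage: encode a random vector with a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
x = rng.standard_normal(64)
z = ista(x, D)
print(np.count_nonzero(z), "active atoms out of", z.size)
```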

    Normative Modeling using Multimodal Variational Autoencoders to Identify Abnormal Brain Structural Patterns in Alzheimer Disease

    Normative modeling is an emerging method for understanding the underlying heterogeneity within brain disorders like Alzheimer Disease (AD) by quantifying how each patient deviates from the expected normative pattern learned from a healthy control distribution. Since AD is a multifactorial disease with more than one biological pathway, multimodal magnetic resonance imaging (MRI) data can provide complementary information about the disease heterogeneity. However, existing deep learning-based normative models on multimodal MRI data use unimodal autoencoders with a single encoder and decoder that may fail to capture the relationships between brain measurements extracted from different MRI modalities. In this work, we propose a multi-modal variational autoencoder (mmVAE)-based normative modeling framework that can capture the joint distribution between different modalities to identify abnormal brain structural patterns in AD. Our multi-modal framework takes as input FreeSurfer-processed brain region volumes from T1-weighted (cortical and subcortical) and T2-weighted (hippocampal) scans of cognitively normal participants to learn the morphological characteristics of the healthy brain. The estimated normative model is then applied to Alzheimer Disease (AD) patients to quantify the deviation in brain volumes and identify the abnormal brain structural patterns due to the effect of the different AD stages. Our experimental results show that modeling the joint distribution between the multiple MRI modalities generates deviation maps that are more sensitive to disease staging within AD, have a better correlation with patient cognition, and result in a higher number of brain regions with statistically significant deviations compared to a unimodal baseline model with all modalities concatenated as a single input. Comment: Medical Imaging Meets NeurIPS workshop in NeurIPS 202
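    As a hedged sketch of how deviation maps are commonly derived in normative modeling (not the authors' code), the snippet below z-scores patient reconstruction errors against the healthy-control error distribution; the array names and the trained model producing the reconstructions are assumptions for illustration.

```python
import numpy as np

def deviation_maps(recon_hc, true_hc, recon_pat, true_pat):
    """Z-score patient reconstruction errors against the healthy-control error
    distribution, one score per brain region (column).

    All arrays are (n_subjects, n_regions); recon_* would come from a trained
    normative autoencoder applied to held-out data.
    """
    err_hc = true_hc - recon_hc                     # healthy-control residuals
    err_pat = true_pat - recon_pat                  # patient residuals
    mu = err_hc.mean(axis=0)                        # expected residual per region
    sigma = err_hc.std(axis=0, ddof=1) + 1e-8       # healthy variability per region
    return (err_pat - mu) / sigma                   # deviation z-scores

# Toy usage with random data standing in for FreeSurfer region volumes.
rng = np.random.default_rng(0)
true_hc = rng.normal(size=(100, 80))
recon_hc = true_hc + rng.normal(scale=0.1, size=true_hc.shape)
true_pat = rng.normal(size=(50, 80))
recon_pat = true_pat + rng.normal(scale=0.3, size=true_pat.shape)
z = deviation_maps(recon_hc, true_hc, recon_pat, true_pat)
print("regions with |z| > 1.96, first patient:", int((np.abs(z[0]) > 1.96).sum()))
```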

    Development of white matter fiber covariance networks supports executive function in youth

    During adolescence, the brain undergoes extensive changes in white matter structure that support cognition. Data-driven approaches applied to cortical surface properties have led the field to understand brain development as a spatially and temporally coordinated mechanism that follows hierarchically organized gradients of change. Although white matter development also appears asynchronous, previous studies have relied largely on anatomical tract-based atlases, precluding a direct assessment of how white matter structure is spatially and temporally coordinated. Harnessing advances in diffusion modeling and machine learning, we identified 14 data-driven patterns of covarying white matter structure in a large sample of youth. Fiber covariance networks aligned with known major tracts, while also capturing distinct patterns of spatial covariance across distributed white matter locations. Most networks showed age-related increases in fiber network properties, which were also related to developmental changes in executive function. This study delineates data-driven patterns of white matter development that support cognition.
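    The abstract does not name the decomposition algorithm; assuming a non-negative matrix factorization, which is one common choice for deriving data-driven covariance networks, a minimal scikit-learn sketch might look as follows (the subjects-by-locations matrix X is a random stand-in for fiber metrics).

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy stand-in for a subjects x white-matter-locations feature matrix
# (e.g., voxel- or fixel-wise fiber metrics); values must be non-negative for NMF.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 2000)))

# Factorize into 14 covariance networks: W holds per-subject network expression,
# H holds the spatial loading of each network across white-matter locations.
model = NMF(n_components=14, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(X)      # (subjects, networks)
H = model.components_           # (networks, locations)

# Age and cognition effects could then be tested on the columns of W
# (one expression value per subject and network).
print(W.shape, H.shape)
```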

    Dynamic U-Net: Adaptively Calibrate Features for Abdominal Multi-organ Segmentation

    U-Net has been widely used for segmenting abdominal organs, achieving promising performance. However, when it is used for multi-organ segmentation, first, it may be limited in exploiting global long-range contextual information due to the implementation of standard convolutions. Second, the use of spatial-wise downsampling (e.g., max pooling or strided convolutions) in the encoding path may lead to the loss of deformable or discriminative details. Third, features upsampled from a higher level are concatenated with those preserved via skip connections; however, repeated downsampling and upsampling operations lead to misalignments between them, and their concatenation degrades segmentation performance. To address these limitations, we propose Dynamically Calibrated Convolution (DCC), Dynamically Calibrated Downsampling (DCD), and Dynamically Calibrated Upsampling (DCU) modules, respectively. The DCC module can utilize global inter-dependencies between spatial and channel features to calibrate these features adaptively. The DCD module enables networks to adaptively preserve deformable or discriminative features during downsampling. The DCU module can dynamically align and calibrate upsampled features to eliminate misalignments before concatenation. We integrated the proposed modules into a standard U-Net, resulting in a new architecture, termed Dynamic U-Net. This architectural design enables U-Net to dynamically adjust features for different organs. We evaluated Dynamic U-Net on two abdominal multi-organ segmentation benchmarks. Dynamic U-Net achieved statistically significant improvements in segmentation accuracy compared with the standard U-Net. Our code is available at https://github.com/sotiraslab/DynamicUNet. Comment: 11 pages, 3 figures, 2 tables
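    The exact DCC/DCD/DCU designs are defined in the linked repository; purely as a hedged sketch, the PyTorch module below shows one plausible way to "calibrate" convolutional features with global channel context (squeeze-and-excitation style). The class name and layer sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CalibratedConv(nn.Module):
    """Illustrative stand-in for a 'dynamically calibrated' convolution:
    a standard conv whose output is re-weighted by global channel context.
    The real DCC module is defined in the authors' repository."""
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gate = nn.Sequential(                      # global context -> channel weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        return y * self.gate(y)                         # adaptively calibrate features

x = torch.randn(2, 32, 64, 64)
print(CalibratedConv(32, 64)(x).shape)                  # torch.Size([2, 64, 64, 64])
```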

    The University of Pennsylvania Glioblastoma (UPenn-GBM) cohort: Advanced MRI, clinical, genomics, & radiomics

    Glioblastoma is the most common aggressive adult brain tumor. Numerous studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of a) the number of subjects, b) consistency of the acquisition protocol, c) data quality, or d) accompanying clinical, demographic, and molecular information. Toward alleviating these limitations, we contribute the University of Pennsylvania Glioblastoma Imaging, Genomics, and Radiomics (UPenn-GBM) dataset, the largest publicly available comprehensive collection to date, comprising 630 patients diagnosed with de novo glioblastoma. The UPenn-GBM dataset includes (a) advanced multi-parametric magnetic resonance imaging scans acquired during routine clinical practice at the University of Pennsylvania Health System, (b) accompanying clinical, demographic, and molecular information, (c) perfusion and diffusion derivative volumes, (d) computationally derived and manually revised expert annotations of tumor sub-regions, as well as (e) quantitative imaging (also known as radiomic) features corresponding to each of these regions. This collection represents our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
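    The dataset ships precomputed radiomic features, so no extraction is required; purely as an illustration of how such features are typically computed from an image and a tumor sub-region mask, here is a minimal pyradiomics sketch with placeholder file names.

```python
from radiomics import featureextractor   # pip install pyradiomics

# Restrict extraction to a couple of common feature classes for brevity.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('firstorder')
extractor.enableFeatureClassByName('glcm')

# Placeholder file names; UPenn-GBM provides co-registered NIfTI volumes and
# tumor sub-region labels that could be passed in here.
features = extractor.execute('subject_T1.nii.gz', 'tumor_core_mask.nii.gz')
for name, value in list(features.items())[:10]:
    print(name, value)
```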

    Deformable Medical Image Registration: A Survey

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this technical report, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently, and the most recent techniques are presented in a systematic fashion. The contribution of this technical report is to provide an extensive account of registration techniques in a systematic manner.
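    To make the surveyed concepts concrete, here is a minimal example (not taken from the report) of one classical deformable method, a B-spline free-form deformation driven by mutual information, using SimpleITK; the file names are placeholders.

```python
import SimpleITK as sitk

# Placeholder inputs: a fixed and a moving image (e.g., two MRI scans).
fixed = sitk.ReadImage('fixed.nii.gz', sitk.sitkFloat32)
moving = sitk.ReadImage('moving.nii.gz', sitk.sitkFloat32)

# Free-form deformation parameterized by a coarse B-spline control-point grid.
transform = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multi-modal criterion
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(transform, inPlace=False)

final_transform = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, 'moving_warped.nii.gz')
```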

    Psychosis brain subtypes validated in first-episode cohorts and related to illness remission: Results from the PHENOM consortium

    Using machine learning, we recently decomposed the neuroanatomical heterogeneity of established schizophrenia to discover two volumetric subgroups: a 'lower brain volume' subgroup (SG1) and a 'higher striatal volume' subgroup (SG2) with otherwise normal brain structure. In this study, we investigated whether the MRI signatures of these subgroups were also already present at the time of the first episode of psychosis (FEP) and whether they were related to clinical presentation and clinical remission over 1, 3, and 5 years. We included 572 FEP and 424 healthy controls (HC) from 4 sites (Sao Paulo, Santander, London, Melbourne) of the PHENOM consortium. Our prior MRI subgrouping models (671 participants; USA, Germany, and China) were applied to both FEP and HC. Participants were assigned to 1 of 4 categories: subgroup 1 (SG1), subgroup 2 (SG2), no subgroup membership ('None'), and mixed SG1 + SG2 subgroups ('Mixed'). Voxel-wise analyses characterized the SG1 and SG2 subgroups. Supervised machine learning analyses characterized baseline and remission signatures related to SG1 and SG2 membership. The two dominant patterns of 'lower brain volume' in SG1 and 'higher striatal volume' (with otherwise normal neuromorphology) in SG2 were already identifiable at the first episode of psychosis. SG1 had a significantly higher proportion of FEP (32%) vs. HC (19%) than SG2 (FEP, 21%; HC, 23%). Clinical multivariate signatures separated the SG1 and SG2 subgroups (balanced accuracy = 64%; p < 0.0001), with SG2 showing higher education but also greater positive psychosis symptoms at first presentation, and an association with symptom remission at 1 year, 5 years, and when timepoints were combined. Neuromorphological subtypes of schizophrenia are already evident at illness onset, are separated by distinct clinical presentations, and are differentially associated with subsequent remission. These results suggest that the subgroups may be underlying risk phenotypes that could be targeted in future treatment trials and are critical to consider when interpreting the neuroimaging literature.
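    As an illustrative sketch of the supervised signature step only (random stand-in data, not the consortium's pipeline), the snippet below cross-validates a linear classifier separating SG1 from SG2 membership using baseline clinical features and reports balanced accuracy.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Toy stand-ins for baseline clinical features and SG1/SG2 membership labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))          # e.g., symptoms, cognition, demographics
y = rng.integers(0, 2, size=300)        # 0 = SG1, 1 = SG2

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
scores = cross_val_score(clf, X, y, cv=5, scoring='balanced_accuracy')
print("balanced accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```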

    MRF-based Diffeomorphic Population Deformable Registration & Segmentation

    In this report, we present a novel framework to deform mutually a population of n examples based on an optimality criterion. The optimality criterion comprises three terms: one that aims to impose local smoothness, a second that aims to minimize the individual distances between all possible pairs of images, and a third that is a global statistical measurement based on a "compactness" criterion. The problem is reformulated using a discrete MRF, where the above constraints are encoded in singleton potentials (the global term) and pair-wise potentials (intra-layer smoothness costs and inter-layer pair-alignment costs). Furthermore, we propose a novel grid-based deformation scheme that guarantees the diffeomorphism of the deformation while being computationally favorable compared to standard deformation methods. Towards addressing large deformations, we propose a compositional approach in which the deformations are recovered through the sub-optimal solutions of successive discrete MRFs. The resulting paradigm is optimized using efficient linear programming. The proposed framework for the mutual deformation of the images is applied to the group-wise registration problem as well as to an atlas-based population segmentation problem. Both artificially generated data with known deformations and real data from medical studies were used to validate the method. Promising results demonstrate the potential of our method.
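    A toy sketch of the discrete-MRF idea (not the report's implementation): each control point of a 1-D deformation grid picks one displacement label so as to minimize a patch-matching data cost plus a smoothness cost between neighbouring control points; for brevity the optimization uses iterated conditional modes rather than the linear-programming scheme of the report.

```python
import numpy as np

# Toy discrete-MRF registration of two 1-D signals.
target = np.sin(np.linspace(0, 4 * np.pi, 200))
source = np.roll(target, 5)                       # source is the target shifted by 5

controls = np.arange(20, 200, 40)                 # control-point positions
labels = np.arange(-8, 9)                         # candidate displacements
half = 10                                         # patch half-width

def data_cost(cp, disp):
    """SSD between the displaced source patch and the target patch at cp."""
    idx = np.arange(cp - half, cp + half)
    s = source[np.clip(idx + disp, 0, 199)]
    return float(((s - target[idx]) ** 2).sum())

unary = np.array([[data_cost(cp, d) for d in labels] for cp in controls])
alpha = 0.5                                       # smoothness weight

# Iterated conditional modes: greedily pick each control point's best label
# given its neighbours (a cheap stand-in for the LP optimization in the report).
assign = np.full(len(controls), len(labels) // 2)
for _ in range(10):
    for i in range(len(controls)):
        pair = np.zeros(len(labels))
        for j in (i - 1, i + 1):
            if 0 <= j < len(controls):
                pair += alpha * np.abs(labels - labels[assign[j]])
        assign[i] = int(np.argmin(unary[i] + pair))

print("recovered displacements:", labels[assign])  # should be close to +5 everywhere
```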

    MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network

    Isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion status are important prognostic markers for glioma. Currently, they are determined using invasive procedures. Our goal was to develop artificial intelligence-based methods to non-invasively determine these molecular alterations from MRI. For this purpose, pre-operative MRI scans of 2648 patients with gliomas (grade II-IV) were collected from Washington University School of Medicine (WUSM; n = 835) and publicly available datasets, namely Brain Tumor Segmentation (BraTS; n = 378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41), The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD; n = 774). A 2.5D hybrid convolutional neural network was proposed to simultaneously localize the tumor and classify its molecular status by leveraging imaging features from MR scans and prior knowledge features from clinical records and tumor location. The models were tested on one internal (TCGA) and two external (WUSM and EGD) test sets. For IDH, the best-performing model achieved areas under the receiver operating characteristic curve (AUROC) of 0.925, 0.874, and 0.933 and areas under the precision-recall curve (AUPRC) of 0.899, 0.702, and 0.853 on the internal, WUSM, and EGD test sets, respectively. For 1p/19q, the best model achieved AUROCs of 0.782, 0.754, and 0.842, and AUPRCs of 0.588, 0.713, and 0.782 on the same three data splits, respectively. The high accuracy of the model on unseen data showcases its generalization capabilities and suggests its potential to perform a 'virtual biopsy' for tailoring treatment planning and the overall clinical management of gliomas.
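    As a hedged sketch of the 2.5D hybrid multi-task idea (not the authors' architecture), the PyTorch model below stacks adjacent slices as input channels, shares an encoder between a tumor-localization head and a molecular-status head, and concatenates prior (clinical/location) features before classification; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class HybridMultiTask(nn.Module):
    """Illustrative 2.5D multi-task sketch: adjacent MRI slices are stacked as
    input channels, a shared encoder feeds both a tumor-localization head and a
    molecular-status head that also ingests prior (clinical/location) features."""
    def __init__(self, n_slices=3, n_prior=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_slices, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)              # coarse tumor localization
        self.cls_head = nn.Sequential(                   # e.g., IDH-mutation logit
            nn.Linear(64 + n_prior, 32), nn.ReLU(inplace=True), nn.Linear(32, 1),
        )

    def forward(self, slices, prior):
        f = self.encoder(slices)                          # (B, 64, H, W)
        seg = self.seg_head(f)                            # (B, 1, H, W)
        pooled = f.mean(dim=(2, 3))                       # global image descriptor
        cls = self.cls_head(torch.cat([pooled, prior], dim=1))
        return seg, cls

model = HybridMultiTask()
seg, cls = model(torch.randn(2, 3, 128, 128), torch.randn(2, 8))
print(seg.shape, cls.shape)   # torch.Size([2, 1, 128, 128]) torch.Size([2, 1])
```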