
    Hippocampal subfields predict positive symptoms in schizophrenia: First evidence from brain morphometry

    Alterations of hippocampal anatomy have been reported consistently in schizophrenia. In the present study, we used FreeSurfer to determine hippocampal subfield volumes in 21 patients with schizophrenia. On high-resolution magnetic resonance images, we found a negative correlation between the PANSS positive symptom score and bilateral volumes of the hippocampal subfields CA2/3 and CA1. Our observation opens the way for more detailed investigation of the commonly reported hippocampal abnormalities in schizophrenia in terms of specific subfields.
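
    As a purely illustrative sketch of the kind of analysis described above, the snippet below correlates hypothetical subfield volumes with PANSS positive scores; all data, numbers, and variable names are synthetic and not taken from the study.

```python
# Hypothetical illustration: correlating a hippocampal subfield volume with
# PANSS positive scores. All values are synthetic; only the type of analysis
# (a simple correlation across patients) mirrors the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 21                                   # number of patients in the abstract
panss_positive = rng.integers(7, 35, n)  # synthetic PANSS positive scores
ca23_volume = 900 - 5 * panss_positive + rng.normal(0, 40, n)  # synthetic mm^3 volumes

r, p = stats.pearsonr(panss_positive, ca23_volume)
print(f"CA2/3 volume vs. PANSS positive: r = {r:.2f}, p = {p:.3f}")
```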

    Registration of 3D Fetal Brain US and MRI

    We propose a novel method for registration of 3D fetal brain ultrasound (US) with reconstructed magnetic resonance (MR) fetal brain volumes. The reconstructed MR volume is first segmented using a probabilistic atlas, and an ultrasound-like image volume is simulated from this segmentation. The ultrasound-like volume is then affinely aligned with real ultrasound volumes of 27 fetal brains using a robust block-matching approach that can deal with intensity artefacts and missing features in ultrasound images. We show that this approach results in good overlap of four small brain structures. The average of the co-aligned US images shows good correlation with the anatomy of the fetal brain as seen in the MR reconstruction.
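
    The affine-alignment stage could look roughly like the SimpleITK sketch below. The paper uses a robust block-matching approach, so the mutual-information registration here is only a generic stand-in, and all file names are hypothetical.

```python
# Hedged sketch: generic affine alignment of a simulated-US volume to a real US
# volume using SimpleITK. This is a stand-in for the paper's block-matching
# method, intended only to illustrate the pipeline stage. Paths are hypothetical.
import SimpleITK as sitk

fixed = sitk.ReadImage("real_us.nii.gz", sitk.sitkFloat32)               # hypothetical path
moving = sitk.ReadImage("simulated_us_from_mr.nii.gz", sitk.sitkFloat32)  # hypothetical path

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(3),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "simulated_us_aligned.nii.gz")
```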

    A Longitudinal Method for Simultaneous Whole-Brain and Lesion Segmentation in Multiple Sclerosis

    In this paper we propose a novel method for the segmentation of longitudinal brain MRI scans of patients suffering from multiple sclerosis. The method builds upon an existing cross-sectional method for simultaneous whole-brain and lesion segmentation, introducing subject-specific latent variables to encourage temporal consistency between longitudinal scans. It is very generally applicable, as it makes no prior assumptions about the scanner, the MRI protocol, or the number and timing of longitudinal follow-up scans. Preliminary experiments on three longitudinal datasets indicate that the proposed method produces more reliable segmentations and detects disease effects better than the cross-sectional method it builds upon.
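
    A toy numerical illustration of the temporal-consistency idea (not the paper's actual generative model) is sketched below: a subject-specific latent offset is shared across time points instead of fitting each scan independently.

```python
# Toy illustration only: each time point t has a tissue-intensity mean
# mu_t = mu_global + b_subject, and the latent offset b_subject is estimated
# jointly from all scans of the subject rather than scan by scan. All numbers
# are synthetic; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
mu_global = 100.0                    # population-level tissue intensity mean
b_subject = rng.normal(0, 5)         # subject-specific latent offset (unknown)
scans = [mu_global + b_subject + rng.normal(0, 10, size=500) for _ in range(3)]

# Cross-sectional analogue: estimate a mean per scan independently (less consistent)
independent_means = [s.mean() for s in scans]

# Longitudinal analogue: estimate one shared offset from all time points jointly
b_hat = np.mean([s.mean() - mu_global for s in scans])
consistent_means = [mu_global + b_hat] * len(scans)

print("independent estimates:", np.round(independent_means, 2))
print("shared-offset estimates:", np.round(consistent_means, 2))
```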

    Systematic comparison of different techniques to measure hippocampal subfield volumes in ADNI2

    OBJECTIVE: Subfield-specific measurements provide superior information in the early stages of neurodegenerative diseases compared to global hippocampal measurements. The overall goal was to systematically compare the performance of five representative manual and automated T1- and T2-based subfield labeling techniques in a subset of the ADNI2 population. METHODS: The high-resolution T2-weighted hippocampal images (T2-HighRes) and the corresponding T1 images from 106 ADNI2 subjects (41 controls, 57 MCI, 8 AD) were processed as follows. A. T1-based: 1. FreeSurfer + large deformation diffeomorphic metric mapping in combination with shape analysis. 2. FreeSurfer 5.1 subfields using an in-vivo atlas. B. T2-HighRes: 1. Model-based subfield segmentation using an ex-vivo atlas (FreeSurfer 6.0). 2. T2-based automated multi-atlas segmentation combined with similarity-weighted voting (ASHS). 3. Manual subfield parcellation. Multiple regression analyses were used to calculate effect sizes (ES) for group, amyloid positivity in controls, and associations with cognitive/memory performance for each approach. RESULTS: Subfield volumetry was better than whole-hippocampus volumetry for the detection of the mild atrophy differences between controls and MCI (ES: 0.27 vs. 0.11). T2-HighRes approaches outperformed T1 approaches for the detection of early-stage atrophy (ES: 0.27 vs. 0.10), amyloid positivity (ES: 0.11 vs. 0.04), and cognitive associations (ES: 0.22 vs. 0.19). CONCLUSIONS: T2-HighRes subfield approaches outperformed whole-hippocampus and T1 subfield approaches. None of the T2-HighRes methods tested had a clear advantage over the others. Each has strengths and weaknesses that need to be taken into account when deciding which one to use to get the best results from subfield volumetry.
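
    In spirit, the regression-based group effect sizes mentioned in the METHODS could be computed as in the sketch below: a subfield volume is regressed on diagnostic group plus nuisance covariates and the group coefficient is standardized. All data, covariates, and column names are synthetic, and the paper's exact effect-size definition may differ.

```python
# Hedged sketch of a covariate-adjusted group effect size for a subfield volume.
# Data and column names are synthetic; only the analysis pattern is illustrated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 98
df = pd.DataFrame({
    "group": rng.choice(["CN", "MCI"], n),    # controls vs. MCI (synthetic)
    "age": rng.normal(73, 6, n),
    "icv": rng.normal(1500, 120, n),          # intracranial volume, cm^3 (synthetic)
})
df["ca1_volume"] = (620 - 25 * (df.group == "MCI") - 1.5 * (df.age - 73)
                    + 0.05 * (df.icv - 1500) + rng.normal(0, 40, n))

model = smf.ols("ca1_volume ~ C(group) + age + icv", data=df).fit()
coef = model.params["C(group)[T.MCI]"]
es = abs(coef) / np.sqrt(model.mse_resid)     # standardized, covariate-adjusted difference
print(f"adjusted group effect size: {es:.2f}")
```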

    Improved inter-scanner MS lesion segmentation by adversarial training on longitudinal data

    White matter lesion progression is an important biomarker in the follow-up of MS patients and plays a crucial role when deciding the course of treatment. Current automated lesion segmentation algorithms are susceptible to variability in image characteristics related to MRI scanner or protocol differences. We propose a model that improves the consistency of MS lesion segmentations in inter-scanner studies. First, we train a CNN base model to approximate the performance of icobrain, an FDA-approved, clinically available lesion segmentation software. A discriminator model is then trained to predict whether two lesion segmentations are based on scans acquired using the same scanner type or not, achieving 78% accuracy in this task. Finally, the base model and the discriminator are trained adversarially on multi-scanner longitudinal data to improve the inter-scanner consistency of the base model. The performance of the models is evaluated on an unseen dataset containing manual delineations. The inter-scanner variability is evaluated on test-retest data, where the adversarial network produces improved results over the base model and the FDA-approved solution. Comment: MICCAI BrainLes 2019 Workshop.
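
    A minimal PyTorch sketch of the adversarial setup described above follows; the network architectures, tensor shapes, labels, and update scheme are placeholders for illustration, not the paper's models.

```python
# Hedged sketch: a discriminator learns to tell whether two segmentations come
# from the same scanner type, and the segmentation network is then penalized
# when the discriminator succeeds. Toy architectures and synthetic tensors only.
import torch
import torch.nn as nn

seg_net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 1), nn.Sigmoid())         # toy base model
disc = nn.Sequential(nn.Conv3d(2, 8, 3, stride=2), nn.ReLU(),
                     nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                     nn.Linear(8, 1))                              # toy discriminator

opt_seg = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

scan_a = torch.randn(2, 1, 32, 32, 32)   # same subject, scanner A (synthetic)
scan_b = torch.randn(2, 1, 32, 32, 32)   # same subject, scanner B (synthetic)
same_scanner = torch.zeros(2, 1)         # label: 0 = different scanner types

# 1) Update the discriminator to detect scanner differences from paired segmentations.
seg_a, seg_b = seg_net(scan_a).detach(), seg_net(scan_b).detach()
d_loss = bce(disc(torch.cat([seg_a, seg_b], dim=1)), same_scanner)
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# 2) Update the segmentation net so its outputs look scanner-consistent (fool the disc).
seg_a, seg_b = seg_net(scan_a), seg_net(scan_b)
adv_loss = bce(disc(torch.cat([seg_a, seg_b], dim=1)), torch.ones(2, 1))
opt_seg.zero_grad(); adv_loss.backward(); opt_seg.step()
```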

    Partial Volume Segmentation of Brain MRI Scans of any Resolution and Contrast

    Partial voluming (PV) is arguably the last crucial unsolved problem in Bayesian segmentation of brain MRI with probabilistic atlases. PV occurs when voxels contain multiple tissue classes, giving rise to image intensities that may not be representative of any one of the underlying classes. PV is particularly problematic for segmentation when there is a large resolution gap between the atlas and the test scan, e.g., when segmenting clinical scans with thick slices, or when using a high-resolution atlas. In this work, we present PV-SynthSeg, a convolutional neural network (CNN) that tackles this problem by directly learning a mapping between (possibly multi-modal) low-resolution (LR) scans and the underlying high-resolution (HR) segmentations. PV-SynthSeg simulates LR images from HR label maps with a generative model of PV, and can be trained to segment scans of any desired target contrast and resolution, even for previously unseen modalities where neither images nor segmentations are available at training time. PV-SynthSeg does not require any preprocessing, and runs in seconds. We demonstrate the accuracy and flexibility of the method with extensive experiments on three datasets and 2,680 scans. The code is available at https://github.com/BBillot/SynthSeg. Comment: accepted for MICCAI 2020.
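
    A conceptual numpy sketch of the partial-volume simulation idea follows: intensities are drawn from a high-resolution label map and block-averaged down so that low-resolution voxels mix several classes. This is not the PV-SynthSeg generative model; the shapes and intensity parameters are made up.

```python
# Hedged sketch of PV generation: one Gaussian per class at high resolution,
# then block averaging to a coarser grid creates mixed-class (PV) intensities.
import numpy as np

rng = np.random.default_rng(3)
hr_labels = rng.integers(0, 3, size=(64, 64, 64))          # synthetic HR label map
means, stds = np.array([20.0, 100.0, 160.0]), np.array([5.0, 10.0, 10.0])

hr_image = rng.normal(means[hr_labels], stds[hr_labels])   # synthetic HR intensities

factor = 4                                                  # e.g. 1 mm -> 4 mm voxels
lr_shape = tuple(s // factor for s in hr_image.shape)
lr_image = hr_image.reshape(lr_shape[0], factor,
                            lr_shape[1], factor,
                            lr_shape[2], factor).mean(axis=(1, 3, 5))

print("HR:", hr_image.shape, "-> LR with partial volume:", lr_image.shape)
```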

    Validating module network learning algorithms using simulated data

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. Comment: 13 pages, 6 figures + 2 pages, 2 figures of supplementary information.
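
    The conditional-entropy criterion for picking regulators could be illustrated roughly as below: binarize expression, compute the entropy of the module state conditioned on a candidate regulator's state, and prefer regulators that leave little uncertainty. The data and binarization are synthetic stand-ins rather than LeMoNe's actual procedure.

```python
# Hedged sketch of scoring candidate regulators by conditional entropy.
import numpy as np

def conditional_entropy(module_state, regulator_state):
    """H(module | regulator) for two binary vectors, in bits."""
    h = 0.0
    for r in np.unique(regulator_state):
        mask = regulator_state == r
        p_r = mask.mean()
        probs = np.bincount(module_state[mask], minlength=2) / mask.sum()
        probs = probs[probs > 0]
        h += p_r * -(probs * np.log2(probs)).sum()
    return h

rng = np.random.default_rng(4)
module = rng.integers(0, 2, 200)                              # module on/off per sample
good_reg = (module ^ (rng.random(200) < 0.1)).astype(int)     # mostly tracks the module
bad_reg = rng.integers(0, 2, 200)                             # unrelated regulator

print("informative regulator H:", round(conditional_entropy(module, good_reg), 3))
print("random regulator H:     ", round(conditional_entropy(module, bad_reg), 3))
```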

    A semi-supervised large margin algorithm for white matter hyperintensity segmentation

    Precise detection and quantification of white matter hyperintensities (WMH) are of great interest in studies of neurodegenerative diseases (NDs). In this work, we propose a novel semi-supervised large-margin algorithm for the segmentation of WMH. The proposed algorithm optimizes a kernel-based max-margin objective function which aims to maximize the margin averaged over inliers and outliers while exploiting a limited amount of available labelled data. We show that the learning problem can be formulated as a joint framework that learns a classifier and a label assignment simultaneously, and that it can be solved efficiently by an iterative algorithm. We evaluate our method on a database of 280 brain magnetic resonance (MR) images from subjects that either suffered from subjective memory complaints or were diagnosed with NDs. The segmented WMH volumes correlate well with the standard clinical measurement (Fazekas score), and both the qualitative visualization results and quantitative correlation scores of the proposed algorithm outperform other well-known methods for WMH segmentation.
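
    A rough analogue of the alternating scheme (learn a max-margin classifier, then update the label assignment for unlabelled samples) is sketched below using scikit-learn's SVC as a generic stand-in; it does not implement the paper's inlier/outlier margin-averaging objective, and all data is synthetic.

```python
# Hedged analogue: alternate between (1) fitting a max-margin classifier on the
# currently labelled features and (2) assigning pseudo-labels to unlabelled
# samples with the current decision function. Synthetic data, generic SVC.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X_lab = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
y_lab = np.array([0] * 30 + [1] * 30)                 # small labelled set
X_unlab = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])

X, y = X_lab.copy(), y_lab.copy()
for it in range(5):                                   # simple alternating scheme
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)          # (1) learn the classifier
    pseudo = clf.predict(X_unlab)                     # (2) update the label assignment
    X, y = np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo])

print("pseudo-labelled positives:", int(pseudo.sum()), "of", len(pseudo))
```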