2,075 research outputs found

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not done effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be labelled incorrectly by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is evaluated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
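
    As an illustration of the EM backbone underlying the cortical segmentation described above, the following minimal sketch fits a two-class (grey/white matter) Gaussian mixture to voxel intensities. The dissertation's explicit partial-volume correction is omitted, and the initialisation and synthetic intensity values are assumptions for demonstration only.

    import numpy as np

    def em_two_class(intensities, n_iter=50, tol=1e-6):
        """Fit a two-component 1D Gaussian mixture to voxel intensities with EM."""
        x = np.asarray(intensities, dtype=float).ravel()
        mu = np.percentile(x, [25.0, 75.0])   # crude grey/white initialisation
        var = np.array([x.var(), x.var()])
        w = np.array([0.5, 0.5])              # mixture weights
        prev_ll = -np.inf
        for _ in range(n_iter):
            # E-step: posterior responsibility of each class at every voxel.
            dens = np.stack([
                w[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                / np.sqrt(2.0 * np.pi * var[k])
                for k in range(2)
            ])
            total = dens.sum(axis=0)
            resp = dens / total
            # M-step: re-estimate weights, means, and variances.
            nk = resp.sum(axis=1)
            w = nk / x.size
            mu = (resp @ x) / nk
            var = np.array([(resp[k] * (x - mu[k]) ** 2).sum() / nk[k]
                            for k in range(2)])
            ll = np.log(total).sum()          # log-likelihood increases monotonically
            if ll - prev_ll < tol:
                break
            prev_ll = ll
        return mu, var, w, resp

    # Synthetic "grey" and "white" matter intensities (hypothetical values).
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(80, 8, 5000), rng.normal(120, 10, 5000)])
    mu, var, w, resp = em_two_class(x)
    print("estimated class means:", mu.round(1))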

    Manual-protocol inspired technique for improving automated MR image segmentation during label fusion

    Recent advances in multi-atlas based algorithms address many of the previous limitations of model-based and probabilistic segmentation methods. However, at the label fusion stage, a majority of algorithms focus primarily on optimizing weight-maps associated with the atlas library based on a theoretical objective function that approximates the segmentation error. In contrast, we propose a novel method, Autocorrecting Walks over Localized Markov Random Fields (AWoL-MRF), that aims at mimicking the sequential process of manual segmentation, the gold standard for virtually all segmentation methods. AWoL-MRF begins with a set of candidate labels generated by a multi-atlas segmentation pipeline as an initial label distribution and refines low-confidence regions based on a localized Markov random field (L-MRF) model using a novel sequential inference process (walks). We show that AWoL-MRF produces state-of-the-art results with superior accuracy and robustness with a small atlas library compared to existing methods. We validate the proposed approach by performing hippocampal segmentations on three independent datasets: (1) the Alzheimer's Disease Neuroimaging Initiative (ADNI); (2) a First Episode Psychosis patient cohort; and (3) a cohort of preterm neonates scanned early in life and at term-equivalent age. We assess the improvement in performance qualitatively as well as quantitatively by comparing AWoL-MRF with majority vote, STAPLE, and Joint Label Fusion methods. AWoL-MRF reaches a maximum accuracy of 0.881 (dataset 1), 0.897 (dataset 2), and 0.807 (dataset 3) based on the Dice similarity coefficient metric, offering significant performance improvements with a smaller atlas library (< 10) over the compared methods. We also evaluate the diagnostic utility of AWoL-MRF by analyzing the volume differences per disease category in the ADNI1: Complete Screening dataset. We have made the source code for AWoL-MRF public at https://github.com/CobraLab/AWoL-MRF.
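
    For context, the sketch below implements the majority-vote baseline that multi-atlas pipelines (and the comparisons in this paper) start from, together with a simple agreement-based confidence map of the kind AWoL-MRF's localized refinement would revisit. The 0.7 threshold and toy label volumes are illustrative assumptions, not the paper's settings.

    import numpy as np

    def majority_vote(candidate_labels):
        """candidate_labels: (n_atlases, X, Y, Z) integer label volumes."""
        labels = np.asarray(candidate_labels)
        n_atlases = labels.shape[0]
        classes = np.unique(labels)
        # Vote count per class at every voxel.
        votes = np.stack([(labels == c).sum(axis=0) for c in classes])
        winner = classes[votes.argmax(axis=0)]
        # Confidence = fraction of atlases agreeing with the winning label.
        confidence = votes.max(axis=0) / n_atlases
        return winner, confidence

    rng = np.random.default_rng(1)
    atlas_labels = rng.integers(0, 2, size=(7, 16, 16, 16))  # toy binary labels
    fused, conf = majority_vote(atlas_labels)
    low_confidence = conf < 0.7  # voxels a localized refinement would target
    print("low-confidence voxels:", int(low_confidence.sum()))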

    Computational Modeling for Abnormal Brain Tissue Segmentation, Brain Tumor Tracking, and Grading

    This dissertation proposes novel texture feature-based computational models for the quantitative analysis of abnormal tissues in two neurological disorders: brain tumor and stroke. Brain tumors are masses of cells undergoing uncontrolled growth in brain tissue and are among the major causes of cancer death. Brain strokes, on the other hand, occur when the blood supply is suddenly interrupted, damaging normal brain tissue and frequently causing death or persistent disability. Clinical management of brain tumors and stroke lesions critically depends on robust quantitative analysis using different imaging modalities, including Magnetic Resonance (MR) and Digital Pathology (DP) images. Due to uncontrolled growth and infiltration into the surrounding tissues, tumor regions appear with significant texture variation both within a static MRI volume and across longitudinal imaging studies. Consequently, this study develops computational models using novel texture features to segment abnormal brain tissues (tumors and stroke lesions), track changes in tumor volume in longitudinal images, and grade tumors in MR images. Manual delineation and analysis of these abnormal tissues at large scale are tedious, error-prone, and subject to inter-observer variability. Therefore, efficient computational models for the robust segmentation of different abnormal tissues are required to support diagnosis and analysis. In this study, brain tissues are characterized using novel computational modeling of multi-fractal texture features for multi-class brain tumor tissue segmentation (BTS), and the method is extended to ischemic stroke lesions in MRI. The robustness of the proposed segmentation methods is evaluated on large private and public-domain clinical datasets, on which they offer performance competitive with state-of-the-art methods. Further, I analyze the dynamic texture behavior of tumor volume in longitudinal imaging and develop a post-processing framework using three-dimensional (3D) texture features. These post-processing methods are shown to reduce false positives in the BTS results and improve the overall segmentation in longitudinal imaging. Furthermore, using these improved segmentation results, changes in tumor volume are classified into three categories (stable, progression, and shrinkage) based on the volumetric changes of different tumor tissues in longitudinal images. This study also investigates, for the first time in the literature, non-invasive glioma grading using structural MRI alone; such non-invasive grading may be useful before an invasive biopsy is recommended. An automatic glioma grading scheme based on cell nuclei morphology in invasive DP images is further developed for cross-validation on the same patients. In summary, the texture-based computational models proposed in this study are expected to facilitate the clinical management of patients with brain tumors and strokes by automating large-scale imaging data analysis, reducing human error and inter-observer variability, and producing repeatable brain tumor quantitation and grading.
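
    As a flavour of descriptors in the same family as the multi-fractal texture features used here, the following sketch estimates the fractal dimension of a binarised image patch by box counting. The actual models rely on richer multi-fractal and 3D texture features; the patch size and threshold below are assumptions for illustration.

    import numpy as np

    def box_counting_dimension(patch, threshold=0.5):
        """Estimate fractal dimension of a square binary patch via box counting."""
        binary = np.asarray(patch) > threshold
        n = binary.shape[0]
        sizes, counts = [], []
        s = n // 2
        while s >= 1:
            # Count boxes of side s containing at least one foreground pixel.
            crop = binary[:n - n % s, :n - n % s]
            boxes = crop.reshape(n // s, s, n // s, s)
            occupied = boxes.any(axis=(1, 3)).sum()
            sizes.append(s)
            counts.append(max(occupied, 1))
            s //= 2
        # Slope of log(count) vs log(1/size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(2)
    patch = rng.random((64, 64))  # toy patch; real inputs are MR image regions
    print("estimated fractal dimension:", round(box_counting_dimension(patch), 2))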

    CerebNet: A fast and reliable deep-learning pipeline for detailed cerebellum sub-segmentation

    Quantifying the volume of the cerebellum and its lobes is of profound interest in various neurodegenerative and acquired diseases. Especially for the most common spinocerebellar ataxias (SCA), for which the first antisense oligonucleotide-based gene silencing trial has recently started, there is an urgent need for quantitative, sensitive imaging markers at pre-symptomatic stages for stratification and treatment assessment. This work introduces CerebNet, a fully automated, extensively validated, deep learning method for the lobular segmentation of the cerebellum, including the separation of gray and white matter. For training, validation, and testing, T1-weighted images from 30 participants were manually annotated into cerebellar lobules and vermal sub-segments, as well as cerebellar white matter. CerebNet combines FastSurferCNN, a UNet-based 2.5D segmentation network, with extensive data augmentation, e.g., realistic non-linear deformations to increase anatomical variety, eliminating additional preprocessing steps such as spatial normalization or bias field correction. CerebNet demonstrates high accuracy (on average 0.87 Dice and 1.742 mm Robust Hausdorff Distance across all structures), outperforming state-of-the-art approaches. Furthermore, it shows high test-retest reliability (average ICC > 0.97 on OASIS and Kirby) as well as high sensitivity to disease effects, including the pre-ataxic stage of spinocerebellar ataxia type 3 (SCA3). CerebNet is compatible with FreeSurfer and FastSurfer and can analyze a 3D volume within seconds on a consumer GPU in an end-to-end fashion, thus providing an efficient and validated solution for assessing cerebellum sub-structure volumes. We make CerebNet available as source code at https://github.com/Deep-MI/FastSurfer.
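
    To illustrate the 2.5D idea behind FastSurferCNN-style networks, the sketch below builds per-slice inputs whose channels are the neighbouring slices, giving a network limited 3D context at 2D cost. The 7-slice thickness and toy volume are assumptions for demonstration, not CerebNet's exact configuration.

    import numpy as np

    def to_2p5d_stacks(volume, thickness=7):
        """volume: (D, H, W) array -> (D, thickness, H, W) slice stacks."""
        half = thickness // 2
        # Pad along the slice axis so edge slices also get full stacks.
        padded = np.pad(volume, ((half, half), (0, 0), (0, 0)), mode="edge")
        stacks = np.stack([
            padded[i:i + thickness] for i in range(volume.shape[0])
        ])
        return stacks  # one multi-channel "image" per original slice

    vol = np.random.default_rng(3).random((32, 64, 64)).astype(np.float32)
    stacks = to_2p5d_stacks(vol)
    print(stacks.shape)  # (32, 7, 64, 64): a 7-channel input per slice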

    Cross-scanner and cross-protocol multi-shell diffusion MRI data harmonization: algorithms and results

    Cross-scanner and cross-protocol variability of diffusion magnetic resonance imaging (dMRI) data are known to be major obstacles in multi-site clinical studies, since they limit the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combining datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for or evaluated on single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotationally invariant spherical harmonics, deep neural networks, and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strengths (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce cross-scanner and cross-protocol variability to a level similar to scan-rescan variability using the same scanner and protocol. In particular, the LinearRISH algorithm, based on adaptive linear mapping of rotationally invariant spherical harmonics features, yields the lowest variability for our data in predicting fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK), and the rotationally invariant spherical harmonic (RISH) features. However, other algorithms, such as DIAMOND, SHResNet, DIQT, and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of the different approaches provides useful guidelines for data harmonization in future multi-site studies.
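
    As a hedged sketch of the RISH idea behind LinearRISH-style harmonization, the code below computes per-order rotationally invariant features (the energy of the order-l spherical harmonic coefficients) and derives per-order scale factors matching source energy to target energy. The coefficient layout, the global (rather than spatially varying) averaging, and the synthetic data are all simplifying assumptions, not the algorithm as published.

    import numpy as np

    def rish_features(sh_coeffs, orders=(0, 2, 4)):
        """sh_coeffs: (n_voxels, n_coeffs) real SH coefficients ordered by l.
        Returns (n_voxels, len(orders)) RISH features: sum_m c_{lm}^2 per order."""
        feats, start = [], 0
        for l in orders:
            n_m = 2 * l + 1                      # number of m values for order l
            block = sh_coeffs[:, start:start + n_m]
            feats.append((block ** 2).sum(axis=1))
            start += n_m
        return np.stack(feats, axis=1)

    def linear_rish_scale(source, target, orders=(0, 2, 4)):
        """Per-order scale factors mapping source SH energy onto target energy."""
        rish_src = rish_features(source, orders).mean(axis=0)
        rish_tgt = rish_features(target, orders).mean(axis=0)
        return np.sqrt(rish_tgt / rish_src)      # applied to each order-l block

    rng = np.random.default_rng(4)
    src = rng.normal(size=(1000, 15))            # orders 0,2,4 -> 1+5+9 coeffs
    tgt = 1.1 * rng.normal(size=(1000, 15))      # toy "target scanner" data
    print("per-order scales:", linear_rish_scale(src, tgt).round(2))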