
    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the neonatal cortex. The performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
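
    As a rough illustration of the kind of EM tissue classification this abstract describes, the sketch below fits a two-class (e.g. grey vs. white matter) Gaussian mixture to voxel intensities and flags ambiguous voxels as likely partial-volume mixtures. It is a minimal, assumed example, not the dissertation's algorithm; the ambiguity threshold is arbitrary.

# Minimal sketch of EM-based two-class intensity classification with a simple
# partial-volume ambiguity flag. Illustrative only; NOT the dissertation's method.
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Fit a 2-component 1D Gaussian mixture (e.g. GM vs WM) by EM."""
    x = np.asarray(intensities, dtype=float)
    mu = np.percentile(x, [25, 75])          # initial class means
    sigma = np.array([x.std(), x.std()])     # initial class standard deviations
    mix = np.array([0.5, 0.5])               # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel
        lik = np.stack([mix[k] / (sigma[k] * np.sqrt(2 * np.pi))
                        * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                        for k in range(2)], axis=1)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update parameters from the responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        mix = nk / len(x)
    return mu, sigma, mix, resp

# Voxels whose two posteriors are nearly equal are likely partial-volume mixtures
# and would need the explicit correction described in the abstract, e.g.:
# mu, sigma, mix, resp = em_two_class(brain_voxels)
# ambiguous = np.abs(resp[:, 0] - resp[:, 1]) < 0.2   # assumed threshold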

    Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using FreeSurfer. Subsequently, FreeSurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation, measured by the Dice coefficient, improved significantly from 0.956 (FreeSurfer segmentation) to 0.978 (SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤ 0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust to differences between the training set and the test set in head coil usage and amount of brain atrophy, which reduced spatial overlap by only < 0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
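
    The Dice coefficient used throughout this abstract is a simple overlap measure; a minimal sketch of how it can be computed between an automated and a manual mask is shown below. This is an illustrative helper, not SegAdapter or FreeSurfer code, and the variable names are assumptions.

# Minimal sketch: Dice overlap between an automated and a manual segmentation,
# the metric used above to quantify the corrective-learning improvement.
import numpy as np

def dice_coefficient(seg_a, seg_b, label=1):
    """Dice = 2|A ∩ B| / (|A| + |B|) for voxels equal to the given label."""
    a = (np.asarray(seg_a) == label)
    b = (np.asarray(seg_b) == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Example (hypothetical arrays): compare an automated cerebellum mask against
# the manually corrected gold standard.
# dsc = dice_coefficient(auto_cerebellum_mask, manual_cerebellum_mask, label=1)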

    Comparison of manual and semi-automated delineation of regions of interest for radioligand PET imaging analysis

    BACKGROUND: As imaging centers produce higher resolution research scans, the number of man-hours required to process regional data has become a major concern. Comparison of automated vs. manual methodology has not been reported for functional imaging. We explored validation of using automation to delineate regions of interest on positron emission tomography (PET) scans. The purpose of this study was to ascertain improvements in image processing time and reproducibility of a semi-automated brain region extraction (SABRE) method over manual delineation of regions of interest (ROIs).
    METHODS: We compared 2 sets of partial volume corrected serotonin 1a receptor binding potentials (BPs) resulting from manual vs. semi-automated methods. BPs were obtained from subjects meeting consensus criteria for frontotemporal degeneration and from age- and gender-matched healthy controls. Two trained raters provided each set of data to conduct comparisons of inter-rater mean image processing time, rank order of BPs for 9 PET scans, intra- and inter-rater intraclass correlation coefficients (ICC), repeatability coefficients (RC), percentages of the average parameter value (RM%), and effect sizes of either method.
    RESULTS: SABRE saved approximately 3 hours of processing time per PET subject over manual delineation (p 0.8) for both methods. RC and RM% were lower for the manual method across all ROIs, indicating less intra-rater variance across PET subjects' BPs.
    CONCLUSION: SABRE demonstrated significant time savings and no significant difference in reproducibility over manual methods, justifying the use of SABRE in serotonin 1a receptor radioligand PET imaging analysis. This implies that semi-automated ROI delineation is a valid methodology for future PET imaging analysis.
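
    For readers unfamiliar with the repeatability statistics cited here, the sketch below computes a Bland-Altman style repeatability coefficient (RC) and its percentage of the average parameter value (RM%) from two raters' paired binding potentials. It assumes one common formulation (RC = 1.96 × SD of paired differences); the study's exact definitions may differ, and this is not the study's analysis code.

# Minimal, assumed sketch of RC and RM% from paired measurements.
import numpy as np

def repeatability(measure_1, measure_2):
    """Return (RC, RM%) for two paired sets of measurements of the same scans."""
    m1 = np.asarray(measure_1, dtype=float)
    m2 = np.asarray(measure_2, dtype=float)
    diffs = m1 - m2
    rc = 1.96 * diffs.std(ddof=1)                    # repeatability coefficient
    rm_pct = 100.0 * rc / np.mean((m1 + m2) / 2.0)   # RC as % of the mean value
    return rc, rm_pct

# Hypothetical usage with two raters' BPs for the same 9 PET scans:
# rc, rm_pct = repeatability(bp_rater_a, bp_rater_b)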

    Longitudinal measurement of the developing grey matter in preterm subjects using multi-modal MRI.

    Preterm birth is a major public health concern, with the severity and occurrence of adverse outcomes increasing with earlier delivery. Being born preterm disrupts a time of rapid brain development: in addition to volumetric growth, the cortex folds, myelination occurs, and there are changes at the cellular level. These neurological events have been imaged non-invasively using diffusion-weighted (DW) MRI. In this population, there has been a focus on examining diffusion in the white matter, but the grey matter is also critically important for neurological health. We acquired multi-shell high-resolution diffusion data from 12 infants born at ≤ 28 weeks of gestational age at two time points: once when stable after birth, and again at term-equivalent age. We used the Neurite Orientation Dispersion and Density Imaging (NODDI) model (Zhang et al., 2012) to analyse the changes in the cerebral cortex and the thalamus, both grey matter regions. We showed region-dependent changes in NODDI parameters over the preterm period, highlighting underlying changes specific to the microstructure. This work is the first time that NODDI parameters have been evaluated in both the cortical and the thalamic grey matter as a function of age in preterm infants, offering a unique insight into neurodevelopment in this at-risk population.
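
    One of the NODDI parameters referred to here is the orientation dispersion index (ODI), which is derived from the Watson concentration parameter of the fitted neurite compartment (Zhang et al., 2012). The sketch below shows that single conversion only; full NODDI fitting requires a dedicated toolbox and multi-shell data, and the example kappa values are assumptions for illustration.

# Minimal sketch of the ODI = (2/pi) * arctan(1/kappa) relation from NODDI.
import numpy as np

def orientation_dispersion_index(kappa):
    """Higher ODI means more dispersed neurite orientations (e.g. cortex)."""
    kappa = np.asarray(kappa, dtype=float)
    return (2.0 / np.pi) * np.arctan2(1.0, kappa)

# Example: a highly dispersed cortical voxel (kappa = 0.5) gives ODI ≈ 0.70,
# while a coherent white-matter voxel (kappa = 16) gives ODI ≈ 0.04.
print(orientation_dispersion_index([0.5, 16.0]))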

    Deep learning-based fully automatic segmentation of wrist cartilage in MR images

    The study objective was to investigate the performance of a dedicated convolutional neural network (CNN) optimized for wrist cartilage segmentation from 2D MR images. The CNN used a planar architecture and a patch-based (PB) training approach to ensure optimal performance in the presence of a limited amount of training data. The CNN was trained and validated on twenty multi-slice MRI datasets acquired with two different coils in eleven subjects (healthy volunteers and patients). The validation included a comparison with alternative state-of-the-art CNN methods for the segmentation of joints from MR images and with ground-truth manual segmentation. When trained on the limited training data, the CNN significantly outperformed image-based and patch-based U-Net networks. Our PB-CNN also demonstrated good agreement with manual segmentation (Sorensen-Dice similarity coefficient, DSC = 0.81) in the representative (central coronal) slices containing a large amount of cartilage tissue. The reduced performance of the network on slices with a very limited amount of cartilage tissue suggests the need for fully 3D convolutional networks to provide uniform performance across the joint. The study also assessed inter- and intra-observer variability of manual wrist cartilage segmentation (DSC = 0.78-0.88 and 0.9, respectively). The proposed deep learning-based segmentation of wrist cartilage from MRI could facilitate research into novel imaging markers of wrist osteoarthritis to characterize its progression and response to therapy.
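
    To make the planar, patch-based training idea concrete, the sketch below defines a small 2D CNN in PyTorch that predicts a per-pixel label map for an MR patch and shows one training step. The architecture, patch size, and tensor shapes are assumptions for illustration, not the paper's network.

# Minimal sketch of a planar (2D) patch-based segmentation CNN; illustrative only.
import torch
import torch.nn as nn

class PlanarPatchCNN(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution gives per-pixel class scores over the patch.
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One assumed training step on 32x32 patches sampled around cartilage:
model = PlanarPatchCNN()
loss_fn = nn.CrossEntropyLoss()
patches = torch.randn(8, 1, 32, 32)          # stand-in for MR image patches
labels = torch.randint(0, 2, (8, 32, 32))    # stand-in for cartilage masks
loss = loss_fn(model(patches), labels)
loss.backward()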

    Validation of vessel size imaging (VSI) in high-grade human gliomas using magnetic resonance imaging, image-guided biopsies, and quantitative immunohistochemistry.

    To evaluate the association between a vessel size index (VSI_MRI) derived from dynamic susceptibility contrast (DSC) perfusion imaging using a custom spin-and-gradient echo echoplanar imaging (SAGE-EPI) sequence and quantitative estimates of vessel morphometry based on immunohistochemistry from image-guided biopsy samples. The current study evaluated both relative cerebral blood volume (rCBV) and VSI_MRI in eleven patients with high-grade glioma (7 WHO grade III and 4 WHO grade IV). Following 26 MRI-guided glioma biopsies in these 11 patients, we evaluated tissue morphometry, including vessel density and average radius, using an automated procedure based on the endothelial cell marker CD31 to highlight tumor vasculature. Measures of rCBV and VSI_MRI were then compared to histological measures. We demonstrate good agreement between VSI measured by MRI and histology: VSI_MRI = 13.67 μm and VSI_Histology = 12.60 μm, with slight overestimation of VSI_MRI in grade III patients compared to histology. rCBV showed a moderate but significant correlation with vessel density (r = 0.42, p = 0.03), and a correlation was also observed between VSI_MRI and VSI_Histology (r = 0.49, p = 0.01). The current study supports the hypothesis that vessel size measures using MRI accurately reflect vessel caliber within high-grade gliomas, while traditional measures of rCBV are correlated with vessel density and not vessel caliber.
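
    A commonly used vessel size index formulation (following Troprès and colleagues) combines the gradient-echo and spin-echo relaxation rate changes with the water diffusion coefficient, as sketched below. The constants and input values are generic placeholders and this is not the study's SAGE-EPI processing pipeline.

# Minimal sketch of a conventional VSI calculation from DSC relaxation-rate changes.
import numpy as np

GAMMA = 2.675e8        # proton gyromagnetic ratio, rad/s/T
DELTA_CHI = 1.0e-7     # assumed susceptibility difference, contrast agent vs. tissue
B0 = 3.0               # assumed field strength, tesla
ADC = 8.0e-10          # assumed apparent diffusion coefficient, m^2/s

def vessel_size_index(delta_r2_star, delta_r2):
    """VSI ≈ 0.424 * sqrt(ADC / (gamma * dchi * B0)) * (dR2*/dR2)^(3/2), in metres."""
    return (0.424 * np.sqrt(ADC / (GAMMA * DELTA_CHI * B0))
            * (delta_r2_star / delta_r2) ** 1.5)

# Hypothetical relaxation-rate changes give a VSI on the order of tens of microns:
# print(vessel_size_index(delta_r2_star=40.0, delta_r2=8.0) * 1e6, "µm")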

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol, and CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI acquisition parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (approximately 45 seconds), and consistency across a wide range of acquisition protocols.
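
    To illustrate the kind of pulse-sequence forward model this approach relies on, the sketch below synthesizes an image from tissue parameter maps using the idealized spoiled gradient-echo signal equation; sampling TR, TE, and flip angle then yields synthetic training images with varied contrast from the same labelled anatomy. The equation and parameter values are generic textbook assumptions, not the paper's approximate forward models.

# Minimal, assumed sketch of a spoiled gradient-echo forward model for augmentation.
import numpy as np

def spoiled_gre_signal(pd, t1, t2_star, tr=0.02, te=0.005, flip_deg=15.0):
    """S = PD * sin(a) * (1 - exp(-TR/T1)) / (1 - cos(a)*exp(-TR/T1)) * exp(-TE/T2*)."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / np.asarray(t1, dtype=float))
    return (np.asarray(pd, dtype=float) * np.sin(a) * (1 - e1)
            / (1 - np.cos(a) * e1) * np.exp(-te / np.asarray(t2_star, dtype=float)))

# Hypothetical usage: pd_map, t1_map, t2s_map are voxelwise tissue parameter maps
# derived from labelled anatomy; varying tr/te/flip_deg changes the image contrast.
# synthetic = spoiled_gre_signal(pd_map, t1_map, t2s_map, tr=0.03, te=0.004, flip_deg=12)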