
    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
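    The EM-based tissue classification described above can be illustrated, in simplified form, with a standard Gaussian-mixture EM over voxel intensities. The sketch below (Python, NumPy only) fits per-class means and variances and returns per-voxel posteriors; the dissertation's explicit partial-volume correction is not modelled here, and all names are illustrative.

    # Minimal sketch of intensity-based EM tissue classification (plain 1D Gaussian
    # mixture). This is a simplification for illustration, not the dissertation's
    # actual segmentation method.
    import numpy as np

    def em_segment(intensities, n_classes=3, n_iter=50):
        """Fit a 1D Gaussian mixture to voxel intensities; return per-voxel posteriors."""
        x = intensities.ravel().astype(float)
        # Initialise means spread over the intensity range, equal variances and weights.
        mu = np.linspace(x.min(), x.max(), n_classes)
        var = np.full(n_classes, x.var() / n_classes + 1e-12)
        pi = np.full(n_classes, 1.0 / n_classes)

        for _ in range(n_iter):
            # E-step: posterior probability of each class for every voxel.
            resp = np.stack([
                pi[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) / np.sqrt(2 * np.pi * var[k])
                for k in range(n_classes)
            ], axis=1)
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12

            # M-step: update weights, means and variances from the posteriors.
            nk = resp.sum(axis=0) + 1e-12
            pi = nk / x.size
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12

        return resp.reshape(intensities.shape + (n_classes,))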

    Incorporating Relaxivities to More Accurately Reconstruct MR Images

    Purpose: To develop a mathematical model that incorporates the magnetic resonance relaxivities into the image reconstruction process in a single step.
    Materials and methods: In magnetic resonance imaging, the complex-valued measurements of the acquired signal at each point in frequency space are expressed as a Fourier transformation of the proton spin density weighted by Fourier encoding anomalies: T2⁎, T1, and a phase determined by magnetic field inhomogeneity (∆B) according to the MR signal equation. Such anomalies alter the expected symmetry and the signal strength of the k-space observations, resulting in images distorted by image warping, blurring, and loss in image intensity. Although the T1 tissue relaxation time provides valuable quantitative information on tissue characteristics, the T1 recovery term is typically neglected by assuming a long repetition time. In this study, the linear framework presented in the work of Rowe et al., 2007, and of Nencka et al., 2009, is extended to develop a Fourier reconstruction operation in terms of a real-valued isomorphism that incorporates the effects of T2⁎, ∆B, and T1. This framework provides a way to precisely quantify the statistical properties of the corrected image-space data by offering a linear relationship between the observed frequency-space measurements and the reconstructed, corrected image-space measurements. The model is illustrated both on theoretical data generated by considering T2⁎, T1, and/or ∆B effects, and on experimentally acquired fMRI data by focusing on the incorporation of T1. A comparison is also made between the activation statistics computed from the reconstructed data with and without the incorporation of T1 effects.
    Results: Accounting for T1 effects in image reconstruction is shown to recover image contrast that exists prior to T1 equilibrium. The incorporation of T1 is also shown to induce negligible correlation in reconstructed images and to preserve functional activations.
    Conclusion: With the use of the proposed method, the effects of T2⁎ and ∆B can be corrected, and T1 can be incorporated into the time-series image-space data during image reconstruction in a single step. Incorporation of T1 provides improved tissue segmentation over the course of the time series and therefore can improve the precision of motion correction and image registration.
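    A schematic form of the MR signal equation referred to above, with T1 recovery, T2⁎ decay and field-inhomogeneity phase weighting the proton spin density ρ (sign and weighting conventions vary with the pulse sequence, so this is an assumed generic form rather than the paper's exact model):

    s(\mathbf{k}_j) \;=\; \int \rho(\mathbf{r})\,\bigl(1 - e^{-T_R/T_1(\mathbf{r})}\bigr)\, e^{-t_j/T_2^{*}(\mathbf{r})}\, e^{\,i\gamma\,\Delta B(\mathbf{r})\,t_j}\, e^{-i 2\pi\,\mathbf{k}_j\cdot\mathbf{r}}\, d\mathbf{r}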

    Symmetric diffeomorphic modeling of longitudinal structural MRI

    This technology report describes the longitudinal registration approach that we intend to incorporate into SPM12. The report presents a group-wise intra-subject modeling framework that combines diffeomorphic and rigid-body registration, incorporating a correction for the intensity inhomogeneity artifact usually seen in MRI data. Emphasis is placed on achieving internal consistency and accounting for many of the mathematical subtleties that most implementations overlook. The implementation was evaluated using examples from the OASIS Longitudinal MRI Data in Non-demented and Demented Older Adults.
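    As a rough illustration of the kind of group-wise objective such a framework optimises (an assumed generic form, not SPM12's exact energy), each time point I_n is brought into correspondence with a subject-specific average \mu through a deformation \varphi_n (rigid plus diffeomorphic) and a smooth multiplicative bias field b_n:

    E \;=\; \sum_{n} \Bigl[ \tfrac{1}{2\sigma^{2}} \bigl\| b_n \cdot (I_n \circ \varphi_n) - \mu \bigr\|^{2} \;+\; \mathcal{R}(\varphi_n) \;+\; \mathcal{S}(b_n) \Bigr]

    where \mathcal{R} regularises the deformations and \mathcal{S} penalises rough bias fields.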

    Assessment and optimisation of MRI measures of atrophy as potential markers of disease progression in multiple sclerosis

    There is a need for sensitive measures of disease progression in multiple sclerosis (MS) to monitor treatment effects and understand disease evolution. MRI measures of brain atrophy have been proposed for this purpose. This thesis investigates a number of measurement techniques to assess their relative ability to monitor disease progression in clinically isolated syndromes (CIS) and early relapsing-remitting MS (RRMS). Work is presented demonstrating that measurement techniques and MR acquisitions can be optimised to give small but significant improvements in measurement sensitivity and precision, providing greater statistical power. Direct comparison of numerous techniques demonstrated significant differences between them. Atrophy measurements from SIENA and the BBSI (registration-based techniques) were significantly more precise than segmentation and subtraction of brain volumes, although larger percentage losses were observed in grey matter fraction. Ventricular enlargement (VE) gave similar statistical power, and these techniques were robust and reliable; scan-rescan measurement error was <0.01% of brain volume for BBSI and SIENA and <0.04 ml for VE. Annual atrophy rates (using SIENA) were -0.78% in RRMS and -0.52% in CIS patients who progressed to MS, both significantly greater than the rate observed in controls (-0.07%). Sample size calculations for future trials of disease-modifying treatments in RRMS, using brain atrophy as an outcome measure, are described. For SIENA, the BBSI and VE, an estimated 123, 157 and 140 patients per treatment arm respectively would be required to show a 30% slowing of atrophy rate over two years. In CIS subjects, brain atrophy rate was a significant prognostic factor, independent of T2 MRI lesions at baseline, for development of MS by five-year follow-up. It was also the most significant MR predictor of disability in RRMS subjects. Cognitive assessment of RRMS patients at five-year follow-up is described; brain atrophy rate was a significant predictor of overall cognitive performance and, more specifically, of performance in tests of memory. The work in this thesis has identified methods for sensitively measuring progressive brain atrophy in MS. It has shown that brain atrophy changes in early MS are related to early clinical evolution, providing complementary information to clinical assessment that could be utilised to monitor disease progression.
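    Sample-size figures of the kind quoted above follow from a standard two-arm power calculation on atrophy rates. A minimal sketch (Python/SciPy) of that calculation is given below; the mean rate and between-subject standard deviation used here are illustrative placeholders, not the thesis's cohort estimates.

    # Back-of-envelope sample size: patients per arm needed to detect a given slowing
    # of annual brain atrophy with a two-sample comparison of means. Placeholder inputs.
    from scipy.stats import norm

    def n_per_arm(mean_rate, sd_rate, slowing=0.30, alpha=0.05, power=0.80):
        delta = slowing * abs(mean_rate)              # expected treatment effect on the rate
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power) # standard normal quantiles
        return 2 * (z * sd_rate / delta) ** 2         # per-arm sample size

    # Example with an assumed SD of 0.65 %/year (hypothetical value):
    print(round(n_per_arm(mean_rate=-0.78, sd_rate=0.65)))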

    Generalised boundary shift integral for longitudinal assessment of spinal cord atrophy

    Spinal cord atrophy measurements obtained from structural magnetic resonance imaging (MRI) are associated with disability in many neurological diseases and serve as in vivo biomarkers of neurodegeneration. Longitudinal spinal cord atrophy rate is commonly determined from the numerical difference between two volumes (based on 3D surface fitting) or two cross-sectional areas (CSA, based on 2D edge detection) obtained at different time-points. Being an indirect measure, such atrophy rates are susceptible to variable segmentation errors at the edge of the spinal cord. To overcome these limitations, we developed a new registration-based pipeline that measures atrophy rates directly. We based our approach on the generalised boundary shift integral (GBSI) method, which registers two scans and uses a probabilistic XOR mask over the edge of the spinal cord, thereby measuring atrophy more accurately than segmentation-based techniques. Using a large cohort of longitudinal spinal cord images (610 subjects with multiple sclerosis from a multi-centre trial and 52 healthy controls), we demonstrated that GBSI is a sensitive, quantitative and objective measure of longitudinal spinal cord volume change. The GBSI pipeline is repeatable, reproducible, and provides more precise measurements of longitudinal spinal cord atrophy than segmentation-based methods.
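    For reference, a schematic form of a boundary-shift-integral measure of the kind generalised here (conventions for the intensity window and normalisation vary between implementations, so this is an assumed generic form): with co-registered baseline and follow-up images I_base and I_reg, voxel volume v, intensity window [I_min, I_max] and a probabilistic XOR weight w(\mathbf{x}) over the cord edge,

    \mathrm{gBSI} \;\approx\; \frac{v}{I_{\max}-I_{\min}} \sum_{\mathbf{x}} w(\mathbf{x}) \,\bigl[\operatorname{clip}(I_{\mathrm{base}}(\mathbf{x})) - \operatorname{clip}(I_{\mathrm{reg}}(\mathbf{x}))\bigr],
    \qquad \operatorname{clip}(I) = \min\bigl(\max(I, I_{\min}),\, I_{\max}\bigr)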

    Measuring brain atrophy with a generalized formulation of the boundary shift integral

    Brain atrophy measured using structural magnetic resonance imaging (MRI) has been widely used as an imaging biomarker for disease diagnosis and tracking of pathologic progression in neurodegenerative diseases. In this work, we present a generalized and extended formulation of the boundary shift integral (gBSI) using probabilistic segmentations to estimate anatomic changes between two time points. This method adaptively estimates a non-binary exclusive OR region of interest from probabilistic brain segmentations of the baseline and repeat scans to better localize and capture the brain atrophy. We evaluate the proposed method by comparing the sample size requirements for a hypothetical clinical trial of Alzheimer's disease to those needed for the current implementation of BSI as well as a fuzzy implementation of BSI. The gBSI method results in a modest reduction in the required sample size, providing increased sensitivity to disease changes through the use of the probabilistic exclusive OR region.
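    A minimal sketch of how a non-binary exclusive OR region and a BSI-style weighted intensity difference might be computed from two probabilistic segmentations is given below (Python/NumPy). It assumes co-registered, intensity-normalised scans; the adaptive region estimation used in the paper is not reproduced here, and all names and the intensity window are illustrative.

    # Probabilistic ("soft") XOR region and BSI-style weighted intensity difference.
    import numpy as np

    def soft_xor(p_base, p_repeat):
        """Non-binary XOR of two brain-probability maps (large where they disagree)."""
        return p_base + p_repeat - 2.0 * p_base * p_repeat

    def gbsi_volume(i_base, i_repeat, p_base, p_repeat, i_low, i_high, voxel_ml):
        """BSI-style volume change in ml; positive values indicate loss on T1-weighted
        images where brain is brighter than CSF."""
        w = soft_xor(p_base, p_repeat)
        clip = lambda img: np.clip(img, i_low, i_high)
        diff = (clip(i_base) - clip(i_repeat)) / (i_high - i_low)
        return voxel_ml * np.sum(w * diff)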

    Inhomogeneity Correction in High Field Magnetic Resonance Images

    Project carried out in collaboration with the Swiss Federal Institute of Technology (EPFL). Magnetic Resonance Imaging (MRI) is one of the most powerful and non-invasive ways to study human inner tissues. It provides accurate insight into the physiological condition of the human body, and especially the brain. Following this aim, in the last decade MRI has moved to ever higher magnetic field strengths, which allow us to take advantage of a better signal-to-noise ratio (SNR). This improvement of the SNR, which increases almost linearly with the field strength, has several advantages: higher spatial resolution and/or faster imaging, greater spectral dispersion, as well as enhanced sensitivity to magnetic susceptibility. However, at high field the interactions between the RF pulse and high-permittivity samples, which cause the so-called intensity inhomogeneity or B1 inhomogeneity, can no longer be neglected. This inhomogeneity causes undesired effects that affect quantitative image analysis and prevent the application of classical intensity-based segmentation and other processing methods. In this Master's thesis, a new method for intensity inhomogeneity correction at high field is presented. At high field it is not possible to estimate and correct the inhomogeneity directly from the corrupted data, so this method performs the correction by acquiring extra information during the imaging process: the RF (B1) map. The method estimates the inhomogeneity by comparing the two acquisitions. The results are compared to other methods, PABIC and low-pass filtering, which attempt to correct the inhomogeneity directly from the corrupted data.
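    In its simplest form, an RF-map-based correction of this kind divides the image by a smoothed, normalised version of the acquired B1 map. The sketch below (Python/SciPy) shows only that baseline step; the estimation actually proposed in the thesis, which compares the image against the RF map, is more involved, and all parameter values here are illustrative.

    # Baseline RF(B1)-map-based intensity correction, assuming a co-registered relative
    # B1 map acquired alongside the image. Not the thesis's actual estimation scheme.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_with_b1_map(image, b1_map, sigma=8.0, eps=1e-6):
        """Divide the image by a smoothed, mean-normalised B1 map."""
        bias = gaussian_filter(b1_map.astype(float), sigma=sigma)  # smooth the raw map
        bias /= bias[bias > eps].mean()                            # normalise mean gain to 1
        return image / np.maximum(bias, eps)                       # avoid dividing by ~zero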