
    Data augmentation in Rician noise model and Bayesian Diffusion Tensor Imaging

    Full text link
    Mapping white matter tracts is an essential step towards understanding brain function. Diffusion Magnetic Resonance Imaging (dMRI) is the only noninvasive technique which can detect in vivo anisotropies in the 3-dimensional diffusion of water molecules, which correspond to nervous fibers in the living brain. In this process, spectral data from the displacement distribution of water molecules are collected by a magnetic resonance scanner. From the statistical point of view, inverting the Fourier transform from such sparse and noisy spectral measurements leads to a non-linear regression problem. Diffusion tensor imaging (DTI) is the simplest modeling approach, postulating a Gaussian displacement distribution at each volume element (voxel). Typically the inference is based on a linearized log-normal regression model that can fit the spectral data at low frequencies. However, such an approximation fails to fit the high-frequency measurements, which contain information about the details of the displacement distribution but have a low signal-to-noise ratio. In this paper, we work directly with the Rice noise model and cover the full range of b-values. Using data augmentation to represent the likelihood, we reduce the non-linear regression problem to the framework of generalized linear models. We then construct a Bayesian hierarchical model in order to perform estimation and regularization of the tensor field simultaneously. Finally, the Bayesian paradigm is implemented using Markov chain Monte Carlo. Comment: 37 pages, 3 figures
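As a point of reference for the noise model discussed above, a magnitude MR voxel value under Rician noise is the modulus of the true signal A perturbed by independent Gaussian noise in the real and imaginary channels. The following is a minimal synthetic sketch of that sampling model (not the paper's data-augmentation scheme); all names are illustrative:

```python
import numpy as np

def rician_sample(a, sigma, size, rng):
    """Magnitude signal M = sqrt((A + n1)^2 + n2^2), with n1, n2 ~ N(0, sigma^2)."""
    n1 = rng.normal(0.0, sigma, size)
    n2 = rng.normal(0.0, sigma, size)
    return np.hypot(a + n1, n2)

rng = np.random.default_rng(0)
samples = rician_sample(a=0.0, sigma=1.0, size=100_000, rng=rng)
# With A = 0 the Rice distribution reduces to Rayleigh(sigma),
# whose mean is sigma * sqrt(pi / 2) ~= 1.2533
```

This degenerate A = 0 case is exactly why low signal-to-noise measurements are biased away from zero, which motivates working with the Rice likelihood rather than a Gaussian approximation.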

    Spatial based Expectation Maximizing (EM)

    Get PDF
    Background: Expectation maximizing (EM) is one of the common approaches for image segmentation. Methods: An improvement of the EM algorithm is proposed, and its effectiveness for MRI brain image segmentation is investigated. To improve EM performance, the proposed algorithm incorporates neighbourhood information into the clustering process. First, an average image is obtained as neighbourhood information, and then it is incorporated into the clustering process. Optionally, user interaction is used to improve segmentation results. Simulated and real MR volumes are used to compare the efficiency of the proposed improvement with the existing neighbourhood-based extensions of EM and FCM. Results: The findings show that the proposed algorithm produces a higher similarity index. Conclusions: Experiments demonstrate the effectiveness of the proposed algorithm compared with other existing algorithms at various noise levels.
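The core idea of folding neighbourhood information into EM clustering can be illustrated by averaging each voxel with its 3x3 neighbourhood before running a plain two-class Gaussian-mixture EM. This is a hedged toy sketch of that idea, not the paper's algorithm (which also supports user interaction and FCM comparisons):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def em_two_class(x, iters=50):
    """Plain two-component 1-D Gaussian-mixture EM on flattened intensities."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * d ** 2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        n = r.sum(axis=0)
        pi = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return mu

# Toy "image": two tissue classes plus Gaussian noise
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += rng.normal(0, 0.3, img.shape)

# Neighbourhood information: replace each voxel by its 3x3 local average
avg = uniform_filter(img, size=3)
mu = em_two_class(avg.ravel())
```

The local averaging shrinks the within-class noise variance before clustering, which is the mechanism by which neighbourhood information stabilises the EM labels.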

    Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images

    Get PDF
    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed-thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges
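The two fixed-thresholding families evaluated above are simple enough to sketch. The intensity levels and region labels below are synthetic stand-ins, not values from the benchmark data:

```python
import numpy as np

def nsd_threshold(intensities, remote, n=5):
    """n-SD rule: infarct = voxels brighter than mean + n*SD of remote myocardium."""
    return intensities > remote.mean() + n * remote.std()

def fwhm_threshold(intensities):
    """FWHM rule: infarct = voxels above half of the maximum enhanced intensity."""
    return intensities > 0.5 * intensities.max()

rng = np.random.default_rng(2)
remote = rng.normal(100, 10, 500)    # synthetic healthy (remote) myocardium
scar = rng.normal(400, 20, 100)      # synthetic hyperenhanced infarct
myocardium = np.concatenate([remote, scar])
nsd_mask = nsd_threshold(myocardium, remote)
fwhm_mask = fwhm_threshold(myocardium)
```

One pitfall is visible even in this toy setup: the n-SD rule depends on a manually chosen remote region and on n, while FWHM depends only on the maximum enhanced intensity, which partly explains its more robust behaviour in the benchmark.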

    BAYESIAN ENSEMBLE LEARNING FOR MEDICAL IMAGE DENOISING

    Get PDF
    Medical images are often affected by random noise introduced both during image acquisition by the medical modalities and during image transmission from the modalities to the workstation. Medical image denoising removes noise from CT or MR images, and it is an essential step that makes diagnosis more efficient. Many denoising algorithms have been introduced, such as Non-local Means, Fields of Experts, and BM3D. In this thesis, we implement Bayesian ensemble learning for medical image denoising as well as natural image denoising. The Bayesian ensemble models are Non-local Means and Fields of Experts, two very successful recent algorithms. Non-local Means presumes that the image contains an extensive amount of self-similarity. The Fields of Experts model extends the traditional Markov Random Field model by learning potential functions over extended pixel neighborhoods. The two models are implemented, and image denoising is performed on both natural images and MR images. For MR images, we used two noise distributions, Gaussian and Rician. The experimental results obtained are compared with those of the single algorithms, and the ensemble learning approaches are discussed.
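The self-similarity assumption behind Non-local Means can be sketched in a few lines: each pixel becomes a weighted average of pixels whose surrounding patches look alike. This is a naive, unoptimised toy version for illustration, not the implementation used in the thesis:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=1.0):
    """Naive non-local means: weight each neighbour by patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    s = search // 2
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            p = padded[i:i + patch, j:j + patch]   # patch centred at (i, j)
            wsum = vsum = 0.0
            for di in range(max(0, i - s), min(H, i + s + 1)):
                for dj in range(max(0, j - s), min(W, j + s + 1)):
                    q = padded[di:di + patch, dj:dj + patch]
                    w = np.exp(-((p - q) ** 2).sum() / h ** 2)
                    wsum += w
                    vsum += w * img[di, dj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(3)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = nlm_denoise(noisy)
```

Because weights fall off with patch dissimilarity, pixels across an edge contribute almost nothing, so edges survive the averaging that suppresses the noise.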

    Noise Estimation in Magnitude MR Datasets

    Get PDF
    Estimating the noise parameter in magnitude magnetic resonance (MR) images is important in a wide range of applications. We propose an automatic noise estimation method that does not rely on a substantial proportion of voxels being from the background. Specifically, we model the magnitude of the observed signal as a mixture of Rice distributions with a common noise parameter. The Expectation-Maximization (EM) algorithm is used to estimate the parameters, including the common noise parameter. The algorithm needs initializing values, for which we provide some strategies that work well. The number of components in the mixture model also needs to be estimated en route to noise estimation, and we provide a novel approach to doing so. Our methodology performs very well on a range of simulation experiments and physical phantom data. Finally, the methodology is demonstrated on four clinical datasets.
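For contrast with the mixture approach above, the classical estimator that does depend on background voxels is a one-liner: where the true signal is zero, the Rice law reduces to a Rayleigh distribution with E[M^2] = 2*sigma^2. A toy check on synthetic noise (illustrative only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 5.0
# Magnitude of pure complex Gaussian noise (true signal A = 0)
background = np.hypot(rng.normal(0, sigma, 10_000),
                      rng.normal(0, sigma, 10_000))
# Rayleigh moment identity: E[M^2] = 2 * sigma^2
sigma_hat = np.sqrt((background ** 2).mean() / 2)
```

The limitation is clear: when a scan has few or no background voxels, this estimator has nothing to work with, which is precisely the gap the Rice-mixture EM approach addresses.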

    Fast and Robust Automatic Segmentation Methods for MR Images of Injured and Cancerous Tissues

    Get PDF
    Magnetic Resonance Imaging (MRI) is a key medical imaging technology. Through in vivo soft tissue imaging, MRI allows clinicians and researchers to make diagnoses and evaluations that were previously possible only through biopsy or autopsy. However, analysis of MR images by domain experts can be time-consuming, complex, and subject to bias. The development of automatic segmentation techniques that make use of robust statistical methods allows for fast and unbiased analysis of MR images. In this dissertation, I propose segmentation methods that fall into two classes: (a) segmentation via optimization of a parametric boundary, and (b) segmentation via multistep, spatially constrained intensity classification. These two approaches are applicable in different segmentation scenarios. Parametric boundary segmentation is useful and necessary for segmentation of noisy images where the tissue of interest has predictable shape but poor boundary delineation, as in the case of lung with heavy or diffuse tumor. Spatially constrained intensity classification is appropriate for segmentation of noisy images with moderate contrast between tissue regions, where the areas of interest have unpredictable shapes, as is the case in spinal injury and brain tumor. The proposed automated segmentation techniques address the need for MR image analysis in three specific applications: (1) preclinical rodent studies of primary and metastatic lung cancer (approach (a)), (2) preclinical rodent studies of spinal cord lesion (approach (b)), and (3) postclinical analysis of human brain cancer (approach (b)). In preclinical rodent studies of primary and metastatic lung cancer, respiratory-gated MRI is used to quantitatively measure lung-tumor burden and monitor the time-course progression of individual tumors. I validate a method for measuring tumor burden based upon average lung-image intensity.
The method requires accurate lung segmentation; toward this end, I propose an automated lung segmentation method that works for varying tumor burden levels. The method includes development of a novel, two-dimensional parametric model of the mouse lungs and a multifaceted cost function to optimally fit the model parameters to each image. Results demonstrate a strong correlation (0.93), comparable with that of fully manual expert segmentation, between the automated method's tumor-burden metric and the tumor burden measured by lung weight. In preclinical rodent studies of spinal cord lesion, MRI is used to quantify tissues in control and injured mouse spinal cords. For this application, I propose a novel, multistep, multidimensional approach, utilizing the Classification Expectation Maximization (CEM) algorithm, for automatic segmentation of spinal cord tissues. In contrast to previous methods, my proposed method incorporates prior knowledge of cord geometry and the distinct information contained in the different MR images gathered. Unlike previous approaches, the algorithm is shown to remain accurate for whole spinal cord, white matter, and hemorrhage segmentation, even in the presence of significant injury. The results of the method are shown to be on par with expert manual segmentation. In postclinical analysis of human brain cancer, access to large collections of MRI data enables scientifically rigorous study of cancers like glioblastoma multiforme, the most common form of malignant primary brain tumor. For this application, I propose an efficient and effective automated segmentation method, the Enhanced Classification Expectation Maximization (ECEM) algorithm. The ECEM algorithm is novel in that it introduces spatial information directly into the classical CEM algorithm, which is otherwise spatially unaware, with low additional computational complexity.
I compare the ECEM's performance on simulated data to the standard finite Gaussian mixture EM algorithm, which is not spatially aware, and to the hidden-Markov random field EM algorithm, a commonly-used spatially aware automated segmentation method for MR brain images. I also show sample results demonstrating the ECEM algorithm's ability to segment MR images of glioblastoma.
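The CEM idea, inserting a hard classification step between the E- and M-steps, can be sketched for 1-D intensities. This toy version has none of the spatial priors, multidimensional features, or ECEM spatial extension described above:

```python
import numpy as np

def cem(x, k=2, iters=30):
    """Classification EM: like EM, but a C-step hard-assigns each sample to
    its most probable class before the parameters are re-estimated."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step + C-step: log-posterior, then hard classification
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(var) + np.log(pi))
        z = log_p.argmax(axis=1)
        # M-step on the hard partition
        for j in range(k):
            xj = x[z == j]
            if xj.size:
                pi[j] = xj.size / x.size
                mu[j] = xj.mean()
                var[j] = xj.var() + 1e-6
    return mu, z

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
mu, z = cem(x)
```

The hard assignment makes each iteration cheaper than soft EM and yields a segmentation directly, at the cost of a slightly biased likelihood; that trade-off is what makes CEM attractive as a base for spatially-aware extensions.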

    Parameter optimization for local polynomial approximation based intersection confidence interval filter using genetic algorithm: an application for brain MRI image de-noising

    Get PDF
    Magnetic resonance imaging (MRI) is extensively exploited for more accurate assessment of pathological changes as well as diagnosis. Conversely, MRI suffers from various shortcomings such as ambient noise from the environment, acquisition noise from the equipment, the presence of background tissue, breathing motion, body fat, etc. Consequently, noise reduction is critical, as diverse types of the generated noise limit the efficiency of medical image diagnosis. The local polynomial approximation based intersection confidence interval (LPA-ICI) filter is one of the effective de-noising filters. This filter requires an adjustment of the ICI parameters for efficient window size selection. From the wide range of ICI parametric values, finding the best set of tuned values is itself an optimization problem. The present study proposes a novel technique for parameter optimization of the LPA-ICI filter using a genetic algorithm (GA) for brain MR image de-noising. The experimental results prove that the proposed method outperforms the LPA-ICI method for de-noising in terms of various performance metrics at different noise variance levels. The obtained results report that the ICI parameter values depend on the noise variance and the image under test.
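The GA-based parameter search can be sketched generically. The quadratic `quality` function below is a hypothetical stand-in for the de-noising quality surface over an ICI threshold parameter, not the metric or filter used in the paper:

```python
import numpy as np

def ga_optimize(fitness, bounds, pop=20, gens=40, rng=None):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation; returns the best parameter value found."""
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    P = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        # Tournament selection: better of two random individuals survives
        idx = rng.integers(0, pop, (pop, 2))
        parents = P[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend crossover with a shuffled mate, then Gaussian mutation
        mates = rng.permutation(parents)
        alpha = rng.uniform(0, 1, pop)
        P = alpha * parents + (1 - alpha) * mates
        P = np.clip(P + rng.normal(0, 0.05 * (hi - lo), pop), lo, hi)
    return max(P, key=fitness)

# Hypothetical stand-in for de-noising quality (e.g. PSNR) as a function
# of the ICI threshold parameter; peak placed arbitrarily at 2.0
quality = lambda g: -(g - 2.0) ** 2
best = ga_optimize(quality, bounds=(0.0, 5.0))
```

In the paper's setting the fitness evaluation would run the LPA-ICI filter with the candidate parameter and score the de-noised image, which is why a derivative-free search like a GA fits the problem.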

    Probabilistic partial volume modelling of biomedical tomographic image data

    Get PDF
    EThOS - Electronic Theses Online Service, United Kingdom