
    Spatial based Expectation Maximizing (EM)

    Background: Expectation maximization (EM) is a common approach to image segmentation. Methods: An improvement of the EM algorithm is proposed and its effectiveness for MRI brain image segmentation is investigated. To improve EM performance, the proposed algorithm incorporates neighbourhood information into the clustering process: an average image is first computed as neighbourhood information and then incorporated into the clustering. Optionally, user interaction can be used to refine the segmentation results. Simulated and real MR volumes are used to compare the efficiency of the proposed improvement with existing neighbourhood-based extensions of EM and FCM. Results: The findings show that the proposed algorithm produces a higher similarity index. Conclusions: Experiments demonstrate the effectiveness of the proposed algorithm compared with other existing algorithms at various noise levels.
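The neighbourhood-augmented EM idea can be sketched in a few lines: augment each voxel's intensity with its local average, then fit a Gaussian mixture by EM on the joint features. This is a minimal illustration, not the paper's implementation; the function name, window size, and use of scikit-learn's `GaussianMixture` are my own choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.mixture import GaussianMixture

def spatial_em_segment(image, n_classes=3, window=3, seed=0):
    """Cluster voxels with EM (GMM), augmenting each intensity with
    its local neighbourhood average as a second feature."""
    avg = uniform_filter(image.astype(float), size=window)  # neighbourhood information
    feats = np.stack([image.ravel(), avg.ravel()], axis=1)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    labels = gmm.fit_predict(feats)
    return labels.reshape(image.shape)

# Toy example: a noisy two-region image.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += rng.normal(0, 0.2, img.shape)
seg = spatial_em_segment(img, n_classes=2)
```

Because the neighbourhood-mean feature smooths out isolated noisy voxels, the clustering is less speckled than plain intensity-only EM.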

    Brain MR Image Segmentation Based on an Adaptive Combination of Global and Local Fuzzy Energy

    This paper presents a novel fuzzy algorithm for segmentation of brain MR images and simultaneous estimation of intensity inhomogeneity. The proposed algorithm defines an objective function including a local fuzzy energy and a global fuzzy energy. Based on the assumption that the local image intensities belonging to each different tissue satisfy Gaussian distributions with different means, we derive the local fuzzy energy by utilizing maximum a posteriori probability (MAP) and Bayes rule. The global fuzzy energy is defined by measuring the distance between the original image and the corresponding inhomogeneity-free image. We combine the global fuzzy energy with the local fuzzy energy using an adaptive weight function whose value varies with the local contrast of the image. This combination enables the proposed algorithm to address intensity inhomogeneity and to improve the accuracy of segmentation and its robustness to initialization. In addition, the proposed algorithm incorporates neighborhood spatial information into the membership function to reduce the impact of noise. Experimental results for synthetic and real images validate the desirable performance of the proposed algorithm.
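The adaptive combination of local and global energies can be illustrated with a toy per-voxel weight map. The contrast measure below (normalised local intensity range) is an assumption for illustration only; the paper's exact weight function is not reproduced here, and `e_local`/`e_global` stand in for precomputed energy maps.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def adaptive_weight(image, window=5):
    """Hypothetical contrast-based weight in [0, 1]: high local contrast
    favours the local energy term, low contrast the global one."""
    img = image.astype(float)
    local_range = maximum_filter(img, size=window) - minimum_filter(img, size=window)
    span = local_range.max()
    return local_range / span if span > 0 else np.zeros_like(img)

# Combine per-voxel energies with the spatially varying weight.
img = np.linspace(0, 1, 64).reshape(8, 8)
w = adaptive_weight(img)
e_local = np.ones_like(img)       # placeholder local fuzzy energy
e_global = 2 * np.ones_like(img)  # placeholder global fuzzy energy
e_total = w * e_local + (1 - w) * e_global
```

The key design point is that the weight varies per voxel, so high-contrast regions rely on the local model while flat regions fall back on the global one.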

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.

    Multigrid Nonlocal Gaussian Mixture Model for Segmentation of Brain Tissues in Magnetic Resonance Images


    Multimodal image analysis of the human brain

    Over the past decades, the rapid development of multimodal and non-invasive brain imaging technologies has revolutionised our ability to study the structure and function of the brain. Great progress has been made in assessing brain injury using Magnetic Resonance Imaging (MRI), while electroencephalography (EEG) is considered the gold standard for diagnosing neurological abnormalities. In this thesis we focus on the development of new techniques for multimodal image analysis of the human brain, including MRI segmentation and EEG source localisation. We thereby bring theory and practice together, focusing on two medical applications: (1) automatic 3D MRI segmentation of the adult brain, and (2) multimodal EEG-MRI data analysis of the brain of a newborn with perinatal brain injury. We devote considerable attention to improving and developing new methods for accurate and noise-robust image segmentation, which are subsequently used successfully for brain segmentation in MRI of both adults and newborns. In addition, we developed an integrated multimodal method for EEG source localisation in the neonatal brain. This localisation is used for a comparative study between neonatal EEG seizures and acute perinatal brain lesions visible in MRI.

    Fully automated 3D segmentation of dopamine transporter SPECT images using an estimation-based approach

    Quantitative measures of uptake in caudate, putamen, and globus pallidus in dopamine transporter (DaT) brain SPECT have potential as biomarkers for the severity of Parkinson disease. Reliable quantification of uptake requires accurate segmentation of these regions. However, segmentation is challenging in DaT SPECT due to partial-volume effects, system noise, physiological variability, and the small size of these regions. To address these challenges, we propose an estimation-based approach to segmentation. This approach estimates the posterior mean of the fractional volume occupied by caudate, putamen, and globus pallidus within each voxel of a 3D SPECT image. The estimate is obtained by minimizing a cost function based on the binary cross-entropy loss between the true and estimated fractional volumes over a population of SPECT images, where the distribution of the true fractional volumes is obtained from magnetic resonance images from clinical populations. The proposed method accounts for both sources of partial-volume effects in SPECT, namely the limited system resolution and tissue-fraction effects. The method was implemented using an encoder-decoder network and evaluated using realistic clinically guided SPECT simulation studies, where the ground-truth fractional volumes were known. The method significantly outperformed all other considered segmentation methods and yielded accurate segmentation with Dice similarity coefficients of ~0.80 for all regions. The method was relatively insensitive to changes in voxel size. Further, the method was relatively robust up to ±10 degrees of patient head tilt along the transaxial, sagittal, and coronal planes. Overall, the results demonstrate the efficacy of the proposed method in yielding accurate, fully automated segmentation of caudate, putamen, and globus pallidus in 3D DaT-SPECT images.
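The binary cross-entropy loss over fractional volumes can be sketched directly. Unlike a hard-label loss, the targets here are continuous values in [0, 1]. The function name `fractional_bce` is my own, and the loss is shown per image rather than averaged over a population as in the paper.

```python
import numpy as np

def fractional_bce(true_fv, est_fv, eps=1e-7):
    """Binary cross-entropy between true and estimated fractional
    volumes in [0, 1], averaged over voxels for one region."""
    p = np.clip(est_fv, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(true_fv * np.log(p) + (1 - true_fv) * np.log(1 - p))))

# A perfect prediction attains the minimum of this loss (the entropy of
# the true fractional volumes); a wrong prediction is penalised heavily.
true_fv = np.array([0.0, 0.25, 0.5, 1.0])
good = fractional_bce(true_fv, true_fv)
bad = fractional_bce(true_fv, 1.0 - true_fv)
```

Note that with fractional (soft) targets the minimum of the cross-entropy is not zero but the entropy of the target distribution, which is why the estimator can recover tissue-fraction effects that a hard segmentation would discard.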

    Bridging generative models and Convolutional Neural Networks for domain-agnostic segmentation of brain MRI

    Segmentation of brain MRI scans is paramount in neuroimaging, as it is a prerequisite for many subsequent analyses. Although manual segmentation is considered the gold standard, it suffers from severe reproducibility issues, and is extremely tedious, which limits its application to large datasets. Therefore, there is a clear need for automated tools that enable fast and accurate segmentation of brain MRI scans. Recent methods rely on convolutional neural networks (CNNs). While CNNs obtain accurate results on their training domain, they are highly sensitive to changes in resolution and MRI contrast. Although data augmentation and domain adaptation techniques can increase the generalisability of CNNs, these methods still need to be retrained for every new domain, which requires costly labelling of images. Here, we present a learning strategy to make CNNs agnostic to MRI contrast, resolution, and numerous artefacts. Specifically, we train a network with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation approach where all generation parameters are drawn for each example from uniform priors. As a result, the network is forced to learn domain-agnostic features, and can segment real test scans without retraining. The proposed method almost achieves the accuracy of supervised CNNs on their training domain, and substantially outperforms state-of-the-art domain adaptation methods. Finally, based on this learning strategy, we present a segmentation suite for robust analysis of heterogeneous clinical scans. Overall, our approach unlocks the development of morphometry on millions of clinical scans, which ultimately has the potential to improve the diagnosis and characterisation of neurological disorders.
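The domain-randomisation idea, drawing all generation parameters from uniform priors for each training example, can be sketched as follows. The per-class Gaussian rendering below is a deliberately simplified stand-in for the full generative model (which also randomises bias fields, resolution, and artefacts); function and variable names are my own.

```python
import numpy as np

def synth_scan_from_labels(labels, rng):
    """Render a synthetic 'scan' from a label map, with per-class
    contrast and noise drawn fresh from uniform priors each call."""
    n_classes = int(labels.max()) + 1
    means = rng.uniform(0.0, 1.0, n_classes)  # random per-class intensity
    stds = rng.uniform(0.01, 0.1, n_classes)  # random per-class noise level
    return means[labels] + stds[labels] * rng.standard_normal(labels.shape)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(16, 16))    # one fixed "anatomy"
scan_a = synth_scan_from_labels(labels, rng)  # one random "contrast"
scan_b = synth_scan_from_labels(labels, rng)  # same anatomy, new contrast
```

Training a network on endless such (scan, labels) pairs, where the anatomy is fixed but the appearance changes every time, is what forces it to learn contrast- and resolution-agnostic features.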