564 research outputs found

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the neonatal cortex. The performance of the method is investigated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
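    The partial-volume-aware EM segmentation is the thesis's own contribution and is not reproduced here; as a rough illustration of the underlying idea, the sketch below fits a plain three-class Gaussian mixture (itself fitted by EM) to voxel intensities using off-the-shelf tools. The input file name is hypothetical, and the explicit partial-volume correction described above is omitted.

```python
# Simplified EM tissue classification: a Gaussian mixture over voxel
# intensities. A baseline only -- the thesis adds explicit correction
# for mislabelled partial-volume voxels, which is omitted here.
import numpy as np
import nibabel as nib                      # common NIfTI reader
from sklearn.mixture import GaussianMixture

img = nib.load("neonate_t2.nii.gz")        # hypothetical input volume
data = img.get_fdata()
brain = data[data > 0]                     # crude foreground mask

# Three tissue classes (e.g. CSF, grey matter, white matter); the
# component order is arbitrary -- sort by mean intensity if needed.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(brain.reshape(-1, 1))

# Write the hard segmentation back into the volume's shape.
seg = np.zeros(data.shape, dtype=np.int16)
seg[data > 0] = labels + 1                 # 0 stays background
nib.save(nib.Nifti1Image(seg, img.affine), "neonate_seg.nii.gz")
```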

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of the large neuroimaging datasets currently held in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, enabling more robust registration of large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique: first, a method for the model to handle missing data, which allows entirely missing modalities to be predicted from one, or a few, MR contrasts; second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn them using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, for example, CT scans with haemorrhagic lesions.
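    The thesis's multimodal prior is not reproduced here; the toy sketch below only illustrates the general MAP recipe the abstract describes, combining a Gaussian-noise forward model with a smoothed total-variation prior and minimising by gradient descent on a single 2D slice. All parameter values are illustrative.

```python
# MAP denoising: Gaussian likelihood + smoothed total-variation prior,
# minimised by gradient descent. A toy stand-in for the thesis's prior.
import numpy as np

def tv_grad(x, eps=0.1):
    """Gradient of the smoothed TV penalty sum sqrt(dx^2 + dy^2 + eps^2)."""
    dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences,
    dy = np.diff(x, axis=1, append=x[:, -1:])   # zero at the far edge
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    # adjoint of the forward difference = negative backward difference
    return -(np.diff(px, axis=0, prepend=0.0) + np.diff(py, axis=1, prepend=0.0))

def map_denoise(y, lam=0.1, step=0.1, iters=300):
    """Minimise 0.5 * ||x - y||^2 + lam * TV(x); lam absorbs the
    noise variance of the Gaussian forward model."""
    x = y.copy()
    for _ in range(iters):
        x -= step * ((x - y) + lam * tv_grad(x))
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("noisy MAE:   ", np.abs(noisy - clean).mean())
print("denoised MAE:", np.abs(map_denoise(noisy) - clean).mean())
```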

    Quantitative phase imaging of cells by Digital Holographic Microscopy

    We constructed a digital holographic microscopy (DHM) setup for extracting the quantitative phase information of biological cells. A digital hologram of the object is recorded and the image is reconstructed computationally. The hologram is recorded on a CCD camera, which digitizes the information; hence the method is known as digital holographic microscopy. From the quantitative phase information, we can calculate the specimen (cell) thickness and volume. This method is advantageous compared to existing techniques such as bright-field microscopy, phase contrast microscopy, differential interference contrast and other qualitative phase imaging techniques, since these cannot give exact phase information. In addition, the method is very attractive for live-cell imaging as it does not require any contrast agents. In order to improve the resolution and field of view, the principle of synthetic apertures (SA) has been applied: the CCD camera was moved to 9 positions and the acquired digital holograms were stitched together to increase the field of view to 22 kilopixels. We performed 3-D image reconstructions of a transparent ITO electrode; DHM, being a quantitative phase imaging technique, could estimate the height and thickness of the electrode. To show the improvement in resolution obtained with synthetic apertures, we imaged the USAF resolution chart and showed that its amplitude reconstruction has better resolution in synthetic aperture digital holographic microscopy (SA-DHM) than in DHM. We also reconstructed the 3-dimensional structure of E. coli bacteria using SA-DHM and quantified their length and thickness.
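    As an illustration of the computational reconstruction step, the sketch below propagates a hologram with the angular spectrum method and converts the recovered phase to thickness. The wavelength, pixel pitch, propagation distance and refractive-index contrast are assumed values, the random array stands in for real hologram data, and phase unwrapping, needed in practice for thick specimens, is omitted.

```python
# Numerical hologram reconstruction via the angular spectrum method,
# followed by phase extraction and a phase-to-thickness conversion.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z (metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)    # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, pitch, z = 632.8e-9, 4.65e-6, 0.05   # HeNe laser, CCD pitch, 5 cm
hologram = np.random.rand(512, 512)               # stand-in for recorded data
recon = angular_spectrum(hologram.astype(complex), wavelength, pitch, z)

phase = np.angle(recon)                           # wrapped phase in [-pi, pi]
dn = 0.05                                         # assumed cell/medium index contrast
thickness = phase * wavelength / (2 * np.pi * dn) # optical path -> thickness
```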

    Alzheimer’s disease detection from magnetic resonance imaging: a deep learning perspective

    Aim: To date, many successful attempts have been made to identify various types of lesions with machine learning (ML); however, the recognition of Alzheimer’s disease (AD) from brain images, and the interpretation of the resulting models, remain open research topics. Using structural magnetic resonance imaging (MRI) brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the scope of this work was to find an optimal artificial neural network architecture for multiclass classification in AD, circumventing the dozens of image pre-processing steps and avoiding increased computational complexity. Methods: Two supervised deep neural network (DNN) models were used, a three-dimensional 16-layer Visual Geometry Group network (3D-VGG-16), a standard convolutional network (CNN), and a three-dimensional residual network (ResNet3D), applied to the T1-weighted, 1.5 T ADNI MRI brain images, which were divided into three groups: cognitively normal (CN), mild cognitive impairment (MCI), and AD. A minimal pre-processing procedure was applied to the images before training the two networks. Results: The results suggest that ResNet3D has the better class-prediction performance, reaching above 90% accuracy on the training set and 85% on the validation set. ResNet3D also required less computational power than the 3D-VGG-16 network. It should be emphasized that this result was achieved from raw images, with minimal image preparation for the network. Conclusions: This work shows that ResNet3D may be superior to other CNN models in its ability to classify high-complexity images. A prospective next step is to build an expert system based on residual DNNs for better brain image classification performance in AD detection.
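    To make the residual-network idea concrete, here is a minimal 3D residual block and classifier head in PyTorch. It is a sketch of the general ResNet3D pattern for three-class (CN / MCI / AD) prediction, not the authors' exact architecture, and all layer sizes are illustrative.

```python
# Minimal 3D residual block + classifier head (PyTorch), sketching the
# ResNet3D pattern; the identity skip connection is what distinguishes
# it from a plain CNN such as 3D-VGG-16.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)           # identity skip connection

class TinyResNet3D(nn.Module):
    def __init__(self, n_classes=3):         # CN / MCI / AD
        super().__init__()
        self.stem = nn.Conv3d(1, 16, 7, stride=2, padding=3)
        self.block = ResBlock3D(16)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(16, n_classes))

    def forward(self, x):
        return self.head(self.block(self.stem(x)))

logits = TinyResNet3D()(torch.randn(2, 1, 96, 96, 96))  # batch of two volumes
print(logits.shape)                                      # torch.Size([2, 3])
```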

    An automated system for the classification and segmentation of brain tumours in MRI images based on the modified grey level co-occurrence matrix

    The development of an automated system for the classification and segmentation of brain tumours in MRI scans remains challenging due to the high variability and complexity of brain tumours. Visual examination of MRI scans to diagnose brain tumours is the accepted standard; however, because of the large number of MRI slices produced for each patient, this is becoming a slow, time-consuming process that is also prone to error. This study explores an automated system for the classification and segmentation of brain tumours in MRI scans based on texture feature extraction. The research investigates an appropriate technique for feature extraction and the development of a three-dimensional segmentation method. This was achieved by investigating and integrating several image processing methods related to texture features and the segmentation of MRI brain scans. First, the MRI brain scans were pre-processed by image enhancement, intensity normalization, background segmentation, and correction of the mid-sagittal plane (MSP) of the brain for any possible skewness of the patient’s head. Second, texture features were extracted using a modified grey level co-occurrence matrix (MGLCM) from T2-weighted (T2-w) MRI slices and classified into normal and abnormal using a multi-layer perceptron neural network (MLP). The texture feature extraction method starts from the observation that the human brain is approximately symmetric about the MSP. The extracted features measure the degree of symmetry between the left and right hemispheres of the brain and are used to detect abnormalities, enabling clinicians to quickly reject the MRI scans of patients with normal brains and focus on those with pathological features. Finally, a bounding-3D-boxes-based genetic algorithm (BBBGA) was used to identify the location of the brain tumour, which was then segmented automatically using the three-dimensional active contour without edge (3DACWE) method. The research was validated on two datasets: a real dataset collected from the MRI Unit at Al-Kadhimiya Teaching Hospital in Iraq in 2014, and the standard benchmark multimodal brain tumour segmentation (BRATS 2013) dataset. The experimental results on both datasets demonstrated the efficacy of the proposed system in classifying and segmenting brain tumours in MRI scans: classification accuracies were 97.8% on the collected dataset and 98.6% on the standard dataset, while the segmentation Dice scores were 89% and 89.3% respectively.
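    As a simplified stand-in for the MGLCM symmetry features (the paper's modified matrix is not reproduced here), the sketch below computes standard GLCM properties for the left and mirrored right hemispheres of a slice and reports their absolute differences; near-zero values would indicate a roughly symmetric, likely normal, slice. The slice is assumed already aligned so that the MSP is the centre column.

```python
# Hemisphere-symmetry texture features from a standard GLCM -- a
# simplified stand-in for the paper's modified GLCM (MGLCM).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix in older releases

def symmetry_features(slice2d, levels=32):
    """Absolute left-vs-mirrored-right differences of GLCM properties."""
    edges = np.linspace(slice2d.min(), slice2d.max(), levels)
    q = (np.digitize(slice2d, edges) - 1).astype(np.uint8)   # quantise to `levels` bins
    mid = q.shape[1] // 2
    halves = {"L": q[:, :mid], "R": np.fliplr(q[:, -mid:])}  # mirror right hemisphere
    feats = {}
    for name, half in halves.items():
        glcm = graycomatrix(half, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats[f"{prop}_{name}"] = graycoprops(glcm, prop).mean()
    # Asymmetry scores: near zero for a healthy, roughly symmetric brain.
    return {p: abs(feats[f"{p}_L"] - feats[f"{p}_R"])
            for p in ("contrast", "homogeneity", "energy", "correlation")}

print(symmetry_features(np.random.rand(128, 128)))  # random stand-in slice
```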