
    Unsupervised brain anomaly detection in MR images

    Brain disorders are characterized by morphological deformations in the shape and size of (sub)cortical structures in one or both hemispheres. These deformations cause deviations from the normal pattern of brain asymmetries, resulting in asymmetric lesions that directly affect the patient’s condition. Unsupervised methods aim to learn a model from unlabeled healthy images, so that an unseen image that violates the priors of this model, i.e., an outlier, is considered an anomaly. Consequently, such methods are generic: they can detect lesions from multiple diseases, as long as these differ notably from the healthy training images. This thesis addresses the development of solutions that leverage unsupervised machine learning for the detection and analysis of abnormal brain asymmetries related to anomalies in magnetic resonance (MR) images. First, we propose an automatic probabilistic-atlas-based approach for anomalous brain image segmentation. Second, we explore an automatic method for the detection of abnormal hippocampi from abnormal asymmetries, based on deep generative networks and a one-class classifier. Third, we present a more generic framework to detect abnormal asymmetries across the entire brain hemispheres. Our approach extracts pairs of symmetric regions, called supervoxels, in both hemispheres of a test image under study. One-class classifiers then analyze the asymmetries present in each pair. Experimental results on 3D MR-T1 images from healthy subjects and patients with a variety of lesions show the effectiveness and robustness of the proposed unsupervised approaches for brain anomaly detection.
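
    The final step of this approach (one-class analysis of per-pair asymmetry features) can be illustrated with a minimal sketch. It assumes the asymmetry feature vectors for each pair of symmetric supervoxels have already been extracted, and it uses scikit-learn's OneClassSVM as a stand-in for whichever one-class classifier the thesis employs; all array shapes and parameter values are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical asymmetry features: one row per symmetric supervoxel pair,
# e.g. intensity/texture differences between the left and right regions.
rng = np.random.default_rng(0)
healthy_asym = rng.normal(0.0, 1.0, size=(200, 16))  # training set: healthy subjects only
test_asym = rng.normal(0.0, 1.0, size=(20, 16))      # supervoxel pairs of an unseen subject

# The one-class model learns the "normal asymmetry" region of feature space.
model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
model.fit(healthy_asym)

# Pairs predicted as -1 fall outside the learned model: candidate abnormal asymmetries.
labels = model.predict(test_asym)
anomalous_pairs = np.where(labels == -1)[0]
print("flagged supervoxel pairs:", anomalous_pairs)
```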

    The 5th International Conference on Biomedical Engineering and Biotechnology (ICBEB 2016)


    A novel diffusion tensor imaging-based computer-aided diagnostic system for early diagnosis of autism.

    Autism spectrum disorders (ASDs) represent a significant and growing public health concern. Currently, one in 68 children in the United States has been diagnosed with an ASD, and most children are diagnosed after the age of four, despite the fact that ASDs can be identified as early as age two. The ultimate goal of this thesis is to develop a computer-aided diagnosis (CAD) system for the accurate and early diagnosis of ASDs using diffusion tensor imaging (DTI). This CAD system consists of three main steps. First, the brain tissues are segmented based on three image descriptors: a visual appearance model able to represent a large-dimensional feature space, a shape model that is adapted during the segmentation process using first- and second-order visual appearance features, and a spatially invariant second-order homogeneity descriptor. Second, discriminatory features are extracted from the segmented brains: cortex shape variability is assessed using shape construction methods, and white matter integrity is examined through connectivity analysis. Finally, the diagnostic capabilities of these extracted features are investigated. The accuracy of the presented CAD system has been tested on 25 infants at high risk of developing ASDs. The preliminary diagnostic results are promising in distinguishing autistic from control subjects.
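
    The three-step structure of this CAD system can be summarized as a small pipeline skeleton. The function names and the toy thresholding, feature, and decision rules below are illustrative placeholders, not the thesis's actual appearance/shape models or classifier.

```python
import numpy as np

def segment_brain_tissues(dti_volume: np.ndarray) -> np.ndarray:
    """Placeholder for step 1: appearance/shape/homogeneity-based segmentation."""
    return (dti_volume > dti_volume.mean()).astype(np.uint8)  # toy thresholding stand-in

def extract_features(segmentation: np.ndarray) -> np.ndarray:
    """Placeholder for step 2: cortex shape variability + white-matter connectivity features."""
    return np.array([segmentation.mean(), segmentation.sum()], dtype=float)

def classify(features: np.ndarray) -> str:
    """Placeholder for step 3: diagnosis from the extracted feature vector."""
    return "ASD" if features[0] > 0.5 else "control"  # toy decision rule

volume = np.random.rand(32, 32, 32)  # stand-in for a DTI-derived scalar map
print(classify(extract_features(segment_brain_tissues(volume))))
```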

    A CAD system for early diagnosis of autism using different imaging modalities.

    The term “autism spectrum disorder” (ASD) refers to a collection of neurodevelopmental disorders that affect linguistic, behavioral, and social skills. Autism has many symptoms, most prominently social impairment and repetitive behaviors. It is crucial to diagnose autism at an early stage for better assessment and investigation of this complex syndrome. There have been many efforts to diagnose ASD using different techniques, such as imaging modalities, genetic tests, and behavior reports. Imaging modalities have been extensively exploited for ASD diagnosis, and one of the most successful is magnetic resonance imaging (MRI), which has shown particular promise for the early diagnosis of ASD-related abnormalities. Since their inception in the 1980s, MRI modalities have emerged as powerful means of non-invasive clinical diagnosis of various diseases and abnormalities, and MRI soon became one of the most promising non-invasive modalities for visualizing and diagnosing ASD-related abnormalities. Along with its main advantages of no radiation exposure, high contrast, and high spatial resolution, recent advances in MRI modalities have notably increased diagnostic certainty. Multiple MRI modalities, such as structural MRI (sMRI), which examines anatomical changes, and functional MRI (fMRI), which examines brain activity by monitoring blood-flow changes, have been employed to investigate facets of ASD in order to better understand this complex syndrome. This work aims at developing a new computer-aided diagnostic (CAD) system for autism diagnosis using different imaging modalities. It mainly relies on structural magnetic resonance images for extracting notable shape features from parts of the brain that previous neuropathological studies have shown to correlate with ASD. Shape features from both the cerebral cortex (Cx) and cerebral white matter (CWM) are extracted. Features from these two structures are fused based on recent findings suggesting that Cx changes in autism are related to CWM abnormalities; fusing features from more than one structure also increases the robustness of the CAD system. Moreover, fMRI experiments are conducted and analyzed to find task-related areas of activation in the brains of autistic and typically developing individuals. All sMRI findings are fused with those of fMRI to better understand ASD in terms of both anatomy and function, and thus better classify the two groups. One aspect of the novelty of this CAD system is that sMRI and fMRI studies are both applied to subjects of different ages to diagnose ASD. Building such a CAD system requires three main blocks. First, 3D brain segmentation is applied using a novel hybrid model that combines shape, intensity, and spatial information. Second, shape features from both Cx and CWM are extracted, and an fMRI reward experiment is conducted from which task-related areas of activation are identified. These features are extracted from local areas of the brain to provide an accurate analysis of ASD and correlate it with specific anatomical areas. Third, all the extracted features are fused using a deep-fusion classification network to perform classification and obtain the diagnosis report.
Fusing features from all modalities achieved a classification accuracy of 94.7%, which emphasizes the significance of combining structures/modalities for ASD diagnosis. To conclude, this work could pave the way for a better understanding of the autism spectrum by finding local areas that correlate with the disease. The idea of personalized medicine is also emphasized: the proposed CAD system holds promise for resolving autism endophenotypes and helping clinicians deliver personalized treatment to individuals affected by this complex syndrome.
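
    A minimal sketch of a deep feature-fusion classifier of the kind described above, written in PyTorch: it assumes per-structure feature vectors (Cx shape, CWM shape, fMRI activation) have already been computed, and the layer sizes, encoder design, and two-class head are illustrative assumptions rather than the thesis's actual deep-fusion network.

```python
import torch
import torch.nn as nn

class DeepFusionClassifier(nn.Module):
    """Fuses per-modality feature vectors with modality-specific encoders
    followed by a joint classification head (illustrative sizes)."""
    def __init__(self, dims=(64, 64, 32), hidden=32):
        super().__init__()
        # One small encoder per input stream: Cx shape, CWM shape, fMRI activation.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims
        )
        self.head = nn.Sequential(
            nn.Linear(hidden * len(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # ASD vs. typically developing
        )

    def forward(self, inputs):
        fused = torch.cat([enc(x) for enc, x in zip(self.encoders, inputs)], dim=1)
        return self.head(fused)

# Toy batch: 4 subjects, three feature streams.
model = DeepFusionClassifier()
streams = [torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 32)]
logits = model(streams)
print(logits.shape)  # torch.Size([4, 2])
```

    Concatenating the encoded streams before a shared head is one common fusion strategy; the thesis's actual network may fuse the modalities at different depths.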

    Machine Learning Based Autism Detection Using Brain Imaging

    Autism Spectrum Disorder (ASD) is a group of heterogeneous developmental disabilities that manifest in early childhood. Currently, ASD is primarily diagnosed by assessing the behavioral and intellectual abilities of a child. This behavioral diagnosis can be subjective, time-consuming, and inconclusive; it does not provide insight into the underlying etiology and is not suitable for early detection. Diagnosis based on brain magnetic resonance imaging (MRI), a widely used non-invasive tool, can be objective, can help understand the brain alterations in ASD, and can be suitable for early diagnosis. However, the brain morphological findings in ASD from MRI studies have been inconsistent. Moreover, there has been limited success in machine-learning-based ASD detection using MRI-derived brain features. In this thesis, we begin by demonstrating that the low success in ASD detection and the inconsistent findings are likely attributable to the heterogeneity of brain alterations in ASD. We then show that ASD detection can be significantly improved by mitigating this heterogeneity with the help of behavioral and demographic information. We demonstrate that finding brain markers in well-defined sub-groups of ASD is easier and more insightful than identifying markers across the whole spectrum. Finally, our study focused on brain MRI of a pediatric cohort (3 to 4 years) and achieved high classification success (AUC of 95%). Results of this study indicate three main alterations in early ASD brains: 1) abnormally large ventricles, 2) highly folded cortices, and 3) low image intensity in white matter regions, suggesting myelination deficits indicative of decreased structural connectivity. The results of this thesis demonstrate that meaningful brain markers of ASD can be extracted by applying machine learning techniques to brain MRI data. This data-driven approach can be a powerful tool for early detection and for understanding the anatomical underpinnings of ASD.
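
    The classification setting evaluated above can be sketched as follows, with random numbers standing in for the MRI-derived morphological features (e.g. ventricle volume, cortical folding, white-matter intensity); the 95% AUC reported in the abstract comes from the thesis, not from this toy example, and the choice of a random forest here is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-subject features derived from structural MRI
# (e.g. ventricle volume, gyrification index, mean WM intensity).
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)  # 1 = ASD, 0 = typically developing

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")  # ~0.5 on random data, by construction
```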

    A Parameter-Efficient Deep Dense Residual Convolutional Neural Network for Volumetric Brain Tissue Segmentation from Magnetic Resonance Images

    Brain tissue segmentation is a common medical image processing problem that deals with identifying a region of interest in the human brain from medical scans. It is a fundamental step in neuroscience research and clinical diagnosis. Magnetic resonance (MR) images are widely used for segmentation in view of their non-invasive acquisition, high spatial resolution, and varied contrast information. Accurate segmentation of brain tissues from MR images is very challenging due to the presence of motion artifacts, low signal-to-noise ratio, intensity overlaps, and intra- and inter-subject variability. Convolutional neural networks (CNNs) recently employed for segmentation provide remarkable advantages over traditional and manual segmentation methods; however, their complex architectures and large number of parameters make them computationally expensive and difficult to optimize. In this thesis, a novel learning-based algorithm using a three-dimensional deep convolutional neural network is proposed for efficient parameter reduction and compact feature representation, learning an end-to-end mapping of T1-weighted (T1w) and/or T2-weighted (T2w) brain MR images to the probability of each voxel belonging to the different brain tissue labels, namely, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The basic idea of the proposed method is to use densely connected convolutional layers and residual skip-connections to increase representation capacity, facilitate better gradient flow, improve learning, and significantly reduce the number of parameters in the network. The network is independently trained with three different loss functions, cross-entropy, Dice similarity, and a combination of the two, and the results are compared to determine the better loss function for training. The model has significantly fewer network parameters than state-of-the-art methods for brain tissue segmentation. Experiments are performed on the single-modality IBSR18 dataset, containing high-resolution T1-weighted MR scans of diverse age groups, and the multi-modality iSeg-2017 dataset, containing T1w and T2w MR scans of infants. It is shown that the proposed method provides the best performance on the test sets of both datasets amongst existing deep-learning-based methods for brain tissue segmentation from MR images, and achieves competitive performance in the iSeg-2017 challenge with 47% to 98% fewer parameters than the other deep-learning-based architectures.
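
    A minimal PyTorch sketch of the combined loss compared during training (cross-entropy plus a soft Dice term); the equal weighting, smoothing constant, and four-label setup are assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, num_classes=4, weight=0.5, eps=1e-6):
    """Combined loss for voxel-wise tissue segmentation.
    logits: (N, C, D, H, W) raw network outputs; target: (N, D, H, W) integer labels
    (e.g. background, WM, GM, CSF)."""
    ce = F.cross_entropy(logits, target)

    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                                      # sum over batch and spatial axes
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)  # per-class soft Dice
    dice_loss = 1.0 - dice.mean()

    return weight * ce + (1.0 - weight) * dice_loss

# Toy example: batch of 2 volumes, 4 classes, 8^3 voxels.
logits = torch.randn(2, 4, 8, 8, 8)
target = torch.randint(0, 4, (2, 8, 8, 8))
print(dice_ce_loss(logits, target).item())
```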

    A clustering approach based on the divide-and-conquer technique and optimum-path forest

    Advisor: Alexandre Xavier Falcão. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Data clustering is one of the main challenges when solving Data Science problems. Despite its progress over almost one century of research, clustering algorithms still fail in identifying groups naturally related to the semantics of the problem. Moreover, the advances in data acquisition, communication, and storage technologies add crucial challenges with a considerable increase in data, which is not handled by most techniques. We address these issues by proposing a divide-and-conquer approach to a clustering technique that is unique in finding one group per dome of the probability density function of the data: the Optimum-Path Forest (OPF) clustering algorithm. In the OPF clustering technique, samples are taken as nodes of a graph whose arcs connect the k-nearest neighbors in the feature space. The nodes are weighted by their probability density values and a connectivity map is maximized such that each maximum of the probability density function becomes the root of an optimum-path tree (cluster). The best value of k is estimated by optimization within an application-specific interval of values. The problem with this method is that a high number of samples makes the algorithm prohibitive, due to the memory required to store the graph and the computational time to obtain the clusters for the best value of k. Since the existing solutions lead to ineffective results, we revisit the problem by proposing a two-level divide-and-conquer approach. At the first level, the dataset is divided into smaller subsets (blocks) and the samples belonging to each block are grouped by the OPF algorithm. Then, the representative samples (more specifically, the roots of the optimum-path forest) are taken to a second level, where they are clustered again. Finally, the group labels obtained in the second level are transferred to all samples of the dataset through their representatives from the first level. With this approach, we can use all samples, or at least many samples, in the unsupervised learning process without affecting the grouping performance, and therefore the procedure is less likely to lose relevant grouping information. We show that our proposal obtains satisfactory results in two scenarios, image segmentation and general data clustering, in comparison with popular baselines. In the first scenario, our technique achieves better results than the others in all tested image databases. In the second scenario, it obtains outcomes similar to an optimized version of the traditional OPF clustering algorithm.
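
    A minimal sketch of the two-level divide-and-conquer scheme described above. Because the OPF algorithm itself is not reproduced here, scikit-learn's MeanShift, which also assigns one cluster per density mode, stands in for it, and the sample closest to each cluster centre plays the role of the optimum-path-forest root; the block count and data are illustrative.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.metrics import pairwise_distances_argmin

def two_level_clustering(X, n_blocks=4):
    """Two-level divide-and-conquer clustering: cluster each block, keep one
    representative sample per first-level cluster, re-cluster the representatives,
    then propagate the second-level labels back through the representatives."""
    blocks = np.array_split(np.random.permutation(len(X)), n_blocks)
    rep_idx = []
    first_level = {}  # sample index -> index of its first-level representative
    for block in blocks:
        ms = MeanShift().fit(X[block])
        # Representative of each first-level cluster: block sample closest to its centre.
        centre_reps = block[pairwise_distances_argmin(ms.cluster_centers_, X[block])]
        for i, lab in zip(block, ms.labels_):
            first_level[i] = centre_reps[lab]
        rep_idx.extend(centre_reps.tolist())

    rep_idx = np.array(sorted(set(rep_idx)))
    second = MeanShift().fit(X[rep_idx])  # second level: cluster representatives only
    rep_label = dict(zip(rep_idx.tolist(), second.labels_.tolist()))
    return np.array([rep_label[first_level[i]] for i in range(len(X))])

X = np.random.rand(400, 2)
labels = two_level_clustering(X)
print(len(np.unique(labels)), "clusters")
```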

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis I will present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
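
    The generative approach described above can be summarized, in its generic form, as maximum a posteriori (MAP) estimation under a forward model and a prior; the notation below is a standard textbook statement of this idea, not the thesis's specific likelihood or prior.

```latex
% Generic MAP formulation assumed to underlie the preprocessing models:
% y = observed hospital scan(s), x = latent high-quality image or label map.
\hat{\mathbf{x}}
  = \arg\max_{\mathbf{x}} \, p(\mathbf{x} \mid \mathbf{y})
  = \arg\max_{\mathbf{x}} \, p(\mathbf{y} \mid \mathbf{x}) \, p(\mathbf{x})
```

    Here the likelihood term plays the role of the realistic forward model (e.g. thick-slice acquisition or scanner noise), and the prior term corresponds to the informative priors mentioned above (the multimodal image prior, or the CNN-based Markov random field prior over tissue classes).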

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities, widely used in brain tumour analysis, and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features including intensity-based, Gabor textons, fractal analysis and curvatures are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. Extremely randomised trees (ERT) classify each superpixel into tumour and non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI dataset. This is then followed by a random forests (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. The information from the advanced protocols of diffusion tensor imaging (DTI), i.e. isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions is used as a feature map, which is learned from the multimodal MRI training dataset using the FCN. The machine-learned features, along with hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissues and different parts of the tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; whilst, for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset.
The results of the FCN-based method show that applying the RF classifier to multimodal MRI images, using machine-learned features based on the FCN together with hand-designed features based on textons, provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against the ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively. The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from the shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from the deep network (with trainable filters) learn the intrinsic features. Both global and local information are combined using these two types of networks, which improves the segmentation accuracy.
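
    The first stage described above (superpixel-wise tumour classification with extremely randomised trees) can be sketched as follows, using SLIC superpixels from scikit-image and simple mean/standard-deviation intensity features as a stand-in for the full intensity, Gabor texton, fractal, and curvature feature set; the toy image, mask, and parameter values are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import ExtraTreesClassifier

def superpixel_features(image, segments):
    """One feature vector per superpixel; here just mean and std of intensity
    (the thesis uses intensity, Gabor texton, fractal and curvature features)."""
    return np.array([[image[segments == s].mean(), image[segments == s].std()]
                     for s in np.unique(segments)])

# Toy 2D "FLAIR slice" and toy tumour mask standing in for expert labels.
image = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=int)
mask[40:70, 50:90] = 1

segments = slic(image, n_segments=200, channel_axis=None)  # grayscale SLIC (scikit-image >= 0.19)
X = superpixel_features(image, segments)
# Label each superpixel as tumour if most of its pixels fall inside the mask.
y = np.array([int(mask[segments == s].mean() > 0.5) for s in np.unique(segments)])

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```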