
    Simultaneous segmentation and grading of anatomical structures for patient's classification: application to Alzheimer's Disease

    Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu/ADNI). In this paper, we propose an innovative approach to robustly and accurately detect Alzheimer's disease (AD) based on the distinction of specific atrophic patterns of anatomical structures such as the hippocampus (HC) and entorhinal cortex (EC). The proposed method simultaneously performs segmentation and grading of structures to efficiently capture the anatomical alterations caused by AD. Known as SNIPE (Scoring by Non-local Image Patch Estimator), the proposed grading measure is based on a nonlocal patch-based framework and estimates the similarity of the patch surrounding the voxel under study with all the patches present in different training populations. In this study, the training library was composed of two populations: 50 cognitively normal (CN) subjects and 50 patients with AD, randomly selected from the ADNI database. During our experiments, the classification accuracy of patients (CN vs. AD) using several biomarkers was compared: HC and EC volumes, the grade of these structures, and finally the combination of their volume and their grade. Tests were completed in a leave-one-out framework using discriminant analysis. First, we showed that biomarkers based on HC provide better classification accuracy than biomarkers based on EC. Second, we demonstrated that structure grading is a more powerful measure than structure volume to distinguish both populations, with a classification accuracy of 90%. Finally, by adding the ages of subjects in order to better separate age-related structural changes from disease-related anatomical alterations, SNIPE obtained a classification accuracy of 93%.

    Data collection and sharing for this project were funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma AG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation, Genentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson, Eli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc, F. Hoffman-La Roche, Schering-Plough, Synarc, Inc., as well as non-profit partners the Alzheimer's Association and Alzheimer's Drug Discovery Foundation, with participation from the U.S. Food and Drug Administration. Private sector contributions to ADNI are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of California, Los Angeles. This research was also supported by NIH grants P30AG010129 and K01 AG030514, and the Dana Foundation.

    Coupé, P.; Eskildsen, S.F.; Manjón Herrera, J.V.; Fonov, V.S.; Collins, D.L.; Alzheimer's Disease Neuroimaging Initiative (2012). Simultaneous segmentation and grading of anatomical structures for patient's classification: application to Alzheimer's Disease. NeuroImage, 59(4):3736-3747. https://doi.org/10.1016/j.neuroimage.2011.10.080
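
    The grading step described above admits a compact sketch. The snippet below is a minimal illustration of a nonlocal patch-based grading score in the spirit of SNIPE, assuming Gaussian patch-similarity weights and ±1 population labels; the smoothing parameter h and the patch handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def snipe_like_grading(target_patch, cn_patches, ad_patches, h=1.0):
    """Illustrative nonlocal patch-based grading score (SNIPE-style sketch).

    target_patch : 1-D array, the flattened patch around the voxel under study.
    cn_patches   : 2-D array (n_cn, patch_size), patches from cognitively normal subjects.
    ad_patches   : 2-D array (n_ad, patch_size), patches from AD patients.
    h            : smoothing parameter of the nonlocal weights (assumed value).

    Returns a grade in [-1, 1]: values near +1 indicate similarity to the CN
    library, values near -1 indicate similarity to the AD library.
    """
    library = np.vstack([cn_patches, ad_patches])
    labels = np.concatenate([np.ones(len(cn_patches)), -np.ones(len(ad_patches))])

    # Nonlocal weights: patches that resemble the target patch contribute more.
    dist2 = np.sum((library - target_patch) ** 2, axis=1)
    weights = np.exp(-dist2 / (h ** 2))

    return float(np.sum(weights * labels) / (np.sum(weights) + 1e-12))
```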

    Development of Anatomical and Functional Magnetic Resonance Imaging Measures of Alzheimer Disease

    Alzheimer disease is considered to be a progressive neurodegenerative condition, clinically characterized by cognitive dysfunction and memory impairments. Incorporating imaging biomarkers in the early diagnosis and monitoring of disease progression is increasingly important in the evaluation of novel treatments. The purpose of the work in this thesis was to develop and evaluate novel structural and functional biomarkers of disease to improve Alzheimer disease diagnosis and treatment monitoring. Our overarching hypothesis is that magnetic resonance imaging methods that sensitively measure brain structure and functional impairment have the potential to identify people with Alzheimer's disease prior to the onset of cognitive decline. Since the hippocampus is considered to be one of the first brain structures affected by Alzheimer disease, in our first study a reliable and fully automated approach was developed to quantify medial temporal lobe atrophy using magnetic resonance imaging; this measurement of medial temporal lobe atrophy showed differences between diagnostic groups. In the second study, a novel biomarker of brain activity was developed based on a first-order textural feature of the resting-state functional magnetic resonance imaging signal. The mean brain activity metric was shown to be significantly lower in people with Alzheimer disease and was compared with 18F-labeled fluorodeoxyglucose positron emission tomography. In the final study, we examine whether combined measures of gait and cognition could predict medial temporal lobe atrophy over 18 months in a small cohort of people (N=22) with mild cognitive impairment. The results showed that measures of gait impairment can help to predict medial temporal lobe atrophy in people with mild cognitive impairment. The work in this thesis contributes to the growing evidence that specific magnetic resonance imaging measures of brain structure and function can be used to identify and monitor the progression of Alzheimer's disease. Continued refinement of these methods and larger longitudinal studies will be needed to establish whether the specific metrics of brain dysfunction developed in this thesis can be of clinical benefit and aid in drug development.
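
    For illustration, a first-order summary of this kind can be as simple as averaging the preprocessed resting-state signal over brain voxels. The sketch below assumes a 4-D BOLD array and a binary brain mask; the exact first-order feature and preprocessing used in the thesis may differ.

```python
import numpy as np

def mean_brain_activity(fmri_4d, brain_mask):
    """Illustrative first-order summary of a resting-state fMRI series.

    fmri_4d    : array of shape (x, y, z, t) with the preprocessed BOLD signal.
    brain_mask : boolean array of shape (x, y, z) selecting brain voxels.

    Returns the mean signal over all brain voxels and time points.
    """
    voxels = fmri_4d[brain_mask]      # shape (n_voxels, t)
    return float(voxels.mean())
```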

    A generative probability model of joint label fusion for multi-atlas based brain segmentation

    Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate possible misalignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, and thus do not necessarily provide an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image using the most representative atlas patches that also show the greatest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches correctly predicting the labels, by analyzing the correlation of their morphological error patterns and the labeling consensus among atlases. The patch dependencies are further updated recursively based on the latest labeling results to correct possible labeling errors, which fits naturally into the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole-brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison with the conventional patch-based labeling method, indicating the potential application of the proposed method in future clinical studies.
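
    As a rough sketch of the sparse reconstruction step, the snippet below estimates non-negative, sparsity-constrained fusion weights for a single target voxel and fuses the atlas labels by a weighted vote. The pairwise patch-dependency model and EM refinement described in the abstract are omitted, and the alpha value is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_fusion(target_patch, atlas_patches, atlas_labels, alpha=0.01):
    """Illustrative sparsity-constrained label fusion for one target voxel.

    target_patch  : (p,) flattened intensity patch around the target voxel.
    atlas_patches : (n, p) candidate patches from the registered atlases.
    atlas_labels  : (n,) binary labels (0/1) of the atlas patches' centre voxels.
    alpha         : sparsity level (assumed value, not taken from the paper).
    """
    # Sparse, non-negative weights that reconstruct the target patch from a
    # small number of atlas patches (columns of the design matrix).
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    lasso.fit(atlas_patches.T, target_patch)
    w = lasso.coef_

    if w.sum() <= 0:                     # fall back to the single nearest patch
        w = np.zeros_like(w)
        w[np.argmin(np.sum((atlas_patches - target_patch) ** 2, axis=1))] = 1.0

    # Weighted vote: probability that the target voxel carries the foreground label.
    return float(np.dot(w, atlas_labels) / w.sum())
```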

    Transfer learning by feature-space transformation: A method for Hippocampus segmentation across scanners

    Many successful approaches in MR brain segmentation use supervised voxel classification, which requires manually labeled training images that are representative of the test images to segment. However, the performance of such methods often deteriorates if training and test images are acquired with different scanners or scanning parameters, since this leads to differences in feature representations between training and test data. In this paper we propose a feature-space transformation (FST) to overcome such differences in feature representations. The proposed FST is derived from unlabeled images of a subject that was scanned with both the source and the target scan protocol. After an affine registration, these images give a mapping between source and target voxels in the feature space. This mapping is then used to map all training samples to the feature representation of the test samples. We evaluated the benefit of the proposed FST on hippocampus segmentation. Experiments were performed on two datasets: one with relatively small differences between training and test images, and one with large differences. In both cases, the FST significantly improved performance compared to using only image normalization. Additionally, we showed that our FST can be used to improve the performance of a state-of-the-art patch-based atlas-fusion technique in the case of large differences between scanners.
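
    A minimal sketch of such a feature-space mapping is given below: after affine registration, voxels of the paired subject provide corresponding source- and target-protocol feature vectors, and each training feature is replaced by the average target-protocol feature of its nearest paired source voxels. The k-nearest-neighbour averaging is an assumption for illustration, not necessarily the mapping used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_space_transform(train_feats, paired_src_feats, paired_tgt_feats, k=5):
    """Illustrative feature-space transformation (FST) sketch.

    train_feats      : (n_train, d) features of labeled training voxels (source protocol).
    paired_src_feats : (m, d) features of the paired subject under the source protocol.
    paired_tgt_feats : (m, d) features of the same voxels under the target protocol
                       (rows correspond after affine registration).
    k                : number of neighbours to average (assumed value).
    """
    tree = cKDTree(paired_src_feats)
    _, idx = tree.query(train_feats, k=k)        # idx shape: (n_train, k)
    # Replace each training feature by the mean target-protocol feature of its
    # k nearest neighbours among the paired source-protocol voxels.
    return paired_tgt_feats[idx].mean(axis=1)    # shape (n_train, d)
```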

    Machine learning for efficient recognition of anatomical structures and abnormalities in biomedical images

    Three studies have been carried out to investigate new approaches to efficient image segmentation and anomaly detection. The first study investigates the use of deep learning in patch based segmentation. Current approaches to patch based segmentation use low level features such as the sum of squared differences between patches. We argue that better segmentation can be achieved by harnessing the power of deep neural networks. Currently these networks make extensive use of convolutional layers. However, we argue that in the context of patch based segmentation, convolutional layers have little advantage over the canonical artificial neural network architecture. This is because a patch is small, and does not need decomposition and thus will not benefit from convolution. Instead, we make use of the canonical architecture in which neurons only compute dot products, but also incorporate modern techniques of deep learning. The resulting classifier is much faster and less memory-hungry than convolution based networks. In a test application to the segmentation of hippocampus in human brain MR images, we significantly outperformed prior art with a median Dice score up to 90.98% at a near real-time speed (<1s). The second study is an investigation into mouse phenotyping, and develops a high-throughput framework to detect morphological abnormality in mouse embryo micro-CT images. Existing work in this line is centred on, either the detection of phenotype-specific features or comparative analytics. The former approach lacks generality and the latter can often fail, for example, when the abnormality is not associated with severe volume variation. Both these approaches often require image segmentation as a pre-requisite, which is very challenging when applied to embryo phenotyping. A new approach to this problem in which non-rigid registration is combined with robust principal component analysis (RPCA), is proposed. The new framework is able to efficiently perform abnormality detection in a batch of images. It is sensitive to both volumetric and non-volumetric variations, and does not require image segmentation. In a validation study, it successfully distinguished the abnormal VSD and polydactyly phenotypes from the normal, respectively, at 85.19% and 88.89% specificities, with 100% sensitivity in both cases. The third study investigates the RPCA technique in more depth. RPCA is an extension of PCA that tolerates certain levels of data distortion during feature extraction, and is able to decompose images into regular and singular components. It has previously been applied to many computer vision problems (e.g. video surveillance), attaining excellent performance. However these applications commonly rest on a critical condition: in the majority of images being processed, there is a background with very little variation. By contrast in biomedical imaging there is significant natural variation across different images, resulting from inter-subject variability and physiological movements. Non-rigid registration can go some way towards reducing this variance, but cannot eliminate it entirely. To address this problem we propose a modified framework (RPCA-P) that is able to incorporate natural variation priors and adjust outlier tolerance locally, so that voxels associated with structures of higher variability are compensated with a higher tolerance in regularity estimation. 
In an experimental study on the same mouse embryo micro-CT data, the modified framework notably improved the detection specificity to 94.12% for the VSD and 90.97% for the polydactyly phenotypes, while maintaining the sensitivity at 100%.
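
    For reference, the low-rank plus sparse decomposition at the heart of RPCA can be sketched with the standard principal component pursuit ADMM iterations: rows of D would be flattened, registered images, L captures the shared regular component, and S collects candidate abnormalities. The parameter defaults below follow common choices rather than the thesis implementation, and the locally adjusted tolerances of RPCA-P are not modelled.

```python
import numpy as np

def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Minimal robust PCA (principal component pursuit) sketch via ADMM.

    Decomposes D ~ L + S with L low-rank and S sparse.
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)

    def shrink(X, tau):                  # elementwise soft-thresholding
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):                     # singular value thresholding
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(s, tau)) @ Vt

    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)     # sparse update
        R = D - L - S
        Y += mu * R                              # dual update
        if np.linalg.norm(R, 'fro') <= tol * norm_D:
            break
    return L, S
```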

    Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
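
    As a toy illustration of the generative parametric idea, the sketch below fits a univariate Gaussian mixture to brain-masked intensities with EM; the validated method additionally uses deformable probabilistic atlas priors, bias-field modelling and multi-contrast likelihoods, all of which are omitted here.

```python
import numpy as np

def gmm_tissue_em(intensities, n_classes=3, n_iter=50):
    """Minimal EM sketch for intensity-based tissue classification."""
    x = np.asarray(intensities, dtype=float)
    # Initialise class means by spreading them over the intensity range.
    mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: update mixture weights, means and variances.
        nk = resp.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6

    return resp.argmax(axis=1), mu, var   # hard labels and class parameters
```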

    Automated Morphometric Characterization of the Cerebral Cortex for the Developing and Ageing Brain

    Morphometric characterisation of the cerebral cortex can provide information about patterns of brain development and ageing and may be relevant for diagnosis and estimation of the progression of diseases such as Alzheimer's, Huntington's, and schizophrenia. Therefore, understanding and describing the differences between populations in terms of structural volume, shape and thickness is of critical importance. Methodologically, due to data quality, presence of noise, partial volume (PV) effects, limited resolution and pathological variability, the automated, robust and time-consistent estimation of morphometric features is still an unsolved problem. This thesis focuses on the development of tools for robust cross-sectional and longitudinal morphometric characterisation of the human cerebral cortex. It describes techniques for tissue segmentation, structural and morphometric characterisation, and cross-sectional and longitudinal cortical thickness estimation from serial MRI images in both adults and neonates. Two new probabilistic brain tissue segmentation techniques are introduced in order to accurately and robustly segment the brains of elderly and neonatal subjects, even in the presence of marked pathology. Two other algorithms based on the concept of multi-atlas segmentation propagation and fusion are also introduced in order to parcellate the brain into its multiple composing structures with the highest possible segmentation accuracy. Finally, we explore the use of the Khalimsky cubic complex framework for the extraction of topologically correct thickness measurements from probabilistic segmentations without explicit parametrisation of the edge. A longitudinal extension of this method is also proposed. The work presented in this thesis has been extensively validated on elderly and neonatal data from several scanners, sequences and protocols. The proposed algorithms have also been successfully applied to breast and heart MRI, neck and colon CT, and also to small animal imaging. All the algorithms presented in this thesis are available as part of the open-source package NiftySeg.
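
    As a crude illustration of thickness estimation from probabilistic segmentations, the sketch below approximates thickness at each grey-matter voxel as the sum of its Euclidean distances to the white-matter and CSF boundaries. This is only a stand-in for the topology-preserving Khalimsky-complex approach described above; the threshold and isotropic voxel size are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cortical_thickness_map(gm_prob, wm_prob, csf_prob, voxel_size=1.0, thr=0.5):
    """Illustrative voxel-wise cortical thickness estimate.

    gm_prob, wm_prob, csf_prob : probabilistic tissue maps of identical shape.
    voxel_size : voxel size in mm (assumed isotropic).
    """
    gm = gm_prob >= thr
    # Distance from every non-WM voxel to the nearest WM voxel, and likewise for CSF.
    dist_to_wm = distance_transform_edt(wm_prob < thr) * voxel_size
    dist_to_csf = distance_transform_edt(csf_prob < thr) * voxel_size
    # Thickness only defined inside the grey-matter ribbon.
    return np.where(gm, dist_to_wm + dist_to_csf, 0.0)
```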

    Explainable deep learning classifiers for disease detection based on structural brain MRI data

    This doctoral thesis investigates how well deep learning can support the diagnosis of neurodegenerative diseases. In five experimental studies, the application of convolutional neural networks (CNNs) to magnetic resonance imaging (MRI) data is examined. A particular focus is placed on the explainability of these otherwise opaque models. Using methods from explainable artificial intelligence (AI), heatmaps are generated that visualize the relevance of individual image regions to the model. The five studies of this dissertation demonstrate the potential of CNNs for disease detection on neurological MRI, particularly when combined with explainable-AI methods. Several challenges were identified in the studies and possible solutions were evaluated in the experiments. Across all studies, the CNNs achieved good classification accuracies and could be validated by comparing heatmaps against the clinical literature. In addition, a new CNN architecture was developed that is specialized for the spatial characteristics of brain MRI images.

    Deep learning, and especially convolutional neural networks (CNNs), has a high potential for being implemented in clinical decision support software for tasks such as diagnosis and prediction of disease courses. This thesis has studied the application of CNNs to structural MRI data for diagnosing neurological diseases. Specifically, multiple sclerosis and Alzheimer's disease were used as classification targets due to their high prevalence, data availability and apparent biomarkers in structural MRI data. The classification task is challenging since pathology can be highly individual and difficult for human experts to detect, and because of small sample sizes, which are caused by the high acquisition cost and sensitivity of medical imaging data. A roadblock to adopting CNNs in clinical practice is their lack of interpretability. Therefore, after optimizing the machine learning models for predictive performance (e.g. balanced accuracy), we employed explainability methods to study the reliability and validity of the trained models. The deep learning models achieved good predictive performance of over 87% balanced accuracy on all tasks, and the explainability heatmaps showed coherence with known clinical biomarkers for both disorders. Explainability methods were compared quantitatively using brain atlases, and shortcomings regarding their robustness were revealed. Further investigations showed clear benefits of transfer learning and image registration on model performance. Lastly, a new CNN layer type was introduced, which incorporates a prior on the spatial homogeneity of neuro-MRI data. CNNs excel when used on natural images, which possess spatial heterogeneity, and even though MRI data and natural images share computational similarities, the composition and orientation of neuro-MRI are very distinct. The introduced patch-individual filter (PIF) layer breaks the assumption of spatial invariance of CNNs and reduces convergence time on different data sets without reducing predictive performance. The presented work highlights many challenges that CNNs for disease diagnosis face on MRI data, and defines as well as tests strategies to overcome them.
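
    A minimal sketch of a patch-individual filter layer is shown below: the feature map is split into a fixed grid and each grid cell gets its own convolution, breaking the spatial weight sharing of a standard CNN. The grid size, channel counts and 2-D formulation are illustrative assumptions rather than the configuration used in the thesis.

```python
import torch
import torch.nn as nn

class PatchIndividualFilters2D(nn.Module):
    """Sketch of a patch-individual filter (PIF) style layer."""

    def __init__(self, in_ch, out_ch, grid=(4, 4), kernel_size=3):
        super().__init__()
        self.grid = grid
        # One independent convolution per grid cell: no weight sharing across patches.
        self.convs = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(grid[0] * grid[1])
        ])

    def forward(self, x):                      # x: (batch, in_ch, H, W)
        b, c, h, w = x.shape
        gh, gw = self.grid
        ph, pw = h // gh, w // gw              # assumes H and W divisible by the grid
        rows = []
        for i in range(gh):
            cols = []
            for j in range(gw):
                patch = x[:, :, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                cols.append(self.convs[i * gw + j](patch))
            rows.append(torch.cat(cols, dim=3))
        return torch.cat(rows, dim=2)
```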