893 research outputs found

    Atlas-Guided Segmentation of Vervet Monkey Brain MRI

    Get PDF
    The vervet monkey is an important nonhuman primate model that allows the study of isolated environmental factors in a controlled environment. Analysis of monkey MRI often suffers from lower image quality than human MRI because clinical equipment is typically used to image the smaller monkey brain while higher spatial resolution is required. This, together with the anatomical differences of the monkey brain, complicates the use of neuroimage analysis pipelines tuned for human MRI. In this paper, we developed an open-source image analysis framework based on the tools available within the 3D Slicer software to support a biological study that investigates the effect of chronic ethanol exposure on brain morphometry in a longitudinally followed population of male vervets. We first developed a computerized atlas of vervet monkey brain MRI, which was used to encode the typical appearance of the individual brain structures in MRI and their spatial distribution. The atlas was then used as a spatial prior during automatic segmentation to process two longitudinal scans per subject. Our evaluation confirms the consistency and reliability of the automatic segmentation. The comparison of atlas construction strategies reveals that the use of a population-specific atlas improves the segmentation accuracy for subcortical brain structures. The contribution of this work is twofold. First, we describe an image processing workflow, specifically tuned towards the analysis of vervet MRI, that consists solely of open-source software tools. Second, we develop a digital atlas of vervet monkey brain MRI to enable similar studies that rely on the vervet model.
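    The abstract does not spell out how the atlas prior enters the segmentation; the sketch below only illustrates the general idea of atlas-guided labeling, combining a registered probabilistic atlas with a simple per-label Gaussian intensity model and taking the per-voxel maximum a posteriori label. The array names and the Gaussian appearance model are assumptions, not the paper's actual 3D Slicer-based pipeline.

```python
# Hypothetical sketch of atlas-guided MAP labeling (not the paper's pipeline).
# `atlas_prior` holds per-label probabilities already registered to the subject;
# each label's appearance is modeled as a single Gaussian (an assumption).
import numpy as np

def map_segment(image, atlas_prior, means, stds):
    """image: (X, Y, Z) intensities; atlas_prior: (L, X, Y, Z) label priors;
    means, stds: per-label Gaussian appearance parameters (length L)."""
    log_post = np.empty_like(atlas_prior)
    for l in range(atlas_prior.shape[0]):
        # Gaussian log-likelihood of the observed intensity under label l
        log_lik = -0.5 * ((image - means[l]) / stds[l]) ** 2 - np.log(stds[l])
        # Posterior is proportional to likelihood times atlas prior (log domain)
        log_post[l] = log_lik + np.log(atlas_prior[l] + 1e-12)
    return np.argmax(log_post, axis=0)  # per-voxel MAP label

# Toy usage: 4 labels on a 32^3 volume with random data
rng = np.random.default_rng(0)
img = rng.normal(100.0, 15.0, (32, 32, 32))
prior = rng.dirichlet(np.ones(4), (32, 32, 32)).transpose(3, 0, 1, 2)
labels = map_segment(img, prior, means=[60, 90, 120, 150], stds=[10, 10, 10, 10])
```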

    Trustworthy Deep Learning for Medical Image Segmentation

    Full text link
    Despite the recent success of deep learning methods in achieving new state-of-the-art accuracy for medical image segmentation, some major limitations still restrict their deployment in clinics. One major limitation of deep learning-based segmentation methods is their lack of robustness to variability in the image acquisition protocol and in the imaged anatomy that was not represented, or was underrepresented, in the training dataset. This suggests adding new manually segmented images to the training dataset to better cover the image variability. However, in most cases, the manual segmentation of medical images requires highly skilled raters and is time-consuming, making this solution prohibitively expensive. Even when manually segmented images from different sources are available, they are rarely annotated for exactly the same regions of interest. This poses an additional challenge for current state-of-the-art deep learning segmentation methods, which rely on supervised learning and therefore require all the regions of interest to be segmented in all the images used for training. This thesis introduces new mathematical and optimization methods to mitigate those limitations.
    Comment: PhD thesis successfully defended on 1st July 2022. Examiners: Prof Sotirios Tsaftaris and Dr Wenjia Ba
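    The thesis' actual losses are not described in the abstract; purely to illustrate the partial-annotation problem it raises, the hedged sketch below restricts a voxel-wise cross-entropy to the classes that were actually annotated for each training image, so that images labeled for different subsets of ROIs can share one training loop. Tensor names and the masking scheme are assumptions.

```python
# Hypothetical sketch (not the thesis' method): voxel-wise cross-entropy restricted
# to the classes that were actually annotated for each training image.
import torch
import torch.nn.functional as F

def masked_ce(logits, target, annotated):
    """logits: (B, C, X, Y, Z); target: (B, X, Y, Z), with -1 marking unlabeled voxels;
    annotated: (B, C) boolean mask of the classes labeled for each image."""
    # Push down the logits of classes never annotated for a given image so they
    # cannot compete in the softmax (a crude stand-in for marginalizing them out).
    blocked = ~annotated[:, :, None, None, None]
    return F.cross_entropy(logits.masked_fill(blocked, -1e9), target, ignore_index=-1)

# Toy usage: 3 classes; image 0 is annotated for {0, 1}, image 1 for {0, 2}
logits = torch.randn(2, 3, 8, 8, 8, requires_grad=True)
target = torch.zeros(2, 8, 8, 8, dtype=torch.long)
target[0, :4], target[1, :4] = 1, 2
annotated = torch.tensor([[True, True, False], [True, False, True]])
loss = masked_ce(logits, target, annotated)
```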

    A CAD system for early diagnosis of autism using different imaging modalities.

    Get PDF
    The term “autism spectrum disorder” (ASD) refers to a collection of neuro-developmental disorders that affect linguistic, behavioral, and social skills. Autism has many symptoms, most prominently social impairment and repetitive behaviors. It is crucial to diagnose autism at an early stage for better assessment and investigation of this complex syndrome. There have been many efforts to diagnose ASD using different techniques, such as imaging modalities, genetic techniques, and behavior reports. Imaging modalities have been extensively exploited for ASD diagnosis, and one of the most successful is magnetic resonance imaging (MRI), which has shown particular promise for the early diagnosis of ASD-related abnormalities. Since their inception in the 1980s, MRI modalities have emerged as powerful means of non-invasive clinical diagnostics for various diseases and abnormalities, and MRI soon became one of the most promising non-invasive modalities for the visualization and diagnosis of ASD-related abnormalities. Along with its main advantages of no exposure to radiation, high contrast, and high spatial resolution, recent advances in MRI modalities have notably increased diagnostic certainty. Multiple MRI modalities, such as structural MRI (sMRI), which examines anatomical changes, and functional MRI (fMRI), which examines brain activity by monitoring blood-flow changes, have been employed to investigate facets of ASD in order to better understand this complex syndrome. This work aims at developing a new computer-aided diagnostic (CAD) system for autism diagnosis using different imaging modalities. It mainly relies on structural magnetic resonance images to extract notable shape features from parts of the brain that previous neuropathological studies have shown to correlate with ASD. Shape features from both the cerebral cortex (Cx) and cerebral white matter (CWM) are extracted. Fusion of features from these two structures is conducted based on recent findings suggesting that Cx changes in autism are related to CWM abnormalities; fusing features from more than one structure also increases the robustness of the CAD system. Moreover, fMRI experiments are conducted and analyzed to find task-related areas of activation in the brains of autistic and typically developing individuals. All sMRI findings are fused with those of fMRI to better understand ASD in terms of both anatomy and functionality, and thus better classify the two groups. One aspect of the novelty of this CAD system is that sMRI and fMRI studies are both applied to subjects of different ages to diagnose ASD. In order to build such a CAD system, three main blocks are required. First, 3D brain segmentation is applied using a novel hybrid model that combines shape, intensity, and spatial information. Second, shape features from both Cx and CWM are extracted, and an fMRI reward experiment is conducted from which task-related areas of activation are identified. These features are extracted from local areas of the brain to provide an accurate analysis of ASD and to correlate it with specific anatomical areas. Third and last, all the extracted features are fused using a deep-fusion classification network to perform classification and obtain the diagnosis report.
    Fusing features from all modalities achieved a classification accuracy of 94.7%, which emphasizes the significance of combining structures and modalities for ASD diagnosis. To conclude, this work could pave the way toward a better understanding of the autism spectrum by identifying local areas that correlate with the disorder. The idea of personalized medicine is also emphasized: the proposed CAD system holds promise for resolving autism endophenotypes and helping clinicians deliver personalized treatment to individuals affected by this complex syndrome.
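    The abstract does not describe the deep-fusion classification network in detail; as a hedged sketch of the general idea, the snippet below processes assumed sMRI shape features and fMRI activation features in separate branches and concatenates them before a small classification head. Feature dimensions, layer sizes, and the two-class output are assumptions.

```python
# Hypothetical two-branch fusion classifier (dimensions and depth are assumptions).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim_smri=128, dim_fmri=64):   # assumed feature sizes
        super().__init__()
        self.smri_branch = nn.Sequential(nn.Linear(dim_smri, 64), nn.ReLU())
        self.fmri_branch = nn.Sequential(nn.Linear(dim_fmri, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(),
                                  nn.Linear(32, 2))   # ASD vs. typically developing

    def forward(self, shape_feats, activation_feats):
        # Concatenate the per-modality embeddings before classification
        fused = torch.cat([self.smri_branch(shape_feats),
                           self.fmri_branch(activation_feats)], dim=1)
        return self.head(fused)

logits = FusionClassifier()(torch.randn(8, 128), torch.randn(8, 64))  # 8 subjects
```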

    Fast and robust hybrid framework for infant brain classification from structural MRI : a case study for early diagnosis of autism.

    Get PDF
    The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step towards this goal is accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model, which learns the visual appearance of the brain texture, and a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images and adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results were evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: the Dice coefficient, the 95th-percentile Hausdorff distance, and the absolute volume difference. The proposed method ranked first in terms of both performance and speed.
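    For reference, the three MRBrainS13 metrics named above can be computed for binary masks roughly as follows; this is a simple sketch with assumed array names and conventions, not the challenge's official evaluation code.

```python
# Sketch of Dice, 95th-percentile Hausdorff distance, and absolute volume difference
# for boolean 3D masks `a` (prediction) and `b` (reference); names are assumptions.
import numpy as np
from scipy import ndimage

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def abs_volume_difference(a, b):
    # Reported in percent, relative to the reference volume
    return 100.0 * abs(float(a.sum()) - float(b.sum())) / float(b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance between two boolean masks."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    def surface_distances(src, dst):
        # Distance of every voxel to the nearest surface voxel of `dst`,
        # sampled at the surface voxels of `src`
        dt = ndimage.distance_transform_edt(~surface(dst), sampling=spacing)
        return dt[surface(src)]
    d = np.concatenate([surface_distances(a, b), surface_distances(b, a)])
    return np.percentile(d, 95)
```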

    The ENIGMA Stroke Recovery Working Group: Big data neuroimaging to study brain–behavior relationships after stroke

    Get PDF
    The goal of the Enhancing Neuroimaging Genetics through Meta‐Analysis (ENIGMA) Stroke Recovery working group is to understand brain–behavior relationships using well‐powered meta‐ and mega‐analytic approaches. ENIGMA Stroke Recovery has data from over 2,100 stroke patients collected across 39 research studies and 10 countries around the world, comprising the largest multisite retrospective stroke data collaboration to date. This article outlines the efforts taken by the ENIGMA Stroke Recovery working group to develop neuroinformatics protocols and methods to manage multisite stroke brain magnetic resonance imaging, behavioral, and demographic data. Specifically, the processes for scalable data intake and preprocessing, multisite data harmonization, and large‐scale stroke lesion analysis are described, and challenges unique to this type of big data collaboration in stroke research are discussed. Finally, future directions and limitations, as well as recommendations for improved data harmonization through prospective data collection and data management, are provided.
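    The article describes harmonization at the level of protocols and data management rather than code; purely as an illustration of what multisite harmonization of derived measures can look like, the sketch below applies a per-site location-scale adjustment toward the pooled statistics (the core idea behind ComBat-style harmonization, without its empirical Bayes shrinkage or covariate preservation). The pandas layout and column names are assumptions, not the ENIGMA pipeline.

```python
# Illustrative per-site location-scale harmonization of derived measures
# (not the ENIGMA pipeline; a real ComBat model also preserves covariates).
import pandas as pd

def harmonize(df, feature_cols, site_col="site"):
    """Rescale each site's features to the pooled mean and standard deviation.
    Assumes `df` has one row per subject and more than one subject per site."""
    out = df.copy()
    pooled_mean = df[feature_cols].mean()
    pooled_std = df[feature_cols].std()
    for _, idx in df.groupby(site_col).groups.items():
        site_data = df.loc[idx, feature_cols]
        z = (site_data - site_data.mean()) / site_data.std()  # site-wise z-scores
        out.loc[idx, feature_cols] = z * pooled_std + pooled_mean
    return out
```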

    Brain segmentation based on multi-atlas guided 3D fully convolutional network ensembles

    Full text link
    In this study, we proposed and validated a multi-atlas guided 3D fully convolutional network (FCN) ensemble model (M-FCN) for segmenting brain regions of interest (ROIs) from structural magnetic resonance images (MRIs). One major limitation of existing state-of-the-art 3D FCN segmentation models is that they often apply image patches of fixed size throughout training and testing, which may miss complex tissue appearance patterns of different brain ROIs. To address this limitation, we trained a 3D FCN model for each ROI using patches of adaptive size and embedded the outputs of the convolutional layers in the deconvolutional layers to further capture local and global context patterns. In addition, with the introduction of multi-atlas-based guidance in M-FCN, our segmentation is generated by combining image and label information, which makes it highly robust. To reduce over-fitting of the FCN model on the training data, we adopted an ensemble strategy in the learning procedure. Evaluation was performed on two brain MRI datasets, aimed respectively at segmenting 14 subcortical and ventricular structures and 54 brain ROIs. The segmentation results of the proposed method were compared with those of a state-of-the-art multi-atlas based segmentation method and an existing 3D FCN segmentation model. Our results suggest that the proposed method achieves superior segmentation performance.
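    The exact M-FCN architecture is not given in the abstract; the following is a minimal hedged sketch of the ingredient it highlights, a 3D fully convolutional encoder-decoder whose deconvolutional path concatenates the corresponding convolutional feature maps. Channel counts, depth, and the 14-class output are assumptions.

```python
# Minimal 3D FCN sketch with one skip connection (not the paper's M-FCN).
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    def __init__(self, in_ch=1, n_classes=14):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(16, 32, 3, stride=2, padding=1)
        self.enc2 = nn.Sequential(nn.Conv3d(32, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        # The decoder sees both the upsampled deep features and the encoder's shallow ones
        self.dec = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, n_classes, 1))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = self.up(e2)
        return self.dec(torch.cat([d, e1], dim=1))  # skip connection by concatenation

logits = Tiny3DFCN()(torch.randn(1, 1, 32, 32, 32))  # one 32^3 patch
```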

    Novel Approaches to the Representation and Analysis of 3D Segmented Anatomical Districts

    Get PDF
    Nowadays, image processing and 3D shape analysis are an integral part of clinical practice and have the potential to support clinicians with advanced analysis and visualization techniques. Both approaches provide visual and quantitative information to medical practitioners, albeit from different points of view. Indeed, shape analysis studies the morphology of anatomical structures, while image processing focuses more on the tissue or functional information carried by the pixel/voxel intensity levels. Despite the progress made by research in both fields, a junction between these two complementary worlds is missing. When working with 3D models and analyzing shape features, the information of the volume surrounding the structure is lost, since a segmentation process is needed to obtain the 3D shape model; however, the 3D nature of the anatomical structure is represented explicitly. With volume images, instead, the tissue information related to the imaged volume is the core of the analysis, while the shape and morphology of the structure are only implicitly represented. The aim of this thesis is the integration of these two approaches in order to increase the amount of information available to physicians, allowing a more accurate analysis of each patient. An augmented visualization tool able to provide information on both the anatomical structure shape and the surrounding volume through a hybrid representation could reduce the gap between the two approaches and provide a more complete anatomical rendering of the subject. To this end, given a segmented anatomical district, we propose a novel mapping of volumetric data onto the segmented surface. The grey levels of the image voxels are mapped through a volume-surface correspondence map, which defines a grey-level texture on the segmented surface. The resulting texture mapping is coherent with the local morphology of the segmented anatomical structure and provides an enhanced visual representation of the anatomical district. The integration of volume-based and surface-based information in a unique 3D representation also supports the identification and characterization of morphological landmarks and pathology evaluations. The main research contributions of the Ph.D. activities and thesis are:
    • the development of a novel integration algorithm that combines surface-based (segmented 3D anatomical structure meshes) and volume-based (MRI volumes) information; the integration supports different criteria for mapping the grey levels onto the segmented surface;
    • the development of methodological approaches for using the grey-level mapping together with morphological analysis, with the final goal of solving real clinical tasks, such as the identification of (patient-specific) ligament insertion sites on bones from segmented MR images, the characterization of the local morphology of bones and tissues, and the early diagnosis, classification, and monitoring of musculo-skeletal pathologies;
    • the analysis of segmentation procedures, with a focus on the tissue classification process, in order to reduce operator dependency and to overcome the absence of a true gold standard for the evaluation of automatic segmentations;
    • the evaluation and comparison of (unsupervised) segmentation methods, aimed at defining a novel segmentation method for low-field MR images and at the local correction/improvement of a given segmentation.
    The proposed method is simple but effectively integrates information derived from medical image analysis and 3D shape analysis. Moreover, the algorithm is general enough to be applied to different anatomical districts independently of the segmentation method, imaging technique (such as CT), or image resolution. The volume information can be easily integrated into different shape analysis applications, taking into consideration not only the morphology of the input shape but also the real context in which it is inserted, in order to solve clinical tasks. The results obtained by this combined analysis have been evaluated through statistical analysis.
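    The thesis' volume-surface correspondence map supports several mapping criteria; as a minimal hedged sketch of the basic idea, the snippet below transforms mesh vertices from world to voxel coordinates through the image affine and samples the grey level there with trilinear interpolation, yielding a per-vertex texture value. nibabel is assumed as the image reader, and sampling at the vertex position alone (rather than along the surface normal or in a neighbourhood) is a simplification.

```python
# Hedged sketch: sample MRI grey levels at the vertices of a segmented surface.
import numpy as np
import nibabel as nib
from scipy.ndimage import map_coordinates

def greylevel_texture(mri_path, vertices_world):
    """vertices_world: (N, 3) mesh vertex coordinates in scanner/world space."""
    img = nib.load(mri_path)
    data = img.get_fdata()
    # World -> voxel coordinates through the inverse of the image affine
    inv_affine = np.linalg.inv(img.affine)
    homog = np.c_[vertices_world, np.ones(len(vertices_world))]
    vox = (inv_affine @ homog.T)[:3]                 # (3, N) voxel coordinates
    # Trilinear interpolation of the grey level at each vertex
    return map_coordinates(data, vox, order=1, mode="nearest")
```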

    Segmentation of brain MRI structures with deep machine learning

    Get PDF
    Several studies on brain Magnetic Resonance Imaging (MRI) show relations between neuroanatomical abnormalities of brain structures and neurological disorders, such as Attention Deficit Hyperactivity Disorder (ADHD) and Alzheimer's disease. These abnormalities appear to be correlated with the size and shape of these structures, and there is an active field of research trying to find accurate methods for automatic MRI segmentation. In this project, we study the automatic segmentation of structures of the Basal Ganglia and propose a new methodology based on Stacked Sparse Autoencoders (SSAE). SSAE is a strategy that belongs to the family of Deep Machine Learning and consists of a supervised learning method built on a Feed-forward Neural Network pretrained in an unsupervised manner. Moreover, we present two approaches based on 2D and 3D features of the images. We compare the results obtained on the different regions of interest with those achieved by other machine learning techniques, such as Neural Networks and Support Vector Machines, and observe that in most cases SSAE outperforms those methods. We also show that the 3D features do not yield better results than the 2D ones, as might have been expected. Furthermore, SSAE provides state-of-the-art Dice coefficient results (left, right): Caudate (90.63 ± 1.4, 90.31 ± 1.7), Putamen (91.03 ± 1.4, 90.82 ± 1.4), Pallidus (85.11 ± 1.8, 83.47 ± 2.2), Accumbens (74.26 ± 4.4, 74.46 ± 4.6).
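    As a hedged illustration of the stacked sparse autoencoder strategy described above, the sketch below pretrains a single sparse autoencoder layer on unlabeled patch features (reconstruction loss plus an L1 sparsity penalty on the hidden code, a simple stand-in for the classical KL-divergence term) and then reuses its encoder inside a supervised classifier for fine-tuning. Layer sizes, the penalty weight, and the single-layer depth are assumptions; the project stacks several such layers and compares 2D and 3D feature sets.

```python
# Hedged SSAE sketch: unsupervised pretraining of one sparse autoencoder layer,
# then supervised fine-tuning of its encoder (sizes are assumptions).
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, dim_in=729, dim_hidden=400):  # e.g. a flattened 9x9x9 patch
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(dim_hidden, dim_in)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def pretrain(ae, loader, sparsity_weight=1e-3, epochs=10):
    """Unsupervised layer-wise pretraining: reconstruction + sparsity penalty."""
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in loader:                      # x: (batch, dim_in) unlabeled patches
            recon, code = ae(x)
            loss = nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()
            opt.zero_grad(); loss.backward(); opt.step()

def build_classifier(ae, dim_hidden=400, n_classes=2):
    """Supervised fine-tuning model: pretrained encoder plus a classification head."""
    return nn.Sequential(ae.encoder, nn.Linear(dim_hidden, n_classes))
```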