
    Automatic cerebral hemisphere segmentation in rat MRI with lesions via attention-based convolutional neural networks

    We present MedicDeepLabv3+, a convolutional neural network that is the first completely automatic method to segment cerebral hemispheres in magnetic resonance (MR) volumes of rats with lesions. MedicDeepLabv3+ improves the state-of-the-art DeepLabv3+ with an advanced decoder, incorporating spatial attention layers and additional skip connections that, as we show in our experiments, lead to more precise segmentations. MedicDeepLabv3+ requires no MR image preprocessing, such as bias-field correction or registration to a template, produces segmentations in less than a second, and its GPU memory requirements can be adjusted based on the available resources. We optimized MedicDeepLabv3+ and six other state-of-the-art convolutional neural networks (DeepLabv3+, UNet, HighRes3DNet, V-Net, VoxResNet, Demon) on a heterogeneous training set comprising MR volumes from 11 cohorts acquired at different lesion stages. Then, we evaluated the trained models and two approaches specifically designed for rodent MRI skull stripping (RATS and RBET) on a large dataset of 655 MR rat brain volumes. In our experiments, MedicDeepLabv3+ outperformed the other methods, yielding average Dice coefficients of 0.952 and 0.944 in the brain and contralateral hemisphere regions. Additionally, we show that despite limiting the GPU memory and the training data, our MedicDeepLabv3+ still provided satisfactory segmentations. In conclusion, our method, publicly available at https://github.com/jmlipman/MedicDeepLabv3Plus, yielded excellent results in multiple scenarios, demonstrating its capability to reduce human workload in rat neuroimaging studies. Published in Neuroinformatics.
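
    The attention-augmented decoder described above can be illustrated with a small sketch. The block below is a generic 3D spatial attention layer of the kind the abstract refers to, written in PyTorch; the layer sizes and placement are assumptions for illustration, not the authors' MedicDeepLabv3+ implementation (which is available at the linked repository).

    # Minimal sketch of a 3D spatial attention block; an assumption for
    # illustration, not the authors' implementation.
    import torch
    import torch.nn as nn

    class SpatialAttention3D(nn.Module):
        """Computes a per-voxel attention map and rescales the decoder features."""
        def __init__(self, channels):
            super().__init__()
            # 1x1x1 convolution collapses channels into one attention logit per voxel
            self.attn = nn.Conv3d(channels, 1, kernel_size=1)

        def forward(self, x):
            weights = torch.sigmoid(self.attn(x))   # attention map in [0, 1]
            return x * weights                      # reweight features voxel-wise

    # Example: attend over decoder features combined with an encoder skip connection
    decoder_feats = torch.randn(1, 32, 16, 16, 16)  # (batch, channels, D, H, W)
    attended = SpatialAttention3D(32)(decoder_feats)
    print(attended.shape)  # torch.Size([1, 32, 16, 16, 16])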

    Simultaneous lesion and neuroanatomy segmentation in Multiple Sclerosis using deep neural networks

    Segmentation of both white matter lesions and deep grey matter structures is an important task in the quantification of magnetic resonance imaging in multiple sclerosis. Typically these tasks are performed separately: in this paper we present a single segmentation solution based on convolutional neural networks (CNNs) for providing fast, reliable segmentations of multimodal magnetic resonance images into lesion classes and normal-appearing grey- and white-matter structures. We show substantial, statistically significant improvements in both Dice coefficient and in lesion-wise specificity and sensitivity, compared to previous approaches, and agreement with individual human raters in the range of human inter-rater variability. The method is trained on data gathered from a single centre: nonetheless, it performs well on data from centres, scanners and field-strengths not represented in the training dataset. A retrospective study found that the classifier successfully identified lesions missed by the human raters. Lesion labels were provided by human raters, while weak labels for other brain structures (including CSF, cortical grey matter, cortical white matter, cerebellum, amygdala, hippocampus, subcortical GM structures and choroid plexus) were provided by Freesurfer 5.3. The segmentations of these structures compared well, not only with Freesurfer 5.3, but also with FSL FIRST and Freesurfer 6.0.
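
    Since the evaluation above relies on the Dice coefficient, a minimal sketch of that metric may help; the function and toy masks below are illustrative only, not the paper's evaluation code.

    # Minimal sketch of the Dice coefficient between binary segmentation masks;
    # variable names and data are illustrative only.
    import numpy as np

    def dice_coefficient(pred, ref):
        """Dice = 2|P ∩ R| / (|P| + |R|) for binary masks."""
        pred = pred.astype(bool)
        ref = ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        denom = pred.sum() + ref.sum()
        return 2.0 * intersection / denom if denom > 0 else 1.0

    # Toy example: two overlapping lesion masks
    pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
    ref  = np.zeros((4, 4), dtype=int); ref[1:4, 1:4]  = 1
    print(round(dice_coefficient(pred, ref), 3))  # 0.615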

    A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network

    In this paper, we present a fully automatic brain tumor segmentation and classification model using a deep convolutional neural network that includes a multiscale approach. One of the differences of our proposal with respect to previous works is that input images are processed in three spatial scales along different processing pathways. This mechanism is inspired by the inherent operation of the human visual system. The proposed neural model can analyze MRI images containing three types of tumors (meningioma, glioma, and pituitary tumor) over sagittal, coronal, and axial views, and does not need preprocessing of input images to remove the skull or vertebral column in advance. The performance of our method on a publicly available MRI image dataset of 3064 slices from 233 patients is compared with previously published classical machine learning and deep learning methods. In the comparison, our method obtained a remarkable tumor classification accuracy of 0.973, higher than the other approaches using the same database.
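
    The three-scale idea can be sketched as follows: the same slice is analysed at full, half and quarter resolution along parallel convolutional pathways whose features are fused for classification. The scales, layer sizes and pooling below are assumptions for illustration, not the authors' architecture.

    # Hedged sketch of parallel processing pathways at three spatial scales;
    # the configuration is illustrative, not the paper's exact model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScalePathways(nn.Module):
        def __init__(self, in_channels=1, features=16, n_classes=3):
            super().__init__()
            # One small convolutional pathway per spatial scale
            self.pathways = nn.ModuleList([
                nn.Sequential(nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU())
                for _ in range(3)
            ])
            self.classifier = nn.Linear(3 * features, n_classes)

        def forward(self, x):
            feats = []
            for scale, path in zip([1.0, 0.5, 0.25], self.pathways):
                xs = F.interpolate(x, scale_factor=scale) if scale != 1.0 else x
                feats.append(path(xs).mean(dim=(2, 3)))  # global average pooling
            return self.classifier(torch.cat(feats, dim=1))

    logits = MultiScalePathways()(torch.randn(2, 1, 128, 128))
    print(logits.shape)  # torch.Size([2, 3]) -- meningioma / glioma / pituitary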

    Convolutional neural networks for the segmentation of small rodent brain MRI

    Image segmentation is a common step in the analysis of preclinical brain MRI, often performed manually. This is a time-consuming procedure subject to inter- and intra-rater variability. A possible alternative is the use of automated, registration-based segmentation, which suffers from a bias owing to the limited capacity of registration to adapt to pathological conditions such as Traumatic Brain Injury (TBI). In this work, a novel method is developed for the segmentation of small rodent brain MRI based on Convolutional Neural Networks (CNNs). The experiments presented here show how CNNs provide a fast, robust and accurate alternative to both manual and registration-based methods. This is demonstrated by accurately segmenting three large datasets of MRI scans of healthy and Huntington disease model mice, as well as TBI rats. MU-Net and MU-Net-R, the CNNs presented here, achieve human-level accuracy while eliminating intra-rater variability, alleviating the biases of registration-based segmentation, and requiring an inference time of less than one second per scan. Using these segmentation masks, I designed a geometric construction to extract 39 parameters describing the position and orientation of the hippocampus, and later used them to classify epileptic vs. non-epileptic rats with a balanced accuracy of 0.80, five months after TBI. This clinically transferable geometric approach detects subjects at high risk of post-traumatic epilepsy, paving the way towards subject stratification for antiepileptogenesis studies.
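
    The reported balanced accuracy of 0.80 is the mean of per-class recall over the epileptic and non-epileptic groups. A minimal sketch of that metric, with made-up labels, is shown below.

    # Minimal sketch of balanced accuracy (mean per-class recall);
    # the labels are fabricated purely for illustration.
    import numpy as np

    def balanced_accuracy(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        recalls = []
        for cls in np.unique(y_true):
            mask = y_true == cls
            recalls.append((y_pred[mask] == cls).mean())  # recall for this class
        return float(np.mean(recalls))

    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = epileptic, 0 = non-epileptic
    y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
    print(round(balanced_accuracy(y_true, y_pred), 3))  # 0.708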

    Informative sample generation using class aware generative adversarial networks for classification of chest Xrays

    Training robust deep learning (DL) systems for disease detection from medical images is challenging due to the limited number of images covering different disease types and severities. The problem is especially acute where there is severe class imbalance. We propose an active learning (AL) framework to select the most informative samples for training our model using a Bayesian neural network. Informative samples are then used within a novel class aware generative adversarial network (CAGAN) to generate realistic chest X-ray images for data augmentation by transferring characteristics from one class label to another. Experiments show our proposed AL framework is able to achieve state-of-the-art performance by using about 35% of the full dataset, thus saving significant time and effort over conventional methods.
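
    One common way to realise the "most informative samples" step with a Bayesian neural network is to rank unlabelled images by predictive entropy under Monte Carlo dropout. The sketch below illustrates that idea with a toy classifier; it is an assumed selection criterion for illustration, not the authors' CAGAN pipeline.

    # Hedged sketch: rank an unlabelled pool by predictive entropy estimated
    # with Monte Carlo dropout; model and data are toy stand-ins.
    import torch
    import torch.nn as nn

    def predictive_entropy(model, x, n_samples=20):
        """Average softmax over stochastic forward passes, then compute entropy."""
        model.train()                      # keep dropout active at inference time
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
        mean_probs = probs.mean(dim=0)
        return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)

    # Toy classifier with dropout standing in for the Bayesian network
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
                          nn.Dropout(0.5), nn.Linear(128, 2))
    pool = torch.randn(100, 1, 64, 64)      # fake unlabelled chest X-ray pool
    uncertainty = predictive_entropy(model, pool)
    most_informative = uncertainty.topk(10).indices   # candidates for labelling
    print(most_informative.shape)  # torch.Size([10])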

    Brain Tumor Detection and Classification from MRI Images

    A brain tumor is detected and classified by biopsy, which is conducted after brain surgery. Advancements in technology and machine learning techniques could help radiologists diagnose tumors without any invasive measures. We utilized a deep learning-based approach to detect and classify tumors into meningioma, glioma, and pituitary tumors. We used a registration- and segmentation-based skull stripping mechanism to remove the skull from the MRI images, and the grab cut method to verify whether the skull-stripped MRI masks retained the features of the tumor for accurate classification. In this research, we proposed a transfer learning based approach in conjunction with discriminative learning rates to perform the classification of brain tumors. The dataset used contains 3064 T1 FLAIR MRI images. We achieved classification accuracies of 98.83%, 96.26%, and 95.18% for the training, validation, and test sets, and an F1 score of 0.96 on the T1 FLAIR MRI dataset.
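
    Discriminative learning rates are typically implemented by giving earlier layer groups of the pre-trained backbone smaller learning rates than the newly added classification head. The PyTorch sketch below shows the general pattern with an illustrative ResNet-18 backbone and made-up rates; it is not the paper's training recipe.

    # Hedged sketch of discriminative learning rates via optimizer parameter
    # groups; backbone choice and rates are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=None)               # pre-trained weights in practice
    backbone.fc = nn.Linear(backbone.fc.in_features, 3)    # meningioma / glioma / pituitary

    optimizer = torch.optim.Adam([
        {"params": backbone.layer1.parameters(), "lr": 1e-5},  # earliest features: smallest lr
        {"params": backbone.layer2.parameters(), "lr": 1e-5},
        {"params": backbone.layer3.parameters(), "lr": 1e-4},
        {"params": backbone.layer4.parameters(), "lr": 1e-4},
        {"params": backbone.fc.parameters(),     "lr": 1e-3},  # new head: largest lr
    ])
    print([g["lr"] for g in optimizer.param_groups])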

    Magnetic Resonance Image segmentation using Pulse Coupled Neural Networks

    The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this dissertation, three novel PCNN-based automatic segmentation algorithms were developed to segment Magnetic Resonance Imaging (MRI) data: (a) PCNN image 'signature' based single region cropping; (b) PCNN - Kittler-Illingworth minimum error thresholding; and (c) PCNN - Gaussian Mixture Model - Expectation Maximization (GMM-EM) based multiple material segmentation. Among other control tests, the proposed algorithms were tested on three T2-weighted acquisition configurations comprising a total of 42 rat brain volumes, 20 T1-weighted MR human brain volumes from Harvard's Internet Brain Segmentation Repository, and 5 human MR breast volumes. The results were compared against manually segmented gold standards, Brain Extraction Tool (BET) V2.1 results, published results and single threshold methods. The Jaccard similarity index was used for the numerical evaluation of the proposed algorithms. Our quantitative results demonstrate conclusively that PCNN-based multiple material segmentation strategies can approach a human eye's intensity delineation capability in grayscale image segmentation tasks.
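
    The Jaccard similarity index used for the numerical evaluation is the intersection-over-union of two binary masks. A minimal sketch with toy masks follows; it is illustrative only, not the dissertation's evaluation code.

    # Minimal sketch of the Jaccard similarity index: |A ∩ B| / |A ∪ B|
    # over binary segmentation masks; masks below are toy data.
    import numpy as np

    def jaccard_index(a, b):
        a, b = a.astype(bool), b.astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union > 0 else 1.0

    seg   = np.zeros((5, 5), dtype=int); seg[1:4, 1:4] = 1    # automatic segmentation
    truth = np.zeros((5, 5), dtype=int); truth[2:5, 2:5] = 1  # manual gold standard
    print(round(jaccard_index(seg, truth), 3))  # 0.286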

    Skull Stripping Based on the Segmentation Models

    Skull stripping is one of the initial procedures used to detect brain abnormalities. In an MRI image of the brain, this process involves distinguishing the tissue that makes up the brain from the tissue that does not. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping in brain magnetic resonance volumes has therefore become increasingly popular due to the requirement for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately for neuroimaging diagnostic systems, since neither residual non-brain tissue nor erroneously removed brain sections can be corrected in subsequent steps, resulting in errors that persist through further analysis. This paper proposes a system based on deep learning and image processing: an innovative method for converting a pre-trained model into another type of pre-trained model using pre-processing operations, with the CLAHE filter as a critical phase. The publicly available IBSR dataset was used for both training and testing. To evaluate the system's efficacy, experiments were performed on three-dimensional volumes, the three anatomical sections of MR images, and two-dimensional images, and the results were 99.9% accurate.
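
    The CLAHE step mentioned above as a critical phase can be sketched with OpenCV as follows; the clip limit, tile size and intensity normalisation are illustrative defaults, not the paper's settings.

    # Hedged sketch of CLAHE pre-processing for an MR slice using OpenCV;
    # parameters and the synthetic slice are assumptions for illustration.
    import cv2
    import numpy as np

    def clahe_preprocess(slice_2d, clip_limit=2.0, tile_grid=(8, 8)):
        """Normalise an MR slice to 8-bit and apply contrast-limited adaptive
        histogram equalisation before feeding it to the segmentation model."""
        norm = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
        return clahe.apply(norm)

    # Toy example on a synthetic slice standing in for IBSR data
    fake_slice = np.random.rand(256, 256).astype(np.float32)
    enhanced = clahe_preprocess(fake_slice)
    print(enhanced.dtype, enhanced.shape)  # uint8 (256, 256)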