
    CompNet: Complementary Segmentation Network for Brain MRI Extraction

    Brain extraction is a fundamental step for most brain imaging studies. In this paper, we investigate the problem of skull stripping and propose complementary segmentation networks (CompNets) to accurately extract the brain from T1-weighted MRI scans, for both normal and pathological brain images. The proposed networks are designed in the framework of encoder-decoder networks and have two pathways to learn features from both the brain tissue and its complementary part located outside of the brain. The complementary pathway extracts features in the non-brain region and leads to a robust solution for brain extraction from MRIs with pathologies, which do not exist in our training dataset. We demonstrate the effectiveness of our networks by evaluating them on the OASIS dataset, achieving state-of-the-art performance under a two-fold cross-validation setting. Moreover, the robustness of our networks is verified by testing on images with introduced pathologies and by showing their invariance to unseen brain pathologies. In addition, our complementary network design is general and can be extended to address other image segmentation problems with better generalization.
    Comment: 8 pages, accepted to MICCAI 201
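
    To make the two-pathway idea concrete, here is a minimal, hypothetical PyTorch sketch of a shared encoder with two decoders, one supervised on the brain mask and one on its complement. All layer sizes, module names and the consistency term are illustrative, not the authors' exact CompNet.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in typical encoder-decoder nets."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class DualPathSegNet(nn.Module):
    """Shared encoder with two decoder pathways: one for the brain mask,
    one for its complement (the non-brain region). Hypothetical sketch."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(base, base * 2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec_brain = conv_block(base * 2 + base, base)
        self.dec_comp = conv_block(base * 2 + base, base)
        self.head_brain = nn.Conv2d(base, 1, 1)
        self.head_comp = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        u = torch.cat([self.up(e2), e1], dim=1)  # skip connection
        brain = torch.sigmoid(self.head_brain(self.dec_brain(u)))
        comp = torch.sigmoid(self.head_comp(self.dec_comp(u)))
        return brain, comp

def complementary_loss(brain, comp, target):
    """Supervise the complementary pathway with (1 - target); the last
    term encourages brain + complement to cover the whole image
    (an illustrative consistency penalty, not the paper's exact loss)."""
    bce = nn.functional.binary_cross_entropy
    return (bce(brain, target)
            + bce(comp, 1.0 - target)
            + ((brain + comp) - 1.0).abs().mean())
```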

    GSplit LBI: Taming the Procedural Bias in Neuroimaging for Disease Prediction

    In voxel-based neuroimaging analysis, lesion features have been the main focus in disease prediction due to their interpretability with respect to the related diseases. However, we observe that there exists another type of feature, introduced during the preprocessing steps, which we call "procedural bias"; moreover, this bias can be leveraged to improve classification accuracy. Nevertheless, most existing models either under-fit by ignoring the procedural bias or lose interpretability by failing to differentiate it from lesion features. In this paper, a novel dual-task algorithm named GSplit LBI is proposed to resolve this problem. By introducing an augmented variable enforced to be structurally sparse via a variable-splitting term, the estimator for prediction and the estimator for selecting lesion features can be optimized separately while mutually monitoring each other in an iterative scheme. Experiments were conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The advantage of the proposed model is verified by the improved stability of the selected lesion features and better classification results.
    Comment: conditionally accepted by MICCAI, 201
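
    As a rough illustration of the variable-splitting idea, the sketch below implements a plain Split LBI iteration for a squared loss, where a dense estimator beta is coupled to a sparse estimator gamma through the splitting penalty. Hyperparameters and the loss are illustrative; the paper's dual-task GSplit LBI variant is not reproduced here.

```python
import numpy as np

def split_lbi(X, y, nu=1.0, kappa=10.0, alpha=1e-3, n_iter=5000):
    """Minimal sketch of variable-splitting Linearized Bregman Iteration
    for a squared loss: beta is the dense estimator (free to absorb
    non-sparse signal), gamma the sparse one, tied together by the
    penalty ||beta - gamma||^2 / (2 * nu)."""
    n, p = X.shape
    beta = np.zeros(p)   # dense predictive estimator
    gamma = np.zeros(p)  # sparse, interpretable estimator
    z = np.zeros(p)      # auxiliary Bregman variable driving gamma
    for _ in range(n_iter):
        resid = beta - gamma
        grad_beta = X.T @ (X @ beta - y) / n + resid / nu
        z = z + alpha * resid / nu                  # LBI step on sparse path
        beta = beta - kappa * alpha * grad_beta     # gradient step on beta
        gamma = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
    return beta, gamma
```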

    INSIDE: Steering Spatial Attention with Non-Imaging Information in CNNs

    We consider the problem of integrating non-imaging information into segmentation networks to improve performance. Conditioning layers such as FiLM provide the means to selectively amplify or suppress the contribution of different feature maps in a linear fashion. However, spatial dependency is difficult to learn within a convolutional paradigm. In this paper, we propose a mechanism to allow for spatial localisation conditioned on non-imaging information, using a feature-wise attention mechanism comprising a differentiable parametrised function (e.g. Gaussian), prior to applying the feature-wise modulation. We name our method INstance modulation with SpatIal DEpendency (INSIDE). The conditioning information might comprise any factors that relate to spatial or spatio-temporal information, such as lesion location, size, and cardiac cycle phase. Our method can be trained end-to-end and does not require additional supervision. We evaluate the method on two datasets: a new CLEVR-Seg dataset where we segment objects based on location, and the ACDC dataset conditioned on cardiac phase and slice location within the volume. Code and the CLEVR-Seg dataset are available at https://github.com/jacenkow/inside.
    Comment: accepted at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 202
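
    A hedged sketch of the mechanism described above: conditioning information parametrises a differentiable Gaussian attention profile along one spatial axis, applied before feature-wise (FiLM-style) modulation. Module and parameter names are hypothetical; see the authors' repository above for the real implementation.

```python
import torch
import torch.nn as nn

class GaussianSpatialFiLM(nn.Module):
    """Sketch in the spirit of INSIDE: a linear layer maps non-imaging
    information to (a) centre/width of a 1D Gaussian attention profile
    along the height axis, and (b) per-channel FiLM scale and shift."""
    def __init__(self, n_channels, cond_dim):
        super().__init__()
        self.attn_params = nn.Linear(cond_dim, 2)                # mu, log sigma
        self.film_params = nn.Linear(cond_dim, 2 * n_channels)  # gamma, beta

    def forward(self, x, cond):
        b, c, h, w = x.shape
        mu, log_sigma = self.attn_params(cond).chunk(2, dim=1)
        pos = torch.linspace(0, 1, h, device=x.device).view(1, h)
        attn = torch.exp(-0.5 * ((pos - torch.sigmoid(mu)) /
                                 log_sigma.exp().clamp(min=1e-3)) ** 2)
        x = x * attn.view(b, 1, h, 1)          # spatial gating before FiLM
        gamma, beta = self.film_params(cond).chunk(2, dim=1)
        # (1 + gamma) scaling is a common FiLM variant, used here for stability
        return x * (1 + gamma.view(b, c, 1, 1)) + beta.view(b, c, 1, 1)
```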

    Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE

    Probabilistic modelling has been an essential tool in medical image analysis, especially for analyzing brain Magnetic Resonance Images (MRI). Recent deep learning techniques for estimating high-dimensional distributions, in particular Variational Autoencoders (VAEs), have opened up new avenues for probabilistic modelling. Modelling of volumetric data has remained a challenge, however, because constraints on available computation and training data make it difficult to effectively leverage VAEs, which are well developed for 2D images. We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices. We do so by estimating the sample mean and covariance in the latent space of the 2D model over the slice direction. This combined model lets us sample new coherent stacks of latent variables to decode into slices of a volume. We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy. We demonstrate that our proposed model is competitive in generating high-quality volumes at high resolutions according to both traditional metrics and our proposed evaluation.
    Comment: accepted for publication at MICCAI 2020. Code available at https://github.com/voanna/slices-to-3d-brain-vae
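
    The slice-correlation model can be sketched in a few lines: encode every slice of every training volume with the 2D VAE, fit a Gaussian over the flattened per-volume latent stacks, then sample new stacks and decode them slice by slice. Shapes and function names below are illustrative, not the released code.

```python
import numpy as np

def fit_latent_gaussian(latent_stacks):
    """latent_stacks: (n_volumes, n_slices, latent_dim) array of the 2D
    VAE's latent codes. Fits one Gaussian over the flattened per-volume
    stacks, capturing correlations across the slice direction."""
    n, s, d = latent_stacks.shape
    flat = latent_stacks.reshape(n, s * d)
    return flat.mean(axis=0), np.cov(flat, rowvar=False)

def sample_volume(decoder, mean, cov, n_slices, latent_dim, rng=None):
    """Draws one coherent stack of slice latents and decodes it.
    `decoder` is assumed to map a latent vector to a 2D slice."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.multivariate_normal(mean, cov).reshape(n_slices, latent_dim)
    return np.stack([decoder(z_i) for z_i in z])
```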

    Annotating Medical Image Data


    PyElph - a software tool for gel images analysis and phylogenetics

    Background: This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns resulting from experiments with molecular genetic markers and also generates phylogenetic trees, computed by five clustering methods, from the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices, as other commercial programs with similar functionality do.

    Results: The PyElph software tool is entirely implemented in Python, a very popular programming language in the bioinformatics community. It provides a very friendly Graphical User Interface, designed around six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weight computation based on a molecular weight marker, band matching and, finally, the computation and visualization of phylogenetic trees. A strong point of the software is its visualization component for the processed data: the Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism), RAPD (Random Amplification of Polymorphic DNA) and STR (Short Tandem Repeat). The similarity between the DNA sequences is computed and used to generate phylogenetic trees, which are very useful for population genetics studies and taxonomic classification.

    Conclusions: PyElph decreases the effort and time spent processing data from gel images by providing an automatic, step-by-step gel image analysis system with a friendly Graphical User Interface. The proposed free software tool is suitable for researchers and students who do not have access to expensive commercial software and image acquisition devices.
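
    As an illustration of the molecular-weight step, the sketch below fits log10(weight) against migration distance on the marker lane and evaluates the fit at each detected band. This is the standard gel-analysis technique; PyElph offers several computation methods, so treat this as one plausible variant with made-up numbers.

```python
import numpy as np

def estimate_molecular_weights(marker_migrations, marker_weights, band_migrations):
    """Fit log10(MW) as a linear function of migration distance using the
    marker lane, then evaluate the fit at each detected band position."""
    slope, intercept = np.polyfit(marker_migrations, np.log10(marker_weights), 1)
    return 10 ** (slope * np.asarray(band_migrations) + intercept)

# Hypothetical example: a DNA ladder with known sizes (bp) and measured
# migration distances (pixels), plus two unknown bands to estimate.
weights = estimate_molecular_weights(
    marker_migrations=[50, 80, 120, 170],
    marker_weights=[1000, 500, 250, 100],
    band_migrations=[95, 140],
)
```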

    3D deep convolutional neural network-based ventilated lung segmentation using multi-nuclear hyperpolarized gas MRI

    Hyperpolarized gas MRI enables visualization of regional lung ventilation with high spatial resolution. Segmentation of the ventilated lung is required to calculate clinically relevant biomarkers. Recent research in deep learning (DL) has shown promising results for numerous segmentation problems. In this work, we evaluate a 3D V-Net to segment ventilated lung regions on hyperpolarized gas MRI scans. The dataset consists of 743 helium-3 (3He) or xenon-129 (129Xe) volumetric scans and corresponding expert segmentations from 326 healthy subjects and patients with a wide range of pathologies. We evaluated segmentation performance for several experimental DL methods via overlap, distance and error metrics and compared them to conventional segmentation methods, namely spatial fuzzy c-means (SFCM) and K-means clustering. We observed that training on combined 3He and 129Xe MRI scans outperformed the other DL methods, achieving a mean ± SD Dice of 0.958 ± 0.022, an average boundary Hausdorff distance of 2.22 ± 2.16 mm, a Hausdorff 95th percentile of 8.53 ± 12.98 mm and a relative error of 0.087 ± 0.049. Moreover, no difference in performance was observed between 129Xe and 3He scans in the testing set. Combined training on 129Xe and 3He yielded statistically significant improvements over the conventional methods (p < 0.0001). The DL approach evaluated here provides accurate, robust and rapid segmentations of ventilated lung regions, successfully excludes non-lung regions such as the airways and noise artifacts, and is expected to eliminate the need for, or significantly reduce, subsequent time-consuming manual editing.
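
    For reference, the headline overlap metric reported above (Dice, 0.958 ± 0.022 for the combined-training model) is computed for binary masks as in this small sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```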

    White Matter, Gray Matter and Cerebrospinal Fluid Segmentation from Brain 3D MRI Using B-UNET

    The accurate segmentation of brain tissues in Magnetic Resonance (MR) images is an important step for the detection and treatment planning of brain diseases. Among brain tissues, Gray Matter, White Matter and Cerebrospinal Fluid are commonly segmented for Alzheimer's diagnosis purposes, and different algorithms for segmenting these tissues in MR image scans have been proposed over the years. Nowadays, with the trend towards deep learning, many methods are trained to learn important features and extract information from the data, leading to very promising segmentation results. In this work, we propose an effective approach to segment these three tissues in 3D brain MR images based on B-UNET. The method applies the bitplane method in each convolution of the UNET model. We evaluated the proposed method on two public databases with very promising results.
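
    The bitplane representation mentioned above decomposes each 8-bit intensity image into eight binary planes, as in the minimal sketch below; how the planes are integrated with the UNET convolutions is specific to the paper and not reproduced here.

```python
import numpy as np

def bitplanes(image_8bit):
    """Decompose an 8-bit grayscale image into its eight binary bitplanes,
    returned as a (8, H, W) array from least to most significant bit."""
    img = image_8bit.astype(np.uint8)
    return np.stack([(img >> k) & 1 for k in range(8)], axis=0)
```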

    Multidataset Incremental Training for Optic Disc Segmentation

    When convolutional neural networks are applied to image segmentation, results depend greatly on the datasets used to train the networks. Cloud providers support multi-GPU and TPU virtual machines, making the idea of cloud-based segmentation-as-a-service attractive. In this paper we study the problem of building a segmentation service, where images would come from different acquisition instruments, by training a generalized U-Net with images from a single dataset or from several datasets. We also study the possibility of training with a single instrument and performing quick retrains when more data is available. As our example we perform segmentation of the Optic Disc in fundus images, which is useful for glaucoma diagnosis. We use two publicly available datasets (RIM-ONE V3, DRISHTI) for individual, mixed or incremental training. We show that multidataset or incremental training can produce results similar to those published by researchers who use the same dataset for both training and validation.
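
    A hypothetical sketch of the incremental regime studied above: train on the first dataset, then continue training ("quick retrain") on each additional dataset as it becomes available. The optimizer, loss and loader names are illustrative, not the paper's exact protocol.

```python
import torch

def incremental_train(model, loaders, epochs_per_stage=10, lr=1e-4):
    """Train sequentially on a list of DataLoaders, one per dataset,
    e.g. [rim_one_v3_loader, drishti_loader]. The model keeps its
    weights between stages, so later datasets fine-tune earlier ones."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for loader in loaders:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs_per_stage):
            for images, masks in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), masks)  # model outputs logits
                loss.backward()
                opt.step()
    return model
```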