16 research outputs found

    Deep Ensembles for Semantic Segmentation on Road Detection

    Get PDF
    Postprint

    Infant’s MRI Brain Tissue Segmentation using Integrated CNN Feature Extractor and Random Forest

    Get PDF
    Segmenting soft tissue in infant brain MRI is considerably more difficult than in adult brain MRI: the infant brain has a very low signal-to-noise ratio between white matter (WM) and gray matter (GM), and, because the brain develops rapidly at this age, its overall shape and appearance vary significantly. Manual segmentation of anomalous tissues is time-consuming and tedious. Feature extraction in traditional machine learning algorithms depends on expert-crafted features, requires prior knowledge, and affects system sensitivity. Recently, deep-learning-based biomedical image segmentation has shown significant potential to become an important element of the clinical assessment process. Motivated by this, we introduce a methodology for analysing infant MRI images in order to segment their tissues appropriately. In this paper, we integrate a random forest (RF) classifier with a deep convolutional neural network (CNN) feature extractor to segment infant MRI scans from the iSeg-2017 dataset. We segment infant brain MRI images into white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) tissues. The obtained results show that the recommended integrated CNN-RF method outperforms comparable approaches, achieving a superior Dice similarity coefficient (DSC), modified Hausdorff distance (MHD) and average surface distance (ASD) for each segmented tissue of the infant brain MRI.
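    The abstract above reports segmentation quality with the Dice similarity coefficient (DSC). As a minimal illustration of that metric (the toy voxel sets below are invented for the example, not taken from the paper), DSC is twice the overlap of prediction and ground truth divided by their combined size:

    ```python
    def dice_coefficient(pred, truth):
        """Dice similarity coefficient (DSC) between two binary label masks,
        given here as sets of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
        pred, truth = set(pred), set(truth)
        if not pred and not truth:
            return 1.0  # both masks empty: perfect agreement by convention
        return 2 * len(pred & truth) / (len(pred) + len(truth))

    # Toy example: two overlapping white-matter "segmentations" on a 1-D voxel line.
    wm_pred = {1, 2, 3, 4}   # voxels labelled WM by the model
    wm_truth = {2, 3, 4, 5}  # voxels labelled WM by an expert
    print(dice_coefficient(wm_pred, wm_truth))  # → 0.75
    ```

    A DSC of 1.0 means perfect overlap; the paper reports per-tissue DSC (WM, GM, CSF) alongside the surface-distance metrics MHD and ASD.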

    MRI white matter lesion segmentation using an ensemble of neural networks and overcomplete patch-based voting

    Full text link
    [EN] Accurate quantification of white matter hyperintensities (WMH) from Magnetic Resonance Imaging (MRI) is a valuable tool for the analysis of normal brain ageing or neurodegeneration. Reliable automatic extraction of WMH lesions is challenging due to their heterogeneous spatial occurrence, their small size and their diffuse nature. In this paper, we present an automatic method to segment these lesions based on an ensemble of overcomplete patch-based neural networks. The proposed method successfully provides accurate and regular segmentations due to its overcomplete nature while minimizing the segmentation error by using a boosted ensemble of neural networks. The proposed method compared favourably to state-of-the-art techniques on two different neurodegenerative datasets. (C) 2018 Elsevier Ltd. All rights reserved. This research has been done thanks to the Australian distinguished visiting professor grant from the CSIRO (Commonwealth Scientific and Industrial Research Organisation) and the Spanish "Programa de apoyo a la investigación y desarrollo (PAID-00-15)" of the Universidad Politécnica de Valencia. This research was partially supported by the Spanish grant TIN2013-43457-R from the Ministerio de Economía y Competitividad. This study has been carried out also with support from the French State, managed by the French National Research Agency in the frame of the Investments for the Future Program IdEx Bordeaux (ANR-10-IDEX-03-02, HL-MRI Project), Cluster of excellence CPU and TRAIL (HR-DTI ANR-10-LABX-57) and the CNRS multidisciplinary project Défi Imag'In. Some of the data used in this work was collected by the AIBL study group.
    Funding for the AIBL study is provided by the CSIRO Flagship Collaboration Fund and the Science and Industry Endowment Fund (SIEF) in partnership with Edith Cowan University (ECU), Mental Health Research Institute (MHRI), Alzheimer's Australia (AA), National Ageing Research Institute (NARI), Austin Health, Macquarie University, CogState Ltd, Hollywood Private Hospital, and Sir Charles Gairdner Hospital. Manjón Herrera, J. V.; Coupé, P.; Raniga, P.; Xia, Y.; Desmond, P.; Fripp, J.; Salvado, O. (2018). MRI white matter lesion segmentation using an ensemble of neural networks and overcomplete patch-based voting. Computerized Medical Imaging and Graphics, 69:43-51. https://doi.org/10.1016/j.compmedimag.2018.05.001
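    The "overcomplete" idea in this abstract is that patches are extracted with a stride smaller than the patch size, so every voxel is predicted several times and the overlapping votes are averaged. A minimal 1-D sketch of that voting scheme (the threshold "network" below is a stand-in invented for illustration, not the paper's ensemble of trained CNNs):

    ```python
    import numpy as np

    def overcomplete_vote(image, predict_patch, patch=4, stride=2):
        """Average overlapping patch-wise predictions (1-D sketch).
        A stride smaller than the patch size makes the decomposition
        overcomplete: each voxel receives several votes, and averaging
        them regularises the final segmentation."""
        votes = np.zeros_like(image, dtype=float)
        counts = np.zeros_like(image, dtype=float)
        for start in range(0, len(image) - patch + 1, stride):
            sl = slice(start, start + patch)
            votes[sl] += predict_patch(image[sl])  # accumulate this patch's vote
            counts[sl] += 1.0                      # track how often each voxel voted
        return votes / np.maximum(counts, 1.0)     # per-voxel average probability

    # Stand-in "network": thresholds intensities; a real ensemble would
    # average the outputs of several boosted, trained networks here.
    image = np.array([0.1, 0.9, 0.8, 0.2, 0.95, 0.1, 0.7, 0.3])
    prob = overcomplete_vote(image, lambda p: (p > 0.5).astype(float))
    lesion_mask = prob >= 0.5  # final binary WMH segmentation
    ```

    With `patch=4, stride=2`, interior voxels are covered by two patches, so disagreements between overlapping predictions are smoothed out rather than producing irregular lesion boundaries.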

    HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation

    Full text link
    Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. Particularly, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performances in natural image classification tasks. We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path, but also between those across different paths. This contrasts with the existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. Therefore, the proposed network has total freedom to learn more complex combinations between the modalities, within and in-between all the levels of abstraction, which significantly increases the learning representation. We report extensive evaluations over two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing on 6-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature re-use, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available at https://www.github.com/josedolz/HyperDenseNet. Comment: Paper accepted at IEEE TMI in October 2018. The last version of this paper updates the reference to the IEEE TMI paper which compares the submissions to the iSEG 2017 MICCAI Challenge.
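    The distinctive part of HyperDenseNet is the wiring, not the layers themselves: each layer in each modality path receives the concatenation of all earlier feature maps from both paths. A minimal sketch of that connectivity pattern (the toy 1x1 "layers" and 2-channel shapes below are invented for illustration; the actual network uses 3D convolutions):

    ```python
    import numpy as np

    def hyper_dense_forward(t1, t2, layers_a, layers_b):
        """Sketch of hyper-dense connectivity for two modality paths
        (e.g. T1 and T2 MRI). Each 'layer' is just a callable on a
        (channels, voxels) array; the point is only the wiring: every
        layer sees the concatenation of ALL earlier outputs from BOTH
        paths, not just the previous layer of its own path."""
        feats_a, feats_b = [t1], [t2]
        for layer_a, layer_b in zip(layers_a, layers_b):
            # Hyper-dense input: concatenate every feature map so far,
            # across both modality paths, along the channel axis.
            shared = np.concatenate(feats_a + feats_b, axis=0)
            feats_a.append(layer_a(shared))
            feats_b.append(layer_b(shared))
        return feats_a[-1], feats_b[-1]

    # Toy "layer": mixes the first and last 2 channels down to 2 channels,
    # so each layer's input keeps growing (4 channels, then 8, ...).
    mix = lambda x: x[:2] * 0.5 + x[-2:] * 0.5
    out_a, out_b = hyper_dense_forward(
        np.ones((2, 3)), np.zeros((2, 3)), [mix, mix], [mix, mix])
    ```

    Because nothing is ever discarded from the concatenated input, later layers can re-use early low-level features from either modality directly, which is the feature re-use behaviour the paper's analysis examines.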