
    Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks

    A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images containing brain tumors into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade decomposes the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step, and the bounding box of the result is used for tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost segmentation performance. Experiments with the BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050 and 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739 and 0.7748, respectively. Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 201
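    A minimal sketch of the cascaded inference described in this abstract, in Python/NumPy. The three binary segmentation callables (seg_whole, seg_core, seg_enhancing) and the bounding-box margin are hypothetical placeholders standing in for the trained networks, not the authors' code:

```python
import numpy as np

def bounding_box(mask, margin=5):
    """Per-axis slices around the foreground voxels, expanded by a margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

def cascaded_segmentation(volume, seg_whole, seg_core, seg_enhancing):
    """volume: (C, D, H, W) multi-modal MR; each seg_* maps a (cropped) volume
    to a binary mask. Assumes each stage finds some foreground."""
    # Stage 1: whole tumor on the full volume.
    whole = seg_whole(volume)

    # Stage 2: tumor core, restricted to the whole-tumor bounding box.
    box_w = bounding_box(whole)
    core = np.zeros_like(whole)
    core[box_w] = seg_core(volume[(slice(None),) + box_w])

    # Stage 3: enhancing tumor core, restricted to the tumor-core bounding box.
    box_c = bounding_box(core)
    enhancing = np.zeros_like(whole)
    enhancing[box_c] = seg_enhancing(volume[(slice(None),) + box_c])

    return whole, core, enhancing
```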

    Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy

    In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aid for real-time tissue characterization and can help to perform visual investigations aimed, for example, at discovering epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images still have a low number of informative pixels, which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) image patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain they come from, to transfer their quality to the initial LR images. This property can be particularly useful in all situations where LR/HR pairs are not available during training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment. Comment: Accepted for publication in the Medical Image Analysis journal
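    A hedged sketch of the cycle-consistency idea in PyTorch: the generator super-resolves an LR image, a fixed acquisition-like operator maps the result back to LR space, and the round trip is penalised alongside an adversarial term driven by unpaired HR images. The generator, discriminator, downsampling operator and loss weighting are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def downsample_like_acquisition(sr, scale=2):
    # Stand-in for the physically-inspired forward model; plain average
    # pooling is assumed here as the LR image-formation process.
    return F.avg_pool2d(sr, kernel_size=scale)

def generator_loss(generator, discriminator, lr_batch, scale=2, adv_weight=0.1):
    sr = generator(lr_batch)                         # LR -> super-resolved
    cycled = downsample_like_acquisition(sr, scale)  # SR -> back to LR space
    loss_cycle = F.l1_loss(cycled, lr_batch)         # cycle consistency

    # Adversarial term: unpaired HR images define the target appearance;
    # the discriminator scores how HR-like the super-resolved output looks.
    fake_logits = discriminator(sr)
    loss_adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

    return loss_cycle + adv_weight * loss_adv        # weighting is illustrative
```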

    Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations

    Deep learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.
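    The Generalized Dice overlap weights each class by the inverse square of its reference volume, so rare labels contribute as much as frequent ones. A minimal PyTorch sketch of a loss based on this definition follows; the (N, C, ...) tensor layout with one-hot targets and the epsilon smoothing are assumptions of this sketch:

```python
import torch

def generalised_dice_loss(probs, one_hot_target, eps=1e-6):
    """probs, one_hot_target: (N, C, ...) tensors; returns a scalar loss."""
    dims = (0,) + tuple(range(2, probs.dim()))       # sum over batch and space
    ref_volume = one_hot_target.sum(dims)            # per-class reference size
    weights = 1.0 / (ref_volume * ref_volume + eps)  # w_c = 1 / (sum_n r_cn)^2

    intersection = (probs * one_hot_target).sum(dims)
    union = (probs + one_hot_target).sum(dims)

    return 1.0 - 2.0 * (weights * intersection).sum() / ((weights * union).sum() + eps)
```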

    ECONet: Efficient Convolutional Online Likelihood Network for Scribble-based Interactive Segmentation

    Automatic segmentation of lung lesions associated with COVID-19 in CT images requires a large amount of annotated volumes. Annotations mandate expert knowledge and are time-intensive to obtain through fully manual segmentation methods. Additionally, lung lesions have large inter-patient variations, with some pathologies having a visual appearance similar to healthy lung tissue. This poses a challenge when applying existing semi-automatic interactive segmentation techniques for data labelling. To address these challenges, we propose an efficient convolutional neural network (CNN) that can be learned online while the annotator provides scribble-based interaction. To accelerate learning from only the samples labelled through user interactions, a patch-based approach is used for training the network. Moreover, we use a weighted cross-entropy loss to address the class imbalance that may result from user interactions. During online inference, the learned network is applied to the whole input volume using a fully convolutional approach. We compare our proposed method with the state of the art using synthetic scribbles and show that it outperforms existing methods on the task of annotating lung lesions associated with COVID-19, achieving a 16% higher Dice score while reducing execution time by 3× and requiring 9000 fewer scribble-based labelled voxels. Due to the online learning aspect, our approach adapts quickly to user input, resulting in high-quality segmentation labels. Source code for ECONet is available at: https://github.com/masadcv/ECONet-MONAILabel. Comment: Accepted at MIDL 202
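    A hedged sketch of the online, scribble-driven training step described above, in PyTorch: a small model is fitted only to patches centred on scribbled voxels, with cross-entropy weighted by inverse scribble frequency to counter class imbalance. The patch size, number of steps, and the `model` and `optimiser` objects are assumptions of this sketch, not ECONet's actual code:

```python
import torch
import torch.nn.functional as F

def extract_patches(volume, scribble_coords, size=9):
    """volume: (1, D, H, W); returns one cubic patch per scribbled voxel."""
    r = size // 2
    padded = F.pad(volume, (r,) * 6)                 # pad W, H and D
    return torch.stack([padded[:, z:z + size, y:y + size, x:x + size]
                        for z, y, x in scribble_coords])

def online_update(model, optimiser, volume, scribble_coords, labels, steps=50):
    patches = extract_patches(volume, scribble_coords)      # (P, 1, s, s, s)
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels).float().clamp(min=1.0)
    weights = counts.sum() / counts                         # rarer class -> larger weight
    for _ in range(steps):
        optimiser.zero_grad()
        logits = model(patches).view(patches.shape[0], -1)  # (P, num_classes) assumed
        loss = F.cross_entropy(logits, labels, weight=weights)
        loss.backward()
        optimiser.step()
    return model  # afterwards applied fully convolutionally to the whole volume
```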

    An Unsupervised Approach to Ultrasound Elastography with End-to-end Strain Regularisation

    Quasi-static ultrasound elastography (USE) is an imaging modality that consists of determining a measure of deformation (i.e. strain) of soft tissue in response to an applied mechanical force. The strain is generally determined by estimating the displacement between successive ultrasound frames acquired before and after applying manual compression. The computational efficiency and accuracy of the displacement prediction, also known as time-delay estimation, are key challenges for real-time USE applications. In this paper, we present a novel deep-learning method for efficient time-delay estimation between ultrasound radio-frequency (RF) data. The proposed method consists of a convolutional neural network (CNN) that predicts a displacement field between a pair of pre- and post-compression ultrasound RF frames. The network is trained in an unsupervised way, by optimizing a similarity metric between the reference and compressed image. We also introduce a new regularization term that preserves displacement continuity by directly optimizing the strain smoothness. We validated the performance of our method using both ultrasound simulation and in vivo data on healthy volunteers. We also compared the performance of our method with a state-of-the-art method called OVERWIND [17]. The average contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of our method in 30 simulation and 3 in vivo image pairs are 7.70 and 6.95, 7 and 0.31, respectively. Our results suggest that our approach can effectively predict accurate strain images. The unsupervised aspect of our approach represents great potential for the use of deep-learning applications in the analysis of clinical ultrasound data. Comment: Accepted at MICCAI 202
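    A hedged sketch of the unsupervised objective described above, in PyTorch: a network predicts a displacement field between pre- and post-compression RF frames, the post-compression frame is warped back and compared to the reference, and a strain-smoothness term regularises the gradient of the axial strain. The network, the L1 similarity choice and the weighting are assumptions of this sketch, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def warp(frame, disp):
    """frame: (N, 1, H, W); disp: (N, 2, H, W) in pixels, channels = (dx, dy)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(frame.device)  # (H, W, 2)
    new = base + disp.permute(0, 2, 3, 1)                          # (N, H, W, 2)
    new_x = 2.0 * new[..., 0] / (w - 1) - 1.0                      # normalise to [-1, 1]
    new_y = 2.0 * new[..., 1] / (h - 1) - 1.0
    grid = torch.stack((new_x, new_y), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

def elastography_loss(net, pre, post, lam=0.1):
    disp = net(torch.cat([pre, post], dim=1))          # (N, 2, H, W) displacement
    similarity = F.l1_loss(warp(post, disp), pre)      # data term

    axial = disp[:, 1:2]                               # axial displacement component
    strain = axial[:, :, 1:, :] - axial[:, :, :-1, :]  # finite-difference strain
    smoothness = (strain[:, :, 1:, :] - strain[:, :, :-1, :]).abs().mean()
    return similarity + lam * smoothness
```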