543 research outputs found

    3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes

    While deep convolutional neural networks (CNNs) have been successfully applied to 2D image analysis, applying them to 3D anisotropic volumes remains challenging, especially when the within-slice resolution is much higher than the between-slice resolution and the number of available 3D volumes is relatively small. On one hand, directly learning a CNN with 3D convolution kernels suffers from the lack of data and is likely to generalize poorly, while limited GPU memory constrains the model size and representational power. On the other hand, applying a 2D CNN with generalizable features to individual 2D slices ignores between-slice information, and coupling a 2D network with an LSTM to handle the between-slice information is suboptimal because LSTMs are difficult to train. To overcome these challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning. We evaluate the proposed 3D AH-Net on two medical image analysis tasks, namely lesion detection from Digital Breast Tomosynthesis volumes and liver and liver tumor segmentation from Computed Tomography volumes, and obtain state-of-the-art results.
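    The core transfer idea in the abstract — reusing 2D kernels on anisotropic 3D data — rests on a simple identity: a pretrained k×k 2D kernel, reshaped to 1×k×k, applied to a volume is exactly slice-wise 2D convolution. The sketch below (naive numpy loops, not the authors' AH-Net code) verifies that identity:

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 'valid' 2D cross-correlation (illustrative, not fast)."""
    H, W = img.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + h, x:x + w] * k)
    return out

def conv3d_valid(vol, k):
    """Naive 'valid' 3D cross-correlation."""
    D, H, W = vol.shape
    d, h, w = k.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(vol[z:z + d, y:y + h, x:x + w] * k)
    return out

rng = np.random.default_rng(0)
vol = rng.standard_normal((5, 8, 8))  # anisotropic: few slices, finer in-plane grid
k2d = rng.standard_normal((3, 3))     # a "pretrained" 2D kernel
k3d = k2d[None, :, :]                 # the same weights as a 1x3x3 anisotropic 3D kernel

# Applying the transferred 1x3x3 kernel to the volume equals 2D convolution per slice
slicewise = np.stack([conv2d_valid(vol[z], k2d) for z in range(vol.shape[0])])
assert np.allclose(conv3d_valid(vol, k3d), slicewise)
```

    Between-slice information then has to be added by separate layers (in AH-Net, learned on top of these transferred features); the identity above only covers the within-slice part.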

    A Novel Sep-Unet Architecture of Convolutional Neural Networks to Improve Dermoscopic Image Segmentation by Training Parameters Reduction

    Dermoscopic imaging is widely used in the diagnosis of skin lesions such as skin cancer, but noise and artifacts, including hair around the lesion, make automatic and reliable segmentation methods necessary. The diversity in the color and structure of skin lesions is a further challenge for automatic skin lesion segmentation. In this study, we use convolutional neural networks (CNNs) as an efficient method for dermoscopic image segmentation. The main goal of this research is to propose a novel deep neural network architecture for segmenting lesions in dermoscopic images, improved by building the convolutional layers on separable convolutions. These separable layers, through the specific operations they perform on their kernels, speed up the algorithm and reduce the number of training parameters. Additionally, we use a suitable preprocessing method before feeding the images into the neural network. The network is built from a suitable arrangement of convolutional layers, separable convolutional layers, and transposed convolutions in the downsampling and upsampling paths. The resulting algorithm, named Sep-Unet, segments the images with a 98% Dice coefficient.
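    The parameter reduction claimed for separable layers is easy to quantify. A standard convolution needs C_in·C_out·k² weights, while a depthwise-separable one needs C_in·k² (depthwise) plus C_in·C_out (1×1 pointwise). The channel counts below are illustrative, not Sep-Unet's actual configuration, which the abstract does not give:

```python
def standard_conv_params(c_in, c_out, k):
    # one k x k kernel per (input channel, output channel) pair
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # depthwise: one k x k kernel per input channel;
    # pointwise: a 1x1 convolution mixing channels
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 128, 3)   # 73728 weights
sep = separable_conv_params(64, 128, 3)  # 8768 weights
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

    For a typical 64→128-channel 3×3 layer this is roughly an 8× reduction, which is where the faster training and smaller parameter count come from.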

    Automated skin lesion segmentation using multi-scale feature extraction scheme and dual-attention mechanism

    Segmenting skin lesions from dermoscopic images is essential for diagnosing skin cancer, but automatic segmentation is complicated by the poor contrast between the background and the lesion, image artifacts, and unclear lesion boundaries. In this work, we present a deep learning model for the segmentation of skin lesions from dermoscopic images. To deal with the challenging characteristics of skin lesions, we design a multi-scale feature extraction module for extracting discriminative features. In addition, two attention mechanisms are developed to refine the post-upsampled features and the features extracted by the encoder. The model is evaluated on the ISIC2018 and ISBI2017 datasets, where it outperforms existing works and the top-ranked models of both competitions.
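    The abstract does not specify how its attention mechanisms are built, but a common minimal form of such a feature-refinement gate is spatial attention: collapse the channel dimension to a single map, squash it to (0, 1), and rescale the features with it. A generic numpy sketch (hypothetical weights `w`, `b`, not the paper's modules):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, w, b):
    """Gate a (C, H, W) feature map with a learned single-channel attention map.
    w: (C,) weights of a 1x1 convolution collapsing channels; b: scalar bias."""
    attn = sigmoid(np.tensordot(w, feat, axes=(0, 0)) + b)  # (H, W), values in (0, 1)
    return feat * attn[None, :, :]                          # rescale every channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 6, 6))
out = spatial_attention(feat, rng.standard_normal(4), 0.0)
assert out.shape == feat.shape
```

    Because the gate lies in (0, 1), refinement can only suppress features, never amplify them; richer variants (e.g. channel attention) follow the same pattern along a different axis.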

    Melanoma segmentation using deep learning with test-time augmentations and conditional random fields

    In a computer-aided diagnostic (CAD) system for skin lesion segmentation, variations in the shape and size of skin lesions make the segmentation task challenging. Lesion segmentation is an initial step in CAD schemes, as it leads to low error rates in quantifying the structure, boundary, and scale of the skin lesion. Subjective clinical assessment of the segmentation results produced by current state-of-the-art deep learning techniques does not reach the inter-observer agreement of expert dermatologists. This study proposes a novel deep learning-based, fully automated approach to skin lesion segmentation, including sophisticated pre- and post-processing. We use three deep learning models: UNet, deep residual U-Net (ResUNet), and improved ResUNet (ResUNet++). The preprocessing phase combines morphological filters with an inpainting algorithm to remove hair structures from the dermoscopic images. In the postprocessing stage, we use test-time augmentation (TTA) and conditional random fields (CRF) to improve segmentation accuracy. The proposed method was trained and evaluated on the ISIC-2016 and ISIC-2017 skin lesion datasets, achieving average Jaccard indices of 85.96% and 80.05% when trained on each dataset individually. When trained on the combined dataset (ISIC-2016 and ISIC-2017), it achieved average Jaccard indices of 80.73% and 90.02% on the ISIC-2017 and ISIC-2016 test sets, respectively. Owing to its scalability and robustness, the proposed framework can be used to design a fully automated computer-aided skin lesion diagnostic system.
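    Test-time augmentation, as used in the postprocessing here, means running the model on several transformed copies of the input, undoing each transform on the prediction, and averaging. The sketch below uses an identity stand-in for the network (`predict` is a placeholder, not the paper's model) and flips as the augmentations:

```python
import numpy as np

def predict(img):
    """Stand-in for a segmentation network's probability map (identity here,
    purely to keep the example self-contained)."""
    return img

def tta_predict(img):
    """Average predictions over flip augmentations, inverting each flip."""
    preds = [
        predict(img),                               # original
        np.fliplr(predict(np.fliplr(img))),         # horizontal flip, undone
        np.flipud(predict(np.flipud(img))),         # vertical flip, undone
    ]
    return np.mean(preds, axis=0)

img = np.arange(16.0).reshape(4, 4)
# With an identity model, undoing each flip recovers the input exactly,
# so the TTA average equals the plain prediction.
assert np.allclose(tta_predict(img), predict(img))
```

    With a real network the three predictions differ slightly, and averaging them smooths out orientation-dependent errors before the CRF refines the boundaries.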