
    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although their diagnostic performance is limited. To make the review accessible to a broad audience, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. The limitations of previously used methods call for better and smarter approaches to cancer diagnosis, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers opting to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch knowledge of state-of-the-art achievements.
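The evaluation criteria listed in the abstract above can all be derived from confusion-matrix counts. The following is a minimal NumPy sketch of those definitions; the function name and return format are illustrative, not taken from the paper:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute common diagnostic metrics from two binary label arrays.
    Assumes both classes are present, so no denominator is zero."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)       # true positives
    tn = np.sum(~y_true & ~y_pred)     # true negatives
    fp = np.sum(~y_true & y_pred)      # false positives
    fn = np.sum(y_true & ~y_pred)      # false negatives
    sensitivity = tp / (tp + fn)       # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)   # equals F1 for binary labels
    jaccard = tp / (tp + fp + fn)        # intersection over union
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)
```

The same formulas apply per-pixel to segmentation masks, which is why the Dice coefficient and Jaccard index recur in the segmentation papers below.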

    MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation

    Segmentation is essential for medical image analysis to identify and localize diseases, monitor morphological changes, and extract discriminative features for further diagnosis. Skin cancer is one of the most common types of cancer globally, and its early diagnosis is pivotal for the complete elimination of malignant tumors from the body. This research develops an Artificial Intelligence (AI) framework for supervised skin lesion segmentation employing the deep learning approach. The proposed framework, called MFSNet (Multi-Focus Segmentation Network), uses differently scaled feature maps to compute the final segmentation mask from raw input RGB images of skin lesions. Initially, the images are preprocessed to remove unwanted artifacts and noise. MFSNet employs the Res2Net backbone, a recently proposed convolutional neural network (CNN), to obtain deep features used in a Parallel Partial Decoder (PPD) module that produces a global map of the segmentation mask. In different stages of the network, convolution features and multi-scale maps are used in two boundary attention (BA) modules and two reverse attention (RA) modules to generate the final segmentation output. MFSNet, when evaluated on three publicly available datasets (PH2, ISIC 2017, and HAM10000), outperforms state-of-the-art methods, justifying the reliability of the framework. The relevant codes for the proposed approach are accessible at https://github.com/Rohit-Kundu/MFSNe
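The reverse attention (RA) idea mentioned above can be sketched in a few lines: an inverted coarse prediction is used to down-weight pixels the network is already confident about, so refinement concentrates on ambiguous boundary regions. This is a schematic NumPy illustration of the mechanism, not the authors' implementation (which lives at the linked repository):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_logits):
    """Schematic reverse attention.

    features:      (C, H, W) feature maps from a decoder stage.
    coarse_logits: (H, W) segmentation logits from a deeper stage.

    The attention map 1 - sigmoid(logits) is large where the coarse
    prediction is uncertain or background, steering the refinement
    toward lesion boundaries.
    """
    attn = 1.0 - sigmoid(coarse_logits)
    return features * attn[None, :, :]   # broadcast over channels
```

In MFSNet the reweighted features then pass through further convolutions and a residual connection back to the coarse map; the sketch only shows the attention step itself.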

    FCP-Net: A Feature-Compression-Pyramid Network Guided by Game-Theoretic Interactions for Medical Image Segmentation

    Medical image segmentation is a crucial step in the diagnosis and analysis of diseases for clinical applications. Deep neural network methods such as DeepLabv3+ have been successfully applied to medical image segmentation, but multi-level features are seldom integrated seamlessly into different attention mechanisms, and few studies have explored the interactions between medical image segmentation and classification tasks. Herein, we propose a feature-compression-pyramid network (FCP-Net) guided by game-theoretic interactions with a hybrid loss function (HLF) for medical image segmentation. The proposed approach consists of a segmentation branch, a classification branch, and an interaction branch. In the encoding stage, a new strategy is developed for the segmentation branch by applying three modules: embedded feature ensemble, dilated spatial mapping and channel attention (DSMCA), and branch layer fusion. These modules allow effective extraction of spatial information, efficient identification of spatial correlations among various features, and full integration of multi-receptive-field features from different branches. In the decoding stage, a DSMCA module and a multi-scale feature fusion module are used to establish multiple skip connections for enhancing fusion features. The classification and interaction branches are introduced to explore the potential benefits of the classification task to the segmentation task. We further explore the interactions of the segmentation and classification branches from a game-theoretic view, and design an HLF accordingly. Based on this HLF, the segmentation, classification, and interaction branches can collaboratively learn from and teach each other throughout the training process, thus exploiting the conjoint information between the segmentation and classification tasks and improving generalization performance.
The proposed model has been evaluated on several datasets, including ISIC2017, ISIC2018, REFUGE, Kvasir-SEG, BUSI, and PH2, and the results prove its competitiveness compared with other state-of-the-art techniques.
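The basic shape of a hybrid segmentation/classification loss like the one described above can be sketched as a weighted sum of a per-pixel loss and an image-level loss. The paper's HLF additionally models game-theoretic interactions between the branches; the `alpha` trade-off weight below is a hypothetical simplification, not a parameter from the paper:

```python
import numpy as np

def cross_entropy(p, y, eps=1e-12):
    """Mean binary cross-entropy between predicted probabilities p
    and ground-truth labels y (both arrays of the same shape)."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    y = np.asarray(y, float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def hybrid_loss(seg_probs, seg_masks, cls_probs, cls_labels, alpha=0.5):
    """Illustrative hybrid loss: per-pixel segmentation term plus
    image-level classification term, blended by alpha in [0, 1].
    Minimizing it trains both branches jointly, so gradients from the
    classification task also shape the shared encoder."""
    seg_loss = cross_entropy(seg_probs, seg_masks)
    cls_loss = cross_entropy(cls_probs, cls_labels)
    return alpha * seg_loss + (1 - alpha) * cls_loss
```

With perfect predictions both terms vanish, and increasing `alpha` shifts the optimization pressure toward the segmentation branch.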

    Channel Attention Separable Convolution Network for Skin Lesion Segmentation

    Skin cancer is a frequently occurring cancer in the human population, and early diagnosis of malignant tumors in the body is very important. Lesion segmentation is crucial for monitoring the morphological changes of skin lesions and extracting features to localize and identify diseases, assisting doctors in early diagnosis. Manual segmentation of dermoscopic images is error-prone and time-consuming, so there is a pressing demand for precise and automated segmentation algorithms. Inspired by advanced mechanisms such as U-Net, DenseNet, separable convolution, channel attention, and Atrous Spatial Pyramid Pooling (ASPP), we propose a novel network called the Channel Attention Separable Convolution Network (CASCN) for skin lesion segmentation. The proposed CASCN is evaluated on the PH2 dataset with limited images. Without excessive pre-/post-processing of images, CASCN achieves state-of-the-art performance on the PH2 dataset with a Dice similarity coefficient of 0.9461 and an accuracy of 0.9645.
    Comment: Accepted by ICONIP 202
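The channel-attention mechanism CASCN builds on can be illustrated with a squeeze-and-excitation style sketch: global average pooling "squeezes" each channel to a scalar, a small bottleneck produces per-channel gates, and the gates rescale the feature maps. This is a minimal NumPy illustration; the weights `w1` and `w2` stand in for learned parameters and are hypothetical:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel attention sketch.

    features: (C, H, W) feature maps.
    w1:       (hidden, C) bottleneck weights (reduce).
    w2:       (C, hidden) bottleneck weights (expand).
    """
    squeeze = features.mean(axis=(1, 2))            # global avg pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)          # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates -> (C,)
    return features * scale[:, None, None]          # reweight channels
```

Combined with depthwise-separable convolutions, such gates let the network emphasize informative channels at little extra parameter cost, which matters on a small dataset like PH2.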

    DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis

    Traditional approaches to automatic diagnosis of skin lesions consisted of classifiers working on sets of hand-crafted features, some of which modeled lesion aspects of special importance to dermatologists. Recently, the broad adoption of convolutional neural networks (CNNs) in most computer vision tasks has brought about a great leap forward in terms of performance. Nevertheless, with this performance leap, CNN-based computer-aided diagnosis (CAD) systems have also brought a notable reduction in the useful insights provided by hand-crafted features. This paper presents DermaKNet, a CAD system based on CNNs that incorporates specific subsystems modeling properties of skin lesions of special interest to dermatologists, aiming to improve the interpretability of its diagnoses. Our results prove that the incorporation of these subsystems not only improves performance, but also enhances the diagnosis by providing more interpretable outputs.
    This work was supported in part by National Grants TEC2014-53390-P and TEC2014-61729-EXP of the Spanish Ministry of Economy and Competitiveness, and in part by NVIDIA Corporation with the donation of a TITAN X GPU.