
    Approximate Lesion Localization in Dermoscopy Images

    Background: Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, automated analysis of dermoscopy images has become an important research area. Border detection is often the first step in this analysis. Methods: In this article, we present an approximate lesion localization method that serves as a preprocessing step for detecting borders in dermoscopy images. In this method, the black frame around the image is first removed using an iterative algorithm. The approximate location of the lesion is then determined using an ensemble of thresholding algorithms. Results: The method is tested on a set of 428 dermoscopy images. The localization error is quantified by a metric that uses dermatologist-determined borders as the ground truth. Conclusion: The results demonstrate that the method presented here achieves both fast and accurate localization of lesions in dermoscopy images.
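    The localization pipeline described above (frame removal followed by an ensemble of thresholding algorithms) can be illustrated with a short sketch. The specific thresholding methods in the paper's ensemble are not named in the abstract, so Otsu, Yen, and ISODATA from scikit-image serve as stand-ins, and the voting parameter `min_votes` and the function `approximate_lesion_bbox` are hypothetical; this is a minimal illustration under those assumptions, not the authors' implementation.

```python
# Minimal sketch of ensemble-thresholding lesion localization (illustrative only).
# Assumes the black frame has already been removed; Otsu, Yen, and ISODATA are
# stand-ins for whatever thresholding algorithms the paper's ensemble uses.
import numpy as np
from skimage import io, color, filters
from skimage.measure import label, regionprops

def approximate_lesion_bbox(image_path, min_votes=2):
    rgb = io.imread(image_path)[..., :3]      # drop alpha channel if present
    gray = color.rgb2gray(rgb)

    # Each thresholding algorithm votes on which pixels belong to the lesion
    # (lesions are typically darker than the surrounding skin).
    thresholds = [filters.threshold_otsu(gray),
                  filters.threshold_yen(gray),
                  filters.threshold_isodata(gray)]
    votes = sum((gray < t).astype(np.uint8) for t in thresholds)

    # Keep pixels selected by at least `min_votes` methods, then take the
    # bounding box of the largest connected component as the approximate location.
    mask = votes >= min_votes
    regions = regionprops(label(mask))
    if not regions:
        return None
    largest = max(regions, key=lambda r: r.area)
    return largest.bbox  # (min_row, min_col, max_row, max_col)
```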

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, including the steps of cancer diagnosis and the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. Doctors use them regularly for cancer diagnosis, although they are not considered efficient enough to deliver the best performance. For a broad audience, the basic evaluation criteria are also discussed, including the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. This study presents the basic framework of how such machine learning works on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles deep learning models successfully applied to different types of cancer. Given the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
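    Several of the evaluation criteria listed above (sensitivity, specificity, precision, accuracy, F1 score, Dice coefficient, Jaccard index) follow directly from the binary confusion matrix. The snippet below is a generic illustration of those formulas, not the Python code provided by the review itself; the helper name `binary_metrics` is hypothetical, and division-by-zero guards are omitted for brevity.

```python
# Illustrative computation of common evaluation criteria from binary labels or
# binary segmentation masks; assumes at least one positive and one negative example.
import numpy as np

def binary_metrics(y_true, y_pred):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)

    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)

    sensitivity = tp / (tp + fn)           # recall / true positive rate
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    dice        = 2 * tp / (2 * tp + fp + fn)   # equals F1 for binary masks
    jaccard     = tp / (tp + fp + fn)           # intersection over union
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)
```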

    Automatic Skin Cancer Detection in Dermoscopy Images Based on Ensemble Lightweight Deep Learning Network

    Complex backgrounds and lesion characteristics make the automatic detection of lesions in dermoscopy images challenging. Previous solutions mainly rely on larger and more complex models to improve detection accuracy, while the significant intra-class differences and inter-class similarity of lesion features remain under-explored. At the same time, larger model sizes hinder further application of the algorithms. In this paper, we propose a lightweight skin cancer recognition model with feature discrimination based on the fine-grained classification principle. The proposed model includes a shared feature extraction module, two lesion classification networks, and a feature discrimination network. First, two sets of training samples (positive and negative sample pairs) are fed into the feature extraction module (a lightweight CNN) of the recognition model. The two sets of feature vectors output by the feature extraction module are then used to train the two classification networks and the feature discrimination network simultaneously, and a model fusion strategy is applied to further improve performance. In this way, the proposed recognition method extracts more discriminative lesion features and improves recognition performance with a small number of model parameters. In addition, based on the feature extraction module of the proposed recognition model, a U-Net architecture, and a transfer learning strategy, we build a lightweight semantic segmentation model for the lesion area of dermoscopy images, which achieves high-precision, end-to-end lesion segmentation without complicated image preprocessing. The performance of our approach was evaluated through extensive comparative experiments and feature visualization analysis. The results indicate that the proposed method outperforms state-of-the-art deep learning approaches on the ISBI 2016 skin lesion analysis towards melanoma detection challenge dataset.
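    The two-branch design described above (a shared lightweight feature extractor feeding both lesion classifiers and a feature discrimination network trained on positive/negative pairs) could be sketched roughly as follows. The MobileNetV2 backbone, the layer sizes, and the `LightweightRecognizer` class are assumptions made for illustration, not the paper's exact architecture.

```python
# Minimal PyTorch sketch of the two-branch idea: a shared lightweight feature
# extractor feeds both a lesion classifier and a feature discrimination network
# that judges whether a pair of samples belongs to the same class.
# Backbone choice (MobileNetV2) and dimensions are assumptions, not the paper's design.
import torch
import torch.nn as nn
from torchvision import models

class LightweightRecognizer(nn.Module):
    def __init__(self, num_classes=2, feat_dim=1280):
        super().__init__()
        self.backbone = models.mobilenet_v2(weights=None).features  # lightweight CNN
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Discriminator scores whether two feature vectors come from the same class.
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim * 2, 256), nn.ReLU(), nn.Linear(256, 1))

    def extract(self, x):
        return self.pool(self.backbone(x)).flatten(1)

    def forward(self, x1, x2):
        f1, f2 = self.extract(x1), self.extract(x2)
        logits1, logits2 = self.classifier(f1), self.classifier(f2)
        same_class = self.discriminator(torch.cat([f1, f2], dim=1))
        return logits1, logits2, same_class

# Training would combine cross-entropy losses on logits1/logits2 with a binary
# cross-entropy loss on `same_class` for the sampled positive/negative pairs.
```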