303 research outputs found

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the classification methods typically used by doctors, to give readers a historical overview of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although their diagnostic performance is considered limited. Moreover, for the benefit of a broad audience, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the traditional methods are considered insufficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a route to better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancer; given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who wish to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
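    To make the evaluation criteria above concrete, the following minimal Python sketch (not taken from the reviewed paper) computes sensitivity, specificity, precision, accuracy, F1 score, Dice coefficient, and Jaccard index from the confusion-matrix counts of a binary prediction; the toy mask values at the end are illustrative assumptions.

        import numpy as np

        def binary_metrics(y_true, y_pred):
            """Common diagnostic metrics from binary ground truth and predictions."""
            y_true = np.asarray(y_true).astype(bool)
            y_pred = np.asarray(y_pred).astype(bool)
            tp = np.sum(y_true & y_pred)           # true positives
            tn = np.sum(~y_true & ~y_pred)         # true negatives
            fp = np.sum(~y_true & y_pred)          # false positives
            fn = np.sum(y_true & ~y_pred)          # false negatives
            sensitivity = tp / (tp + fn)           # a.k.a. recall / true positive rate
            specificity = tn / (tn + fp)
            precision = tp / (tp + fp)
            accuracy = (tp + tn) / (tp + tn + fp + fn)
            f1 = 2 * precision * sensitivity / (precision + sensitivity)
            dice = 2 * tp / (2 * tp + fp + fn)     # equals F1 for binary labels/masks
            jaccard = tp / (tp + fp + fn)
            return dict(sensitivity=sensitivity, specificity=specificity,
                        precision=precision, accuracy=accuracy,
                        f1=f1, dice=dice, jaccard=jaccard)

        # Toy example: pixel-wise comparison of a predicted lesion mask with ground truth.
        truth = [1, 1, 0, 0, 1, 0, 1, 0]
        pred = [1, 0, 0, 0, 1, 1, 1, 0]
        print(binary_metrics(truth, pred))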

    Computer-Aided Diagnosis for Melanoma using Ontology and Deep Learning Approaches

    The emergence of deep-learning algorithms provides great potential to enhance the prediction performance of computer-aided diagnosis support systems. Recent research indicates that well-trained algorithms can reach the accuracy level of experienced senior clinicians in dermatology. However, the lack of interpretability and transparency hinders these algorithms’ utility in real-life practice: physicians and patients require a certain level of interpretability before they will accept and trust the results. Another limitation of AI algorithms is that they do not consider other information related to the diagnosis, for example typical dermoscopic features and diagnostic guidelines. Clinical guidelines for skin disease diagnosis are designed around dermoscopic features, yet a structured, standard representation of the relevant knowledge in the skin disease domain is lacking. To address these challenges, this dissertation builds an ontology capable of formally representing knowledge of dermoscopic features and develops an explainable deep learning model able to diagnose skin diseases and dermoscopic features. Additionally, the trained model can be applied to large-scale, unlabeled datasets to automate the feature-generation process. Computer-vision-aided feature extraction algorithms are combined with the deep learning model to improve overall classification accuracy and reduce manual annotation effort.

    On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors

    Deep learning based medical image classifiers have shown remarkable prowess in application areas such as ophthalmology, dermatology, pathology, and radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD) systems in real clinical settings is severely limited, primarily because their decision-making process remains largely obscure. This work aims to elucidate a deep learning based medical image classifier by verifying that the model learns and utilizes disease-related concepts similar to those described and employed by dermatologists. We used a well-trained, high-performing neural network developed by the REasoning for COmplex Data (RECOD) Lab for classification of three skin tumours, i.e., Melanocytic Naevi, Melanoma, and Seborrheic Keratosis, and performed a detailed analysis of its latent space. Two well-established and publicly available skin disease datasets, PH2 and derm7pt, are used for experimentation. Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs), introducing a novel training and significance-testing paradigm for CAVs. Our results on an independent evaluation set clearly show that the classifier learns and encodes human-understandable concepts in its latent representation. Additionally, TCAV scores (Testing with CAVs) suggest that the neural network indeed makes use of disease-related concepts in the correct way when making predictions. We anticipate that this work can not only increase medical practitioners’ confidence in CAD but also serve as a stepping stone for further development of CAV-based neural network interpretation methods. Comment: Accepted for the IEEE International Joint Conference on Neural Networks (IJCNN) 202
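    As a rough illustration of the CAV/TCAV mechanics described above (following the general TCAV formulation of Kim et al., not the authors’ code or the RECOD model), the sketch below fits a linear classifier separating concept activations from random activations at a chosen layer and then scores how often the class logit’s gradient points along the resulting concept direction; the activation and gradient arrays are assumed to be extracted elsewhere from the network under study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_cav(concept_acts, random_acts):
            """Fit a linear boundary between concept and random (counterexample)
            activations from one hidden layer; the CAV is the unit normal vector."""
            X = np.vstack([concept_acts, random_acts])
            y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            cav = clf.coef_.ravel()
            return cav / np.linalg.norm(cav)

        def tcav_score(logit_grads, cav):
            """Fraction of class examples whose directional derivative of the class
            logit along the CAV is positive, i.e. the concept raises the prediction.
            `logit_grads` holds d(logit)/d(activation) per example at the same layer."""
            return float(np.mean(logit_grads @ cav > 0))

        # Hypothetical usage with pre-extracted activations/gradients (assumed inputs):
        # cav = train_cav(acts_of_concept_images, acts_of_random_images)
        # score = tcav_score(grads_of_melanoma_images, cav)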

    melNET: A Deep Learning Based Model For Melanoma Detection

    Melanoma is the deadliest form of skin cancer; however, early-stage detection can improve treatment outcomes. In this research, a deep learning based model named “melNET” has been developed to detect melanoma in both dermoscopic and digital images. melNET uses the Inception-v3 architecture for the deep learning part; the architectural design of Inception-v3 is guided by the Hebbian principle and the intuition of multi-scale processing. The architecture takes advantage of parallel computing across multiple GPUs and employs RMSprop as the optimizer. During training, melNET uses back-propagation to retrain the Inception-v3 network, feeding back the errors from each iteration and thereby fine-tuning the network weights. After training, melNET can predict the diagnosis of a mole from a lesion image given as input. On a dermoscopic dataset of 200 images provided by PH2, melNET outperforms a YOLO-v2 based approach, improving sensitivity from 86.35% to 97.50%; specificity and accuracy also improve, from 85.90% to 87.50% and from 86.00% to 89.50%, respectively. melNET has also been evaluated on a digital dataset of 170 images provided by UMCG, showing an accuracy of 84.71%, which outperforms the 81.00% accuracy of the MED-NODE model. In both cases, melNET was treated as a binary classifier and evaluated with five-fold cross-validation. In addition, melNET performs detection in real time by leveraging the end-to-end Inception-v3 architecture.
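    For readers who want to reproduce this kind of transfer-learning setup, the following is a minimal sketch of fine-tuning Inception-v3 with RMSprop for binary melanoma classification in TensorFlow/Keras; it is not the authors’ melNET code, and the input size, learning rate, dropout, and the train_ds/val_ds data pipelines are assumptions.

        import tensorflow as tf

        # Load ImageNet-pretrained Inception-v3 without its classification head.
        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, input_shape=(299, 299, 3))
        base.trainable = True  # allow back-propagation to fine-tune pretrained weights

        inputs = tf.keras.Input(shape=(299, 299, 3))
        x = tf.keras.applications.inception_v3.preprocess_input(inputs)
        x = base(x)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        x = tf.keras.layers.Dropout(0.5)(x)
        outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # melanoma vs. benign
        model = tf.keras.Model(inputs, outputs)

        model.compile(
            optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
            loss="binary_crossentropy",
            metrics=["accuracy",
                     tf.keras.metrics.Recall(name="sensitivity"),
                     tf.keras.metrics.Precision(name="precision")])

        # Hypothetical tf.data pipelines of (image, label) batches would then be used:
        # model.fit(train_ds, validation_data=val_ds, epochs=20)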

    Contributions to the segmentation of dermoscopic images

    Master’s thesis. Master’s degree in Biomedical Engineering. Faculty of Engineering, University of Porto. 201