Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence for cancer diagnosis is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to apply deep learning and artificial neural networks to cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
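The evaluation criteria listed in the abstract (sensitivity, specificity, precision, accuracy, F1, Dice, Jaccard) all derive from the binary confusion matrix. As a minimal illustrative sketch (not taken from the paper's own code), they can be computed in plain Python:

```python
# Hedged sketch: the diagnostic evaluation metrics discussed above,
# computed from 0/1 ground-truth and predicted label lists.

def binary_metrics(y_true, y_pred):
    """Return common diagnostic metrics from binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    denom = precision + sensitivity
    f1 = 2 * precision * sensitivity / denom if denom else 0.0
    # For binary sets the Dice coefficient equals F1; Jaccard is intersection
    # over union of the positive regions.
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Note that AUC, unlike these threshold-based scores, requires continuous prediction scores rather than hard labels.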
Detection of pigment network in dermoscopy images
One of the most important structures in dermoscopy images is the pigment network, whose detection is one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect the pigment network in dermoscopy images. The proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image for use in the subsequent stage. Fourth, classification is performed by applying a feed-forward neural network, in order to classify the region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and produced better results than previous studies.
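The second stage above applies a bank of directional filters to highlight the line-like structure of the pigment network. As a toy stand-in (the kernels, sizes, and orientations here are illustrative assumptions, not the paper's actual filter bank), oriented line kernels at 0°, 45°, 90°, and 135° can be convolved with the image and the maximum response taken:

```python
import numpy as np

def directional_line_kernels(length=5):
    """Oriented line kernels (0°, 45°, 90°, 135°) — a toy stand-in for
    the bank of directional filters used in stage two."""
    k = length // 2
    horizontal = np.zeros((length, length))
    horizontal[k, :] = 1.0
    kernels = [horizontal,            # 0°
               np.eye(length)[::-1],  # 45° diagonal
               horizontal.T,          # 90° (vertical)
               np.eye(length)]        # 135° diagonal
    return [kk / kk.sum() for kk in kernels]

def filter_response(img, kernel):
    """Valid-mode correlation; strong where the image matches the kernel's
    line orientation."""
    h, w = kernel.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

img = np.zeros((9, 9))
img[4, :] = 1.0  # synthetic horizontal network line
resp = [filter_response(img, kk).max() for kk in directional_line_kernels()]
```

On this synthetic image the horizontal kernel dominates the responses; thresholding the per-pixel maximum over orientations would feed the morphological connected-component step.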
Automatic Detection of Critical Dermoscopy Features for Malignant Melanoma Diagnosis
Improved methods for computer-aided analysis of identifying features of skin lesions from digital images of the lesions are provided. Improved preprocessing of the image is provided that 1) eliminates artifacts that occlude or distort skin lesion features and 2) identifies groups of pixels within the skin lesion that represent features and/or facilitate the quantification of features, including improved digital hair-removal algorithms. Improved methods for analyzing lesion features are also provided.
Detection of Solid Pigment in Dermatoscopy Images using Texture Analysis
Background/aims: Epiluminescence microscopy (ELM), also known as dermoscopy or dermatoscopy, is a non-invasive, in vivo technique that permits visualization of features of pigmented melanocytic neoplasms that are not discernible by examination with the naked eye. ELM offers a completely new range of visual features. One such feature is the solid pigment, also called the blotchy pigment or dark structureless area. Our goal was to automatically detect this feature and determine whether its presence is useful in distinguishing benign from malignant pigmented lesions.
Methods: Here, a texture-based algorithm is developed for the detection of solid pigment. The parameters d and a used in calculating the neighboring gray level dependence matrix (NGLDM) numbers were chosen as optimal by experimentation. The algorithms were tested on a set of 37 images. A new index is presented for separating benign and malignant lesions, based on the presence of solid pigment in the periphery.
Results: The NGLDM large number emphasis N2 was satisfactory for the detection of the solid pigment. Nine lesions had solid pigment detected, and among our 37 lesions, no melanoma lacked solid pigment. The index for separation of benign and malignant lesions was applied to the nine lesions. We were able to separate the benign lesions with solid pigment from the malignant lesions with the exception of only one lesion, a Spitz nevus that mimicked a malignant melanoma.
Conclusion: Texture methods may be useful in detecting important dermatoscopy features in digitized images, and a new index may be useful in separating benign from malignant lesions. Testing on a larger set of lesions is needed before further conclusions can be drawn.
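The NGLDM counts, for each pixel, how many neighbours within distance d have a gray level within a of the centre pixel; the "large number emphasis" statistic (N2 above) is high in smooth, blotchy regions such as solid pigment. A minimal sketch, assuming the standard NGLDM definition rather than the paper's exact parameter choices:

```python
import numpy as np

def ngldm(img, d=1, a=0):
    """Neighbouring gray level dependence matrix Q[g, s]: number of pixels
    with gray level g having exactly s neighbours (Chebyshev distance <= d)
    whose level differs from g by at most a."""
    H, W = img.shape
    levels = int(img.max()) + 1
    max_nb = (2 * d + 1) ** 2 - 1
    Q = np.zeros((levels, max_nb + 1))
    for i in range(H):
        for j in range(W):
            g = int(img[i, j])
            s = 0
            for di in range(-d, d + 1):
                for dj in range(-d, d + 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and \
                            abs(int(img[ni, nj]) - g) <= a:
                        s += 1
            Q[g, s] += 1
    return Q

def large_number_emphasis(Q):
    """NGLDM large number emphasis (N2): weights large dependence counts,
    so uniform 'solid pigment' regions score high."""
    s = np.arange(Q.shape[1])
    return (Q * s ** 2).sum() / Q.sum()

smooth = np.zeros((6, 6), dtype=int)             # uniform dark blotch
noisy = np.arange(36).reshape(6, 6) % 2          # alternating texture
```

On these toy patches, the uniform patch scores far higher than the alternating one, matching the intuition that N2 detects structureless areas.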
Graph-Based Intercategory and Intermodality Network for Multilabel Classification and Melanoma Diagnosis of Skin Lesions in Dermoscopy and Clinical Images
The identification of melanoma involves an integrated analysis of skin lesion images acquired using the clinical and dermoscopy modalities. Dermoscopic images provide a detailed view of the subsurface visual structures that supplement the macroscopic clinical images. Melanoma diagnosis is commonly based on the 7-point visual category checklist (7PC). The 7PC contains intrinsic relationships between categories that can aid classification, such as shared features, correlations, and the contributions of categories towards diagnosis. Manual classification is subjective and prone to intra- and interobserver variability. This presents an opportunity for automated methods to improve diagnosis. Current state-of-the-art methods focus on a single image modality and ignore information from the other, or do not fully leverage the complementary information from both modalities. Further, no existing method exploits the intercategory relationships in the 7PC. In this study, we address these issues by proposing a graph-based intercategory and intermodality network (GIIN) with two modules. A graph-based relational module (GRM) leverages intercategorical relations and intermodal relations, and prioritises the visual structure details from dermoscopy by encoding category representations in a graph network. The category embedding learning module (CELM) captures representations that are specialised for each category and support the GRM. We show that our modules are effective at enhancing classification performance using a public dataset of dermoscopy-clinical images, and that our method outperforms the state of the art at classifying the 7PC categories and diagnosis.
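Encoding category representations in a graph network, as the GRM does, typically builds on graph-convolution layers that propagate features between related categories. A minimal NumPy sketch of one such layer (the adjacency, dimensions, and random weights are illustrative assumptions, not the GIIN architecture itself):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A: category adjacency (intercategory relations), H: node features,
    W: weight matrix (random here, purely illustrative)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees (>= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
n_categories, feat_in, feat_out = 7, 16, 8    # e.g. the seven 7PC categories
A = (rng.random((n_categories, n_categories)) > 0.6).astype(float)
A = np.maximum(A, A.T)                        # symmetric category relations
H = rng.standard_normal((n_categories, feat_in))
W = rng.standard_normal((feat_in, feat_out))
H1 = gcn_layer(A, H, W)                       # refined category embeddings
```

Each row of `H1` is a category embedding that has mixed in information from its graph neighbours, which is the mechanism that lets shared features and correlations between checklist categories influence each other's predictions.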
Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification using Dermoscopy and Clinical Images
Many skin lesion analysis (SLA) methods have recently focused on developing multi-modal, multi-label classification methods, for two reasons. First, multi-modal data, i.e., clinical and dermoscopy images, can provide complementary information and thus more accurate results than single-modal data. Second, multi-label classification with the seven-point checklist (SPC) criteria as an auxiliary task not only boosts the diagnostic accuracy of melanoma in the deep learning (DL) pipeline but also provides more useful functions to clinicians, as the checklist is commonly used in dermatologists' diagnoses. However, most methods focus only on designing a better module for multi-modal data fusion; few explore utilizing the label correlation between the SPC and skin disease for performance improvement. This study fills that gap by introducing a Graph Convolution Network (GCN) that exploits prior co-occurrence between categories as a correlation matrix in the DL model for multi-label classification. However, directly applying the GCN degraded performance in our experiments; we attribute this to the weak generalization ability of the GCN when statistical samples of medical data are insufficient. We tackle this issue by proposing a Graph-Ensemble Learning Model (GELN) that views the prediction from the GCN as complementary information to the predictions from the fusion model and adaptively fuses them by a weighted averaging scheme, which can utilize the valuable information from the GCN while avoiding its negative influences as much as possible. To evaluate our method, we conduct experiments on public datasets. The results illustrate that our GELN can consistently improve classification performance on different datasets and that the proposed method achieves state-of-the-art performance in SPC and diagnosis classification.
Comment: Submitted to TNNLS in 1st July 202
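The weighted averaging scheme at the heart of the GELN can be sketched in a few lines. Here the weight `w` is a hypothetical fixed constant for illustration; the paper fuses the two predictions adaptively:

```python
import numpy as np

def ensemble_predict(p_fusion, p_gcn, w=0.3):
    """Weighted averaging in the spirit of the GELN scheme: the GCN
    prediction supplements the multi-modal fusion prediction.
    w = 0.3 is an illustrative assumption, not the paper's value."""
    assert 0.0 <= w <= 1.0
    return (1.0 - w) * p_fusion + w * p_gcn

p_fusion = np.array([0.7, 0.2, 0.1])   # toy class probabilities (fusion model)
p_gcn = np.array([0.5, 0.4, 0.1])      # toy class probabilities (GCN branch)
p = ensemble_predict(p_fusion, p_gcn)
```

Because both inputs are valid probability distributions and the weights sum to one, the fused output remains a valid distribution; keeping `w` small bounds the damage a poorly generalizing GCN can do while still letting its label-correlation signal shift the ranking.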