
    Content-Based Image Retrieval of Skin Lesions by Evolutionary Feature Synthesis

    This paper gives an example of evolved features that improve image retrieval performance. A content-based image retrieval system for skin lesion images is presented. The aim is to support decision making by retrieving and displaying relevant past cases that are visually similar to the one under examination. Skin lesions of five common classes, including two non-melanoma cancer types, are used. Colour and texture features are extracted from the lesions. Evolutionary algorithms are used to create composite features that optimise a similarity matching function. Experiments on our database of 533 images are performed and the results are compared to those obtained using simple features. The use of the evolved composite features improves precision by about 7%.
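    As a rough illustration of the composite-feature idea (not the paper's actual operators), the Python sketch below evolves a weight vector that combines pre-extracted colour and texture features into a weighted distance and keeps the weights that maximise retrieval precision. The feature matrix, label array, and the simple (mu + lambda)-style update are assumptions made for the example.

    import numpy as np

    def retrieve(query, database, weights, k=10):
        # Weighted Euclidean distance between the query and every database item.
        d = np.sqrt((((database - query) * weights) ** 2).sum(axis=1))
        return np.argsort(d)[:k]

    def precision_at_k(query_idx, features, labels, weights, k=10):
        # Fraction of the k retrieved lesions sharing the query's class.
        idx = retrieve(features[query_idx], features, weights, k + 1)
        idx = idx[idx != query_idx][:k]          # drop the query itself
        return (labels[idx] == labels[query_idx]).mean()

    def evolve_weights(features, labels, generations=50, pop=30, rng=None):
        # Simple (mu + lambda) evolution of a feature-weight vector that
        # maximises mean retrieval precision; a stand-in for the paper's
        # composite-feature synthesis, not its actual operators.
        rng = rng or np.random.default_rng(0)
        population = rng.uniform(0, 1, size=(pop, features.shape[1]))

        def fitness(w):
            return np.mean([precision_at_k(i, features, labels, w)
                            for i in range(len(features))])

        for _ in range(generations):
            scores = np.array([fitness(w) for w in population])
            parents = population[np.argsort(scores)[-pop // 2:]]
            children = parents + rng.normal(0, 0.1, size=parents.shape)
            population = np.vstack([parents, np.clip(children, 0, None)])
        scores = np.array([fitness(w) for w in population])
        return population[scores.argmax()]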

    Detection and Classification Techniques for Skin Lesion Images: A Review

    Dermoscopy needs sophisticated and robust systems for successful treatment, which would also help reduce the number of biopsies. Computer-aided diagnosis of melanoma supports clinical decision making by providing dermatologists and practitioners with relevant supporting evidence from previously known cases, and it also eases the management of clinical data. Such systems play the role of an expert consultant by presenting cases that are similar not only in diagnosis but also in appearance, and they help in the early detection and diagnosis of skin diseases. With advances in technology, new algorithms have also been proposed to develop more efficient CAD systems. This article reviews various techniques that have been proposed for the detection and classification of skin lesions.

    Utility of Non-rule-based Visual Matching as a Strategy to Allow Novices to Achieve Skin Lesion Diagnosis

    Non-analytical reasoning is thought to play a key role in dermatological diagnosis. Considering its potential importance, surprisingly little work has been done to investigate whether similar identification processes can be supported in non-experts. We describe here prototype diagnostic support software, which we have used to examine the ability of medical students (at the beginning and end of a dermatology attachment) and lay volunteers to diagnose 12 images of common skin lesions. Overall, the non-experts using the software had a diagnostic accuracy of 98% (923/936), compared with 33% (215/648) for the control group (Wilcoxon p < 0.0001). We have demonstrated, within the constraints of a simplified clinical model, that novices' diagnostic scores are significantly increased by the use of a structured image database coupled with matching of index and referent images. The novices achieve this high degree of accuracy without any use of explicit definitions of likeness or rule-based strategies.

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, with all types of audience in mind, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence and cancer diagnosis are gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models that have been successfully applied to different types of cancer. Given the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch knowledge of the state-of-the-art achievements.
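    Because the review grounds its evaluation discussion in overlap and classification metrics (Dice coefficient, Jaccard index, sensitivity, specificity, precision, accuracy), a minimal Python sketch of how these are computed for a binary mask may be useful; the function names and the NumPy-array interface are illustrative assumptions, not code taken from the paper.

    import numpy as np

    def confusion_counts(pred, target):
        # pred, target: boolean arrays of the same shape (binary masks or labels).
        tp = np.logical_and(pred, target).sum()
        fp = np.logical_and(pred, ~target).sum()
        fn = np.logical_and(~pred, target).sum()
        tn = np.logical_and(~pred, ~target).sum()
        return tp, fp, fn, tn

    def evaluation_metrics(pred, target, eps=1e-8):
        tp, fp, fn, tn = confusion_counts(pred, target)
        return {
            "dice":        2 * tp / (2 * tp + fp + fn + eps),
            "jaccard":     tp / (tp + fp + fn + eps),
            "sensitivity": tp / (tp + fn + eps),   # recall / true positive rate
            "specificity": tn / (tn + fp + eps),
            "precision":   tp / (tp + fp + eps),
            "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        }

    # Toy 4x4 segmentation masks, purely for illustration.
    pred   = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
    target = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
    print(evaluation_metrics(pred, target))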

    Semi-Supervised Semantic Image Segmentation by Deep Diffusion Models and Generative Adversarial Networks

    Typically, deep learning models for image segmentation tasks are trained on large datasets of images annotated at the pixel level, which can be expensive and highly time-consuming. One way to reduce the number of annotated images required for training is to adopt a semi-supervised approach. In this regard, generative deep learning models, specifically Generative Adversarial Networks (GANs), have been adapted to the semi-supervised training of segmentation tasks. This work proposes MaskGDM, a deep learning architecture that combines ideas from EditGAN, a GAN that jointly models images and their segmentations, with a generative diffusion model. With careful integration, we find that using a generative diffusion model can improve EditGAN's performance on multiple segmentation datasets, both multi-class and with binary labels. According to the quantitative results obtained, the proposed model improves multi-class image segmentation by 4.5% and 5.0% when compared to the EditGAN and DatasetGAN models, respectively. Moreover, on the ISIC dataset, our proposal improves the results of other models by up to 11% for the binary image segmentation approach.
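    MaskGDM itself couples EditGAN with a generative diffusion model; as a much simpler stand-in for the general principle it builds on, the sketch below shows a semi-supervised segmentation step that mixes a supervised cross-entropy loss on annotated images with a confidence-filtered pseudo-label loss on unannotated ones. The PyTorch interface, threshold, and loss weighting are assumptions for illustration only.

    import torch
    import torch.nn.functional as F

    def semi_supervised_step(model, optimizer, labeled_batch, unlabeled_images,
                             unsup_weight=0.5, confidence=0.9):
        # Supervised loss on pixel-annotated images.
        images, masks = labeled_batch               # masks: (B, H, W) class indices
        sup_loss = F.cross_entropy(model(images), masks)

        # Pseudo-label loss on unannotated images, keeping only confident pixels.
        with torch.no_grad():
            probs = torch.softmax(model(unlabeled_images), dim=1)
            conf, pseudo = probs.max(dim=1)         # per-pixel confidence and label
        unsup_loss = F.cross_entropy(model(unlabeled_images), pseudo, reduction="none")
        unsup_loss = (unsup_loss * (conf > confidence)).mean()

        loss = sup_loss + unsup_weight * unsup_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()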

    A survey, review, and future trends of skin lesion segmentation and classification

    The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges associated with manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for developing CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentation, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal current trends based on utilization frequency. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems on minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
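    Among the performance-enhancing approaches the survey covers, ensembling is the easiest to illustrate: the sketch below averages per-class probabilities from several classifiers (soft voting) and then takes the argmax. The array shapes, weights, and toy probabilities are assumptions for illustration.

    import numpy as np

    def ensemble_predict(prob_list, weights=None):
        # prob_list: list of (N, C) arrays of per-class probabilities, one per model.
        # Soft voting: weighted average of probabilities, then argmax per sample.
        probs = np.stack(prob_list)                   # (M, N, C)
        if weights is None:
            weights = np.full(len(prob_list), 1.0 / len(prob_list))
        avg = np.tensordot(weights, probs, axes=1)    # (N, C)
        return avg.argmax(axis=1), avg

    # Toy example: three models, four samples, two classes (benign / malignant).
    m1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
    m2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.5, 0.5]])
    m3 = np.array([[0.7, 0.3], [0.5, 0.5], [0.1, 0.9], [0.6, 0.4]])
    labels, avg_probs = ensemble_predict([m1, m2, m3])
    print(labels)   # ensembled class per sample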

    Quantification of cell cycle markers in oral precancer

    The overall aims of the studies were to obtain objective measures of oral precancerous lesions based upon studies of the cell cycle, and to investigate these parameters as possible prognostic indicators with regard to malignant transformation of these lesions. The majority of the precancerous lesions in the present study were dysplastic; these are the lesions that cause the greatest clinical concern with regard to malignant transformation. The first part of the study investigated the S-phase, the growth fraction, and the relationship of these to each other and to the degree of dysplasia, with the aim of obtaining objective measurements of dysplasia and prognostic information. This was achieved by using BrdU to label cells in the S-phase and the anti-Ki67 antibody as a marker of cells in the growth fraction. The BrdU labelling index was demonstrated to provide an objective assessment of the dysplastic lesions when compared to the semi-objective method of Smith and Pindborg (1969). The ratio of the S-phase to the growth fraction was higher in those lesions that progressed to malignancy and was proposed as a possible prognostic indicator. A number of methodological problems were identified in this first part of the study, and these were investigated further through the development of techniques in Chapter 3. Firstly, a method was developed to enable BrdU-labelled tissue to be formalin fixed and then allow other cell-cycle-associated markers to be studied on sequential sections of the same tissue block. Secondly, numerous antigen retrieval techniques were evaluated in order to optimise the immunohistochemical staining of Ki67 and, subsequently, of other antibodies utilised in later parts of the study. Normal oral epithelium, obtained post-mortem, was studied in Chapter 4 to investigate the apparent underestimation of proliferating cells identified by the anti-Ki67 antibody in Chapter 2. It was apparent that, as in dysplastic epithelia, Ki67 significantly underestimated progenitor cells of the morphologically identified progenitor compartment; Ki67 did not identify all of the cells that would have been expected to be in the cell cycle, as it was originally claimed to do.
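    For readers unfamiliar with the quantities above: the BrdU labelling index is the fraction of counted cells in S-phase (BrdU-positive), the Ki67 index estimates the growth fraction, and their ratio is the quantity discussed as a possible prognostic indicator. The short Python sketch below shows the arithmetic with hypothetical cell counts; the numbers are illustrative only and are not taken from the study.

    def labelling_index(positive_cells, total_cells):
        # Fraction of counted cells positive for a marker (e.g. BrdU or Ki67).
        return positive_cells / total_cells

    # Hypothetical counts from one dysplastic lesion (illustrative values only).
    brdu_positive, ki67_positive, total = 120, 400, 2000

    s_phase_fraction = labelling_index(brdu_positive, total)   # BrdU labelling index
    growth_fraction = labelling_index(ki67_positive, total)    # Ki67 labelling index
    ratio = s_phase_fraction / growth_fraction                  # possible prognostic indicator
    print(f"BrdU LI = {s_phase_fraction:.3f}, Ki67 LI = {growth_fraction:.3f}, ratio = {ratio:.2f}")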

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and this challenge has therefore attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation map of a given image is computed using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST dataset with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
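    The pipeline described above amounts to inserting a fixed, contour-extracting transform in front of the classifier. The sketch below shows that wiring in PyTorch; delineation_map uses a plain Sobel gradient magnitude as a stand-in for the CORF push-pull inhibition operator (it does not implement CORF), and the class and function names are assumptions for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def delineation_map(image: torch.Tensor) -> torch.Tensor:
        # Stand-in for the CORF push-pull inhibition operator: a Sobel gradient
        # magnitude that produces a contour-like map. The real CORF operator also
        # models push-pull inhibition, which is what suppresses noise.
        # image: (1, H, W) grayscale tensor in [0, 1].
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(image.unsqueeze(0), kx, padding=1)
        gy = F.conv2d(image.unsqueeze(0), ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2).squeeze(0)

    class DelineationAugmentedClassifier(nn.Module):
        # Wraps any CNN (e.g. an AlexNet adapted to Fashion MNIST) so that it
        # classifies delineation maps instead of raw pixels.
        def __init__(self, cnn: nn.Module):
            super().__init__()
            self.cnn = cnn

        def forward(self, images: torch.Tensor) -> torch.Tensor:   # (B, 1, H, W)
            maps = torch.stack([delineation_map(img) for img in images])
            return self.cnn(maps)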