42 research outputs found

    A machine learning approach to automatic detection of irregularity in skin lesion border using dermoscopic images

    Skin lesion border irregularity is considered an important clinical feature for the early diagnosis of melanoma, representing the B feature in the ABCD rule. In this article we propose an automated approach for skin lesion border irregularity detection. The approach involves extracting the skin lesion from the image, detecting the skin lesion border, measuring the border irregularity, and training an ensemble of a Convolutional Neural Network and a Gaussian naive Bayes classifier for the automatic detection of border irregularity, which yields an objective decision on whether the skin lesion border is regular or irregular. The approach achieves outstanding results, obtaining an accuracy, sensitivity, specificity, and F-score of 93.6%, 100%, 92.5%, and 96.1%, respectively.
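    A minimal sketch of the kind of soft-voting ensemble the abstract describes, pairing a small CNN with a Gaussian naive Bayes classifier. The toy data, the network architecture, and the scalar border descriptors below are placeholders for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: soft-voting ensemble of a small CNN (on border image patches)
# and a Gaussian naive Bayes model (on scalar irregularity measures).
# Data, architecture, and feature names are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Placeholder data: 64x64 border masks and 3 scalar border descriptors.
n = 200
masks = rng.random((n, 1, 64, 64)).astype(np.float32)
scalar_feats = rng.random((n, 3))
labels = rng.integers(0, 2, size=n)          # 0 = regular, 1 = irregular

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

cnn = TinyCNN()
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
X = torch.from_numpy(masks)
y = torch.from_numpy(labels)
for _ in range(5):                            # a few illustrative epochs
    opt.zero_grad()
    loss = loss_fn(cnn(X), y)
    loss.backward()
    opt.step()

nb = GaussianNB().fit(scalar_feats, labels)

# Soft voting: average the two probability estimates.
with torch.no_grad():
    p_cnn = torch.softmax(cnn(X), dim=1).numpy()
p_nb = nb.predict_proba(scalar_feats)
p_ensemble = (p_cnn + p_nb) / 2.0
pred = p_ensemble.argmax(axis=1)              # 1 = border flagged as irregular
print("flagged irregular:", int(pred.sum()), "of", n)
```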

    Novel melanoma diagnosis and prognosis methods based on 3D fringe projection

    This project aims to find an effective and non-invasive methodology to assist in the diagnosis of skin lesions using their 3D profile features.

    Towards the early detection of melanoma by automating the measurement of asymmetry, border irregularity, color variegation, and diameter in dermoscopy images

    The incidence of melanoma, the most aggressive form of skin cancer, has increased more than that of many other cancers in recent years. The aim of this thesis is to develop objective measures and automated methods to evaluate the ABCD (Asymmetry, Border irregularity, Color variegation, and Diameter) rule features in dermoscopy images; the ABCD rule is a popular method that provides a simple means for the appraisal of pigmented lesions that might require further investigation by a specialist. However, gaps remain in the literature concerning the evaluation of those features. To extract skin lesions, two segmentation approaches that are robust to inherent dermoscopic image problems have been proposed and shown to outperform other approaches used in the literature. Measures for quantifying asymmetry and border irregularity have been developed. The asymmetry measure describes invariant features, provides a compact representation of the image, and captures discriminative properties of skin lesions. The border irregularity measure, which is preceded by a border detection step carried out by a novel edge detection algorithm that represents the image in terms of fuzzy concepts, is rotation invariant, characterizes the complexity of the shape associated with the border, and is robust to noise. To automate the measures, classification methods based on ensemble learning that take the ambiguity of the data into consideration have been proposed. Color variegation was evaluated by determining the suspicious colors of melanoma from a color palette generated for the image, and the diameter of the skin lesion was measured using a shape descriptor that was ultimately expressed in millimeters. The work developed in this thesis reflects the standard pipeline of automatic dermoscopic image analysis and constitutes a computer-aided diagnosis (CAD) system for the automatic detection and objective evaluation of the ABCD rule features. It can be used as an objective bedside tool serving as a diagnostic adjunct in the clinical assessment of skin lesions.
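    As a hedged illustration of the kind of asymmetry and border-irregularity descriptors discussed above, the sketch below computes two standard textbook measures on a binary lesion mask: a compactness-style irregularity index and a flip-overlap asymmetry score. These are generic examples, not the specific measures developed in the thesis.

```python
# Minimal sketch of two ABCD-style measures on a binary lesion mask.
import numpy as np

def compactness_irregularity(mask: np.ndarray) -> float:
    """perimeter^2 / (4*pi*area): minimal for a disc, grows with border
    complexity (the digital perimeter estimate is approximate)."""
    area = mask.sum()
    # 4-neighbourhood boundary pixels approximate the perimeter.
    padded = np.pad(mask, 1)
    boundary = mask & (
        (padded[:-2, 1:-1] == 0) | (padded[2:, 1:-1] == 0) |
        (padded[1:-1, :-2] == 0) | (padded[1:-1, 2:] == 0)
    )
    perimeter = boundary.sum()
    return float(perimeter ** 2 / (4 * np.pi * area))

def flip_asymmetry(mask: np.ndarray) -> float:
    """Fraction of lesion pixels that do not overlap after a horizontal flip
    about the lesion centroid; 0.0 means perfectly symmetric."""
    ys, xs = np.nonzero(mask)
    cx = int(round(xs.mean()))
    flipped = np.zeros_like(mask)
    flipped_xs = np.clip(2 * cx - xs, 0, mask.shape[1] - 1)
    flipped[ys, flipped_xs] = 1
    non_overlap = np.logical_xor(mask, flipped).sum()
    return float(non_overlap / (2 * mask.sum()))

# Toy example: a filled disc should give a low irregularity score
# and a near-zero asymmetry score.
yy, xx = np.mgrid[:128, :128]
disc = ((yy - 64) ** 2 + (xx - 64) ** 2 <= 40 ** 2).astype(np.uint8)
print(compactness_irregularity(disc), flip_asymmetry(disc))
```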

    High-Level Intuitive Features (HLIFs) for Melanoma Detection

    Feature extraction from segmented skin lesions is a pivotal step for implementing accurate decision support systems. Existing feature sets combine many ad-hoc calculations and are unable to easily provide intuitive diagnostic reasoning. This thesis presents the design and evaluation of a set of features for objectively detecting melanoma in an intuitive and accurate manner. We call these "high-level intuitive features" (HLIFs). The current clinical standard for detecting melanoma, the deadliest form of skin cancer, is visual inspection of the skin's surface. A widely adopted rule for detecting melanoma is the "ABCD" rule, whereby the doctor identifies the presence of Asymmetry, Border irregularity, Colour patterns, and Diameter. The adoption of specialized medical devices for this purpose is extremely slow due to the added temporal and financial burden. Therefore, recent research efforts have focused on decision support systems that analyse images of skin lesions acquired with standard consumer-grade cameras. The central benefit of these systems is the provision of technology with low barriers to adoption. Recently proposed skin lesion feature sets have been large sets of low-level features attempting to model the widely adopted ABCD criteria of melanoma. These result in high-dimensional feature spaces, which are computationally expensive and sparse due to the lack of available clinical data. It is difficult to convey diagnostic rationale using these feature sets due to their inherently ad-hoc mathematical nature. This thesis presents and applies a generic framework for designing HLIFs for decision support systems relying on intuitive observations. By definition, an HLIF is designed explicitly to model a human-observable characteristic such that the feature score can be intuited by the user. Thus, along with the classification label, visual rationale can be provided to further support the prediction. This thesis applies the HLIF framework to design 10 HLIFs for skin cancer detection, following the ABCD rule; that is, HLIFs modeling asymmetry, border irregularity, and colour patterns are presented. The thesis evaluates the effectiveness of HLIFs in a standard classification setting. Using publicly available images obtained in unconstrained environments, the set of HLIFs is compared against a recently published low-level feature set. Since the focus is on evaluating the features, illumination correction and manually defined segmentations are used, along with a linear classification scheme. The promising results indicate that HLIFs capture more relevant information than low-level features, and that concatenating the HLIFs to the low-level feature set results in improved accuracy metrics. Visual intuitive information is provided to demonstrate the ability to convey intuitive diagnostic reasoning to the user.
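    The sketch below illustrates the general idea of an HLIF-style, human-explainable feature: it counts how many of the six colours commonly listed as suspicious in the ABCD rule (white, red, light brown, dark brown, blue-gray, black) appear inside a segmented lesion. The reference RGB values, the clustering setup, and the distance threshold are illustrative assumptions, not the thesis's actual HLIFs.

```python
# Hedged sketch of one HLIF-style colour feature for a segmented lesion.
import numpy as np
from sklearn.cluster import KMeans

SUSPICIOUS_RGB = {
    "white":       (255, 255, 255),
    "red":         (204,  51,  51),
    "light brown": (181, 134,  84),
    "dark brown":  ( 92,  64,  51),
    "blue-gray":   ( 94, 113, 134),
    "black":       ( 30,  30,  30),
}

def colour_variegation_hlif(lesion_pixels: np.ndarray,
                            n_clusters: int = 8,
                            threshold: float = 60.0) -> tuple[int, list[str]]:
    """lesion_pixels: (N, 3) RGB values inside the segmented lesion.
    Returns the number of suspicious colours found and their names, so the
    score can be explained to the user ("contains dark brown and blue-gray")."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(lesion_pixels.astype(float))
    found = []
    for name, ref in SUSPICIOUS_RGB.items():
        dists = np.linalg.norm(km.cluster_centers_ - np.array(ref), axis=1)
        if dists.min() < threshold:
            found.append(name)
    return len(found), found

# Toy usage with random pixels standing in for a segmented lesion.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(5000, 3))
score, names = colour_variegation_hlif(pixels)
print(score, names)
```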

    Classification of clinical outcomes using high-throughput and clinical informatics.

    It is widely recognized that many cancer therapies are effective only for a subset of patients. However, clinical studies are most often powered to detect an overall treatment effect. To address this issue, classification methods are increasingly being used to predict the subset of patients who respond differently to treatment. This study begins with a brief history of classification methods, with an emphasis on applications involving melanoma. Nonparametric methods suitable for predicting subsets of patients who respond differently to treatment are then reviewed. Each method has different ways of incorporating continuous, categorical, clinical, and high-throughput covariates. For nonparametric and parametric methods, distance measures specific to each method are used to make classification decisions. Approaches are outlined that employ these distances to measure treatment interactions and predict which patients are more sensitive to treatment. Simulations are also carried out to examine the empirical power of some of these classification methods in an adaptive signature design; results were compared with logistic regression models. It was found that parametric and nonparametric methods performed reasonably well, with their relative performance depending on the simulation scenario. Finally, a method was developed to evaluate the power and sample size needed for an adaptive signature design in order to predict the subset of patients sensitive to treatment. It is hoped that this study will stimulate further development of nonparametric and parametric methods for predicting subsets of patients who respond differently to treatment.
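    A hedged sketch of a simulation-based power calculation in the same spirit: data are generated with a treatment-by-biomarker interaction, a logistic regression with an interaction term is fitted, and empirical power is the fraction of replicates in which the interaction is significant. The effect sizes, sample size, and alpha level are illustrative choices, not values from the study.

```python
# Hedged sketch: empirical power of detecting a treatment-by-biomarker
# interaction via logistic regression, estimated by simulation.
import numpy as np
import statsmodels.api as sm

def simulate_power(n=300, n_reps=500, beta_interaction=1.2, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        treat = rng.integers(0, 2, n)           # randomised treatment arm
        marker = rng.normal(size=n)             # continuous biomarker
        logit = -0.5 + 0.2 * treat + 0.3 * marker + beta_interaction * treat * marker
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        X = sm.add_constant(np.column_stack([treat, marker, treat * marker]))
        fit = sm.Logit(y, X).fit(disp=0)
        if fit.pvalues[3] < alpha:              # column 3 = interaction term
            hits += 1
    return hits / n_reps

print("empirical power:", simulate_power())
```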

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this goal, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians’ unique knowledge, can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations (such as segmentation, co-registration, classification, and dimensionality reduction) and multi-omics data integration.

    Understanding deep learning

    Deep neural networks have reached impressive performance in many tasks in computer vision and its applications. However, research into understanding deep neural networks is hard to evaluate: since it is unknown which features a deep neural network actually uses, it is difficult to empirically verify whether a claim about which feature the network uses is correct. The state of the art for understanding which features a deep neural network uses to reach its prediction is saliency maps. However, all methods built on saliency maps share shortcomings that leave a gap between the current state of the art and the requirements for understanding deep neural networks. This work describes a method that does not suffer from these shortcomings. To this end, we employ the framework of causal modeling to determine whether a feature is used by the neural network. We present theoretical evidence that our method is able to correctly identify whether a feature is used. Furthermore, we provide two studies as empirical evidence. First, we show that our method can further the understanding of automatic skin lesion classifiers. There, we find that some of the features in the ABCD rule are used by the classifiers to identify melanoma but not to identify seborrheic keratosis. In contrast, all classifiers rely heavily on the bias variables, particularly the age of the patient and the presence of colorful patches in the input image. Second, we apply our method to adversarial debiasing, where the goal is to stop a neural network from using a known bias variable. We demonstrate on a toy example and on real-world images that our approach outperforms the state of the art in adversarial debiasing.
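    The sketch below shows a simple interventional sanity check for whether a trained classifier relies on a given input variable (for instance, a bias variable such as patient age): the variable is replaced by values drawn independently of the rest of the input, and the change in predictions is measured. This is only a permutation-style heuristic for illustration; the thesis develops a proper causal-modeling framework, and the data and model here are synthetic placeholders.

```python
# Hedged sketch: does the classifier use feature x_i? (interventional heuristic)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: the label depends on x0 and on the "bias" feature x2, not on x1.
X = rng.normal(size=(n, 3))
y = ((1.5 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def usage_score(model, X, feature_idx, rng, n_draws=20):
    """Mean absolute change in predicted probability when the feature is
    replaced by values drawn independently of the rest of the input."""
    base = model.predict_proba(X)[:, 1]
    shifts = []
    for _ in range(n_draws):
        X_int = X.copy()
        X_int[:, feature_idx] = rng.permutation(X[:, feature_idx])  # intervention
        shifts.append(np.abs(model.predict_proba(X_int)[:, 1] - base).mean())
    return float(np.mean(shifts))

for i in range(3):
    print(f"feature x{i}: usage score = {usage_score(clf, X, i, rng):.3f}")
# Expected: x0 and x2 get clearly non-zero scores, x1 stays near zero.
```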

    Automatic diagnosis of melanoma using modern machine learning techniques

    The incidence and mortality rates of skin cancer remain a major concern in many countries. According to the latest statistics on melanoma skin cancer, in the United States alone 7,650 deaths are expected in 2022, which represents 800 and 470 more deaths than in 2020 and 2021, respectively. In 2022, melanoma ranks as the fifth leading cause of new cancer cases, with a total of 99,780 people affected. This illness is mainly diagnosed by visual inspection of the skin; if doubts remain, a dermoscopic analysis is then performed. The development of effective non-invasive diagnostic tools for the early stages of the illness should increase quality of life and decrease the required economic resources.

    The early diagnosis of skin lesions remains a tough task even for expert dermatologists because of the complexity, variability, and dubiousness of the symptoms, and the similarities between the different categories of skin lesions. To achieve this goal, previous works have shown that early diagnosis from skin images can benefit greatly from computational methods. Several studies have applied handcrafted-feature-based methods to high-quality dermoscopic and histological images and, on top of that, machine learning techniques such as the k-nearest neighbors approach, support vector machines, and random forests. However, one must bear in mind that, although the prior extraction of handcrafted features incorporates an important knowledge base into the analysis, the quality of the extracted descriptors relies heavily on the contribution of experts, and lesion segmentation is also performed manually. These procedures share a common issue: they are time-consuming manual processes prone to errors. Furthermore, an explicit definition of an intuitive and interpretable feature is hardly achievable, since it depends on the pixel intensity space and, therefore, such features are not invariant to differences in the input images. On the other hand, the use of mobile devices has sharply increased, which offers an almost unlimited source of data.

    In the past few years, more and more attention has been paid to designing deep learning models for diagnosing melanoma, more specifically Convolutional Neural Networks. This type of model is able to extract and learn high-level features from raw images and/or other data without the intervention of experts. Several studies have shown that deep learning models can outperform handcrafted-feature-based methods and even match the predictive performance of dermatologists. The International Skin Imaging Collaboration encourages the development of methods for digital skin imaging; every year from 2016 to 2019, a challenge and a conference have been organized, in which more than 185 teams have participated. However, convolutional models present several issues for skin diagnosis. These models can fit a wide diversity of non-linear data points, being prone to overfitting on datasets with small numbers of training examples per class and, therefore, attaining a poor generalization capacity. In addition, this type of model is sensitive to certain characteristics of the data, such as large inter-class similarities and intra-class variances, variations in viewpoint, changes in lighting conditions, occlusions, and background clutter, which are mostly found in non-dermoscopic images. These issues represent challenges for the application of automatic diagnosis techniques in the early phases of the illness.

    As a consequence of the above, the aim of this Ph.D. thesis is to make significant contributions to the automatic diagnosis of melanoma. The proposals aim to avoid overfitting and improve the generalization capacity of deep models, as well as to achieve more stable learning and better convergence. Bear in mind that research into deep learning commonly requires overwhelming processing power in order to train complex architectures. For example, when developing the NASNet architecture, researchers used 500 NVIDIA P100 GPUs; each graphics unit cost between $5,899 and $7,374, which represents a total of between $2,949,500 and $3,687,000. Unfortunately, the majority of research groups, including ours, do not have access to such resources. In this Ph.D. thesis, the use of several techniques has been explored.

    First, an extensive experimental study was carried out, which included state-of-the-art models and methods to further increase their performance. Well-known techniques were applied, such as data augmentation and transfer learning. Data augmentation is performed in order to balance out the number of instances per category and to act as a regularizer that prevents overfitting in neural networks. Transfer learning, in turn, uses the weights of a model pre-trained on another task as the initial condition for the learning of the target network. Results demonstrate that the automatic diagnosis of melanoma is a complex task; however, different techniques are able to mitigate such issues to some degree. Finally, suggestions are given on how to train convolutional models for melanoma diagnosis, and interesting future research lines are presented.

    Next, the discovery of ensemble-based architectures is tackled by using genetic algorithms. The proposal is able to stabilize the training process. This is made possible by finding sub-optimal combinations of abstract features from the ensemble, which are used to train a convolutional block. Then, several predictive blocks are trained at the same time, and the final diagnosis is obtained by combining all individual predictions. We empirically investigate the benefits of the proposal, which shows better convergence, mitigates overfitting, and improves generalization performance. On top of that, the proposed model is available online and can be consulted by experts.

    The next proposal focuses on designing an advanced architecture capable of fusing classical convolutional blocks with a novel model known as Dynamic Routing Between Capsules. This approach addresses the limitations of convolutional blocks by using a set of neurons, instead of an individual neuron, to represent objects. Each capsule learns an implicit description of an object, such as its position, size, texture, deformation, and orientation. In addition, a hyper-tuning of the main parameters is carried out in order to ensure effective learning under limited training data. An extensive experimental study was conducted in which the fusion of both methods outperformed six state-of-the-art models.

    In addition, a robust method for melanoma diagnosis, inspired by residual connections and Generative Adversarial Networks, is proposed. The architecture is able to produce plausible, photorealistic synthetic 512 x 512 skin images, even with small dermoscopic and non-dermoscopic skin image datasets as problem domains. In this manner, the lack of data, the imbalance problems, and the overfitting issues are tackled.
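    A minimal sketch of a generative adversarial setup of the kind described above, used to synthesize additional skin images for training. For brevity it generates 64 x 64 images rather than the 512 x 512 images of the proposal, omits the residual connections, and uses a random placeholder batch standing in for a small dermoscopic dataset.

```python
# Hedged sketch: DCGAN-style generator/discriminator with a standard GAN loop.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),        # -> (3, 64, 64)
)

discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 8, 1, 0), nn.Flatten(),             # -> (batch, 1) logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, 3, 64, 64) * 2 - 1           # placeholder batch

for step in range(10):                                    # a few illustrative steps
    # Discriminator step: push real images toward 1, generated images toward 0.
    z = torch.randn(16, latent_dim, 1, 1)
    fake = generator(z).detach()
    d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
             bce(discriminator(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    z = torch.randn(16, latent_dim, 1, 1)
    g_loss = bce(discriminator(generator(z)), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```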
    Finally, several convolutional models are extensively trained and evaluated using the synthetic images, illustrating their effectiveness in the diagnosis of melanoma. In addition, a framework inspired by Active Learning is proposed. The batch-based query strategy proposed in this work enables a faster training process by learning about the complexity of the data. Such complexities allow the training process to be adjusted after each epoch, which leads the model to achieve better performance in fewer iterations compared to random mini-batch sampling. The training method is then assessed by analyzing both the informativeness value of each image and the predictive performance of the models. An extensive experimental study is conducted in which models trained with the proposal attain significantly better results than the baseline models. The findings suggest that there is still room for improvement in the diagnosis of skin lesions. Structured laboratory data, unstructured narrative data and, in some cases, audio or observational data are provided by radiologists as key points during the interpretation of a prediction. This is particularly true in the diagnosis of melanoma, where substantial clinical context is often essential. For example, symptoms such as itching, or several photographs of a skin lesion taken over a period of time showing that the lesion is growing, are very likely to suggest cancer. The use of different types of input data could therefore help to improve the performance of medical predictive models. In this regard, a first evolutionary algorithm aimed at exploring multimodal multiclass data has been proposed, which surpassed a single-input model. Furthermore, the predictive features extracted by primary capsules could be used to train other models, such as Support Vector Machines.
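    The sketch below illustrates an informativeness-driven training loop in the spirit of the batch-based query strategy described above: after every epoch, training examples are ranked by the entropy of the current predictions, and the most uncertain ones are visited first in the next epoch. The data, model, and ranking rule are illustrative placeholders, not the thesis's exact method.

```python
# Hedged sketch: entropy-based re-ordering of mini-batches after each epoch,
# as a simple stand-in for an informativeness-driven batch query strategy.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
X = torch.from_numpy(rng.normal(size=(1000, 20)).astype(np.float32))
y = torch.from_numpy((rng.normal(size=1000) + X[:, 0].numpy() > 0).astype(np.int64))

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
batch_size = 64
order = torch.randperm(len(X))                    # first epoch: random order

for epoch in range(5):
    model.train()
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        opt.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()
        opt.step()

    # Re-rank examples by predictive entropy (higher = more informative).
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(X), dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    order = torch.argsort(entropy, descending=True)
    print(f"epoch {epoch}: mean entropy = {entropy.mean():.3f}")
```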