246 research outputs found

    The Impact of Replacing Complex Hand-Crafted Features with Standard Features for Melanoma Classification using Both Hand-Crafted and Deep Features

    Melanoma is the deadliest form of skin cancer and the most rapidly spreading cancer in the world. Detected early, this cancer is curable; hence, early detection of melanoma is paramount. For this reason, a great deal of research is being done in this area, especially on the automatic detection of melanoma. In this paper, we propose an automatic melanoma detection system that utilizes a combination of deep and hand-crafted features. We analyze the impact of using a simple, standard hand-crafted feature in place of the complex hand-crafted features usually employed, e.g. shape, texture, diameter, or custom features. We use a convolutional neural network (CNN) known as the deep residual network (ResNet) to extract the deep features, and utilize the scale-invariant feature transform (SIFT) descriptor as the hand-crafted feature. The experiments revealed that adding SIFT did not improve the accuracy of the system; however, we obtained higher accuracy than state-of-the-art methods with our deep-only solution.
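
    As a rough sketch of this kind of hybrid deep + hand-crafted pipeline (not the authors' exact implementation: the backbone variant, preprocessing, and the mean-pooling of SIFT descriptors are all assumptions), the two feature branches could be combined like this:

```python
# Hypothetical sketch of a deep (ResNet) + hand-crafted (SIFT) feature pipeline.
import cv2
import numpy as np
import torch
from torchvision import models, transforms

# Pretrained ResNet-50 with the classification head removed -> 2048-d features.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

sift = cv2.SIFT_create()

def extract_features(bgr_image: np.ndarray) -> np.ndarray:
    """Concatenate ResNet deep features with mean-pooled SIFT descriptors."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        deep = resnet(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy()
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)
    # Mean-pool the variable number of 128-d SIFT descriptors (an assumption;
    # a bag-of-visual-words encoding is another common fixed-length choice).
    sift_vec = desc.mean(axis=0) if desc is not None else np.zeros(128)
    return np.concatenate([deep, sift_vec])
```

    The combined vector would then feed a conventional classifier; a bag-of-visual-words encoding of the SIFT descriptors, rather than mean pooling, is another common way to obtain a fixed-length hand-crafted vector.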

    Discriminative Representations for Heterogeneous Images and Multimodal Data

    Histology images of tumor tissue are an important diagnostic and prognostic tool for pathologists. Recently developed molecular methods group tumors into subtypes to further guide treatment decisions, but they are not routinely performed on all patients. A lower-cost and repeatable method to predict tumor subtypes from histology could bring benefits to more cancer patients. Further, combining imaging and genomic data types provides a more complete view of the tumor and may improve prognostication and treatment decisions. While molecular and genomic methods capture the state of only a small sample of the tumor, histological image analysis provides a spatial view and can identify multiple subtypes in a single tumor. This intra-tumor heterogeneity has yet to be fully understood, and its quantification may lead to future insights into tumor progression. In this work, I develop methods to learn appropriate features directly from images using dictionary learning or deep learning. I use multiple instance learning to account for intra-tumor variations in subtype during training, improving subtype predictions and providing insights into tumor heterogeneity. I also integrate image and genomic features to learn a projection to a shared space that is also discriminative. This method can be used for cross-modal classification, or to improve predictions from images by also learning from genomic data during training, even if only image data is available at test time.
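
    A minimal sketch of the multiple instance learning idea described above (the bag construction, max pooling, and layer sizes are assumptions, not the thesis's architecture): each slide is treated as a bag of patch features, and only the bag-level subtype label supervises training.

```python
# Hypothetical multiple instance learning (MIL) head for tumor subtyping.
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Scores each patch (instance) and max-pools to a bag-level prediction."""
    def __init__(self, feat_dim: int, n_subtypes: int):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_subtypes),
        )

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_patches, feat_dim) -- patch features from one slide.
        instance_logits = self.instance_scorer(bag)  # (n_patches, n_subtypes)
        bag_logits, _ = instance_logits.max(dim=0)   # the strongest patch decides
        return bag_logits                            # (n_subtypes,)

# Usage: one "bag" of 500 patch features, 4 hypothetical molecular subtypes.
model = MaxPoolingMIL(feat_dim=512, n_subtypes=4)
logits = model(torch.randn(500, 512))
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([2]))
```

    Because each patch receives its own logits before pooling, the instance scores can also be inspected to localize which regions drive the subtype call, which is one way intra-tumor heterogeneity becomes visible.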

    Medical Image Analysis using Deep Relational Learning

    In the past ten years, with the help of deep learning, and especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images is still a very challenging problem, and it has not been fully studied. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments, and our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
    Comment: arXiv admin note: substantial text overlap with arXiv:2007.0778
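
    The hierarchical network itself is not reproduced here, but a minimal sketch of the deep homography formulation it builds on (the widely used 4-point corner-offset parameterization; the backbone and shapes are assumptions) looks like this:

```python
# Hypothetical sketch of deep homography estimation between adjacent frames,
# using the common 4-point parameterization (not the thesis's exact network).
import cv2
import numpy as np
import torch
import torch.nn as nn

class HomographyNet(nn.Module):
    """Predicts the displacements of the 4 image corners between two frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 8)  # (dx, dy) for each of the 4 corners

    def forward(self, frame_pair: torch.Tensor) -> torch.Tensor:
        # frame_pair: (B, 2, H, W) -- two grayscale frames stacked as channels.
        return self.head(self.features(frame_pair).flatten(1))

def offsets_to_homography(offsets: np.ndarray, h: int, w: int) -> np.ndarray:
    """Convert predicted corner offsets into a 3x3 homography for warping."""
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = src + offsets.reshape(4, 2).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)
```

    Warping each frame by its estimated homography and blending the results gives the mosaic; the hierarchical variant refines this estimate over multiple scales.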

    Psoriasis Skin Disease Classification based on Clinical Images

    Psoriasis is an autoimmune skin disorder that causes skin plaques to develop into red and scaly patches. It affects millions of people globally. Dermatologists currently employ visual and haptic methods to determine the severity of the condition. Intelligent medical imaging-based diagnosis systems are now a possibility because of the relatively recent development of deep learning technologies for medical image processing, and such systems can help a human expert make better decisions about a patient's health. Meanwhile, convolutional neural networks (CNNs) have achieved imaging performance levels comparable to, if not better than, those of humans. In this paper, the Dermnet dataset is used. Image preprocessing, fuzzy c-means-based segmentation, MobileNet-based feature extraction, and support vector machine (SVM) classification are used for skin disease classification. Images of three skin conditions from the Dermnet dataset are studied: Psoriasis, Dermatofibroma, and Melanoma. Performance metrics such as accuracy, precision, recall, and F1-score are evaluated and compared for the three classes of skin diseases. Despite working with a smaller dataset, MobileNet with a support vector machine outperforms ResNet in terms of accuracy (99.12%), precision (98.65%), and recall (99.66%).
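
    A compressed sketch of this kind of CNN-features-plus-SVM pipeline (the fuzzy c-means segmentation step is omitted, and the layer choice and SVM hyperparameters are assumptions, not the paper's settings):

```python
# Hypothetical MobileNet-feature + SVM skin disease classification pipeline.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
mobilenet.classifier = torch.nn.Identity()  # keep the 1280-d pooled features
mobilenet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def features(images: list[np.ndarray]) -> np.ndarray:
    """Extract one MobileNet feature vector per (H, W, 3) RGB image."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        return mobilenet(batch).numpy()

# Fit the SVM on extracted features
# (labels: 0=Psoriasis, 1=Dermatofibroma, 2=Melanoma).
svm = SVC(kernel="rbf", C=1.0)
# svm.fit(features(train_images), train_labels)
# predictions = svm.predict(features(test_images))
```

    Decoupling the frozen CNN feature extractor from a classical SVM is a common choice when the dataset is too small to fine-tune the network end to end.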

    Automatic Diagnosis of Melanoma Using Modern Machine Learning Techniques

    The incidence and mortality rates of skin cancer remain a huge concern in many countries. According to the latest statistics on melanoma skin cancer, in the United States alone, 7,650 deaths are expected in 2022, which represents 800 and 470 more deaths than in 2020 and 2021, respectively. In 2022, melanoma ranks as the fifth leading cause of new cancer cases, with a total of 99,780 people affected. This illness is mainly diagnosed through a visual inspection of the skin; then, if doubts remain, a dermoscopic analysis is performed. The development of effective non-invasive diagnostic tools for the early stages of the illness should increase quality of life and decrease the economic resources required. The early diagnosis of skin lesions remains a tough task even for expert dermatologists because of the complexity, variability, and dubiousness of the symptoms, and the similarities between the different categories of skin lesions. To achieve this goal, previous works have shown that early diagnosis from skin images can benefit greatly from computational methods. Several studies have applied handcrafted-feature-based methods to high-quality dermoscopic and histological images and, on top of that, machine learning techniques such as the k-nearest neighbors approach, support vector machines, and random forests. However, one must bear in mind that although the prior extraction of handcrafted features incorporates an important knowledge base into the analysis, the quality of the extracted descriptors relies heavily on the contribution of experts. Lesion segmentation is also performed manually. The above procedures share a common issue: they are time-consuming manual processes prone to errors. Furthermore, an explicit definition of an intuitive and interpretable feature is hardly achievable, since it depends on the pixel intensity space and, therefore, such features are not invariant to differences in the input images. On the other hand, the use of mobile devices has sharply increased, offering an almost unlimited source of data. In the past few years, more and more attention has been paid to designing deep learning models for diagnosing melanoma, more specifically Convolutional Neural Networks. This type of model is able to extract and learn high-level features from raw images and/or other data without the intervention of experts. Several studies have shown that deep learning models can outperform handcrafted-feature-based methods and even match the predictive performance of dermatologists. The International Skin Imaging Collaboration encourages the development of methods for digital skin imaging. Every year from 2016 to 2019, a challenge and a conference have been organized, in which more than 185 teams have participated. However, convolutional models present several issues for skin diagnosis. These models can fit a wide diversity of non-linear data points, being prone to overfitting on datasets with small numbers of training examples per class and, therefore, attaining poor generalization capacity. Moreover, this type of model is sensitive to certain characteristics of the data, such as large inter-class similarities and intra-class variances, variations in viewpoint, changes in lighting conditions, occlusions, and background clutter, which are mostly found in non-dermoscopic images. These issues represent challenges for the application of automatic diagnosis techniques in the early phases of the illness. As a consequence of the above, the aim of this Ph.D. thesis is to make significant contributions to the automatic diagnosis of melanoma. The proposals aim to avoid overfitting and improve the generalization capacity of deep models, as well as to achieve more stable learning and better convergence.
    Bear in mind that research into deep learning commonly requires overwhelming processing power to train complex architectures. For example, when developing the NASNet architecture, researchers used 500 NVidia P100s; each graphics unit cost from $5,899 to $7,374, which represents a total of $2,949,500.00 - $3,687,000.00. Unfortunately, the majority of research groups, including ours, do not have access to such resources. In this Ph.D. thesis, the use of several techniques has been explored. First, an extensive experimental study was carried out, which included state-of-the-art models and methods to further increase performance. Well-known techniques were applied, such as data augmentation and transfer learning. Data augmentation is performed in order to balance the number of instances per category and to act as a regularizer that prevents overfitting in neural networks. Transfer learning, on the other hand, uses the weights of a model pre-trained on another task as the initial condition for the learning of the target network. Results demonstrate that the automatic diagnosis of melanoma is a complex task; however, different techniques are able to mitigate such issues to some degree. Finally, suggestions are given on how to train convolutional models for melanoma diagnosis, and interesting future research lines are presented.
    Next, the discovery of ensemble-based architectures is tackled using genetic algorithms. The proposal is able to stabilize the training process. This is made possible by finding sub-optimal combinations of abstract features from the ensemble, which are used to train a convolutional block. Then, several predictive blocks are trained at the same time, and the final diagnosis is obtained by combining all individual predictions. We empirically investigate the benefits of the proposal, which shows better convergence, mitigates the overfitting of the model, and improves generalization performance. On top of that, the proposed model is available online and can be consulted by experts.
    The next proposal focuses on designing an advanced architecture capable of fusing classical convolutional blocks with a novel model known as Dynamic Routing Between Capsules. This approach addresses the limitations of convolutional blocks by using a set of neurons, instead of an individual neuron, to represent objects. Each capsule learns an implicit description of an object, covering position, size, texture, deformation, and orientation. In addition, hyper-tuning of the main parameters is carried out in order to ensure effective learning under limited training data. An extensive experimental study was conducted in which the fusion of both methods outperformed six state-of-the-art models.
    Additionally, a robust method for melanoma diagnosis, inspired by residual connections and Generative Adversarial Networks, is proposed. The architecture is able to produce plausible photorealistic synthetic 512 x 512 skin images, even with small dermoscopic and non-dermoscopic skin image datasets as problem domains. In this manner, the lack of data, the imbalance problems, and the overfitting issues are tackled. Finally, several convolutional models are extensively trained and evaluated using the synthetic images, illustrating their effectiveness in the diagnosis of melanoma.
    In addition, a framework inspired by Active Learning is proposed. The batch-based query strategy proposed in this work enables a faster training process by learning about the complexity of the data. Such complexities allow us to adjust the training process after each epoch, which leads the model to achieve better performance in a lower number of iterations compared to random mini-batch sampling. The training method is then assessed by analyzing both the informativeness value of each image and the predictive performance of the models. An extensive experimental study is conducted, in which models trained with the proposal attain significantly better results than the baseline models.
    The findings suggest that there is still room for improvement in the diagnosis of skin lesions. Structured laboratory data, unstructured narrative data and, in some cases, audio or observational data are given by radiologists as key points during the interpretation of the prediction. This is particularly true in the diagnosis of melanoma, where substantial clinical context is often essential. For example, symptoms like itching, and several shots of a skin lesion over a period of time proving that the lesion is growing, are very likely to suggest cancer. The use of different types of input data could help improve the performance of medical predictive models. In this regard, a first evolutionary algorithm aimed at exploring multimodal multiclass data has been proposed, which surpassed a single-input model. Furthermore, the predictive features extracted by primary capsules could be used to train other models, such as Support Vector Machines.
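
    As an illustration of the batch-based query strategy described above, here is a minimal sketch (the informativeness measure used, per-sample loss, and the greedy batch ordering are assumptions, not the thesis's exact method):

```python
# Hypothetical batch-based query strategy: after each epoch, rank training
# images by informativeness (here, per-sample loss) and build the next
# epoch's mini-batches from the most informative samples first.
import numpy as np
import torch
import torch.nn.functional as F

def informativeness(model: torch.nn.Module,
                    images: torch.Tensor,
                    labels: torch.Tensor) -> np.ndarray:
    """Per-sample loss as a proxy for how much the model can learn from each image."""
    model.eval()
    with torch.no_grad():
        logits = model(images)
        losses = F.cross_entropy(logits, labels, reduction="none")
    return losses.numpy()

def query_batches(scores: np.ndarray, batch_size: int) -> list[np.ndarray]:
    """Order samples from most to least informative and split into mini-batches."""
    ranked = np.argsort(-scores)
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]

# Inside the training loop, after each epoch (model and data are placeholders):
# scores = informativeness(model, train_images, train_labels)
# for batch_idx in query_batches(scores, batch_size=32):
#     train_step(model, train_images[batch_idx], train_labels[batch_idx])
```

    Re-scoring after every epoch is what lets the sampling adapt to the model's evolving notion of which images are hard, in contrast to fixed random mini-batch sampling.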

    Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review

    Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using the advancements of deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While the results of these algorithms look promising, some challenges still persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests, or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
    Comment: 20 pages, 2 figures
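
    Most weakly supervised approaches in this space share a common skeleton: tile the slide, embed the tiles, and pool tile embeddings into one slide-level prediction. A minimal sketch using attention pooling, one common choice in the MIL literature (the dimensions and two-layer attention are assumptions, not any specific reviewed model):

```python
# Hypothetical attention-pooling head for weakly supervised WSI prediction:
# tile embeddings in, one slide-level biomarker probability out.
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    def __init__(self, feat_dim: int = 1024, hidden: int = 256):
        super().__init__()
        # Scores one attention weight per tile, then averages tiles by weight.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, tiles: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # tiles: (n_tiles, feat_dim) embeddings from one whole slide image.
        weights = torch.softmax(self.attention(tiles), dim=0)  # (n_tiles, 1)
        slide_embedding = (weights * tiles).sum(dim=0)         # (feat_dim,)
        logit = self.classifier(slide_embedding)               # slide-level score
        return logit, weights

head = AttentionMILHead()
logit, tile_weights = head(torch.randn(3000, 1024))
prob = torch.sigmoid(logit)  # e.g. probability of a molecular alteration
```

    The learned attention weights can double as a tile-level importance map, one common aid for the model explainability challenge noted above.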

    Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

    While medical imaging and general pathology are routine in cancer diagnosis, genetic sequencing is not always accessible due to the strong phenotypic and genetic heterogeneity of human cancers. Image-genomics integrates medical imaging and genetics to provide a complementary approach to optimise cancer diagnosis by associating tumour imaging traits with clinical data, and it has demonstrated its potential in identifying imaging surrogates for tumour biomarkers. However, existing image-genomics research has focused on quantifying tumour visual traits according to human understanding, which may not be optimal across different cancer types. The challenge hence lies in the extraction of optimised imaging representations in an objective, data-driven manner. Such an approach requires large volumes of annotated image data that are difficult to acquire. We propose a deep domain adaptation learning framework for associating image features with tumour genetic information, exploiting the ability of domain adaptation techniques to learn relevant image features from close knowledge domains. Our proposed framework leverages the current state of the art in image object recognition to provide image features that encode subtle variations of tumour phenotypic characteristics via domain adaptation techniques. The proposed framework was evaluated against the current state of the art in: (i) tumour histopathology image classification; and (ii) image-genomics associations. The proposed framework demonstrated improved accuracy of tumour classification, as well as providing additional data-derived representations of tumour phenotypic characteristics that exhibit strong image-genomics associations. This thesis advances image-genomics research and indicates its potential to reveal additional imaging surrogates for genetic biomarkers, which could facilitate cancer diagnosis.
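
    The framework's exact architecture is not given in the abstract; a minimal sketch of the underlying idea, transferring an ImageNet-trained object recognition backbone (a close knowledge domain) to tumour histopathology, follows, with the backbone choice, layer freezing, and class count all being assumptions:

```python
# Hypothetical domain adaptation by transfer from natural-image object
# recognition to tumour histopathology classification.
import torch
import torch.nn as nn
from torchvision import models

def build_adapted_model(n_tumour_classes: int) -> nn.Module:
    # Backbone pretrained on ImageNet object recognition: its texture and
    # shape features transfer to the histopathology image domain.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    # Freeze the early layers (generic edges/textures); adapt only the last
    # stage and a new classification head to the target domain.
    for name, param in model.named_parameters():
        if not name.startswith(("layer4", "fc")):
            param.requires_grad = False

    model.fc = nn.Linear(model.fc.in_features, n_tumour_classes)
    return model

model = build_adapted_model(n_tumour_classes=3)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

    The same adapted features could then be paired with genomic profiles to test for image-genomics associations, per the framework's second evaluation setting.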