158 research outputs found

    Dermoscopic dark corner artifacts removal: Friend or foe?

    Get PDF
    Background and Objectives: One of the more significant obstacles in the classification of skin cancer is the presence of artifacts. This paper investigates the effect of dark corner artifacts, which result from the use of dermoscopes, on the performance of a deep learning binary classification task. Previous research attempted to remove and inpaint dark corner artifacts with the intention of creating ideal conditions for models; however, such research has been inconclusive due to a lack of available datasets with corresponding labels for dark corner artifact cases. Methods: To address these issues, we label 10,250 skin lesion images from publicly available datasets and introduce a balanced dataset with an equal number of melanoma and non-melanoma cases. The training set comprises 6126 images without artifacts, and the testing set comprises 4124 images with dark corner artifacts. We conduct three experiments to provide new understanding of the effects of dark corner artifacts, including inpainted and synthetically generated examples, on a deep learning method. Results: Our results suggest that superimposing synthetic dark corner artifacts onto the training set improved model performance, particularly the true negative rate. This indicates that, when dark corner artifacts were introduced into the training set, the model learnt to ignore them rather than treating them as melanoma. Further, we propose a new approach to quantifying heatmaps of network focus using a root mean square measure of the brightness intensity in the different regions of the heatmaps. Conclusions: The proposed artifact methods can be used in future experiments to help alleviate possible impacts on model performance. Additionally, the newly proposed heatmap quantification analysis will help to better understand the relationships between heatmap results and other model performance metrics.
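    The paper describes its heatmap quantification only as a root mean square of brightness per region. A minimal sketch of that idea, assuming the heatmap arrives as a 2D NumPy array and using a hypothetical corner-versus-centre region split (the paper's exact region definitions are not given here):

```python
import numpy as np

def region_rms(heatmap, mask):
    """Root mean square of heatmap brightness inside a boolean region mask."""
    values = heatmap[mask]
    return float(np.sqrt(np.mean(values ** 2)))

def corner_and_centre_masks(shape, corner_frac=0.25):
    """Hypothetical region split: four corner squares versus everything else."""
    h, w = shape
    ch, cw = int(h * corner_frac), int(w * corner_frac)
    corners = np.zeros(shape, dtype=bool)
    corners[:ch, :cw] = True    # top-left
    corners[:ch, -cw:] = True   # top-right
    corners[-ch:, :cw] = True   # bottom-left
    corners[-ch:, -cw:] = True  # bottom-right
    return corners, ~corners

heatmap = np.random.rand(224, 224)  # stand-in for a normalised network-focus map
corners, centre = corner_and_centre_masks(heatmap.shape)
print(region_rms(heatmap, corners), region_rms(heatmap, centre))
```

    Under this reading, a model that has learnt to ignore dark corners would be expected to show a low corner RMS relative to the centre RMS.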

    Medical Image Analysis using Deep Relational Learning

    Full text link
    In the past ten years, with the help of deep learning, and especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images remains a challenging and under-studied problem. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multi Modal Brain Tumor Segmentation 2017 (BraTS2017) and Multi Modal Brain Tumor Segmentation 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, where our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
    Comment: arXiv admin note: substantial text overlap with arXiv:2007.0778
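    For readers unfamiliar with homography-based mosaicing, a classical feature-based pipeline (ORB features plus RANSAC in OpenCV) can stand in for the learned hierarchical estimator; everything below is an illustrative baseline, not the thesis's network:

```python
import cv2
import numpy as np

def estimate_homography(frame_a, frame_b):
    """Estimate the 3x3 homography mapping frame_b into frame_a's coordinates."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Pairwise estimates chain into mosaic coordinates by matrix product:
# H(0<-t) = H(0<-1) @ H(1<-2) @ ... @ H(t-1<-t)
```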

    Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review

    Full text link
    Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but these techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed the use of deep generative models to generate more realistic and diverse data that conform to the true distribution of the data. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review of the use of deep generative models for medical image augmentation and to highlight the potential of these models for improving the performance of deep learning algorithms in medical image analysis.
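    Whatever generative family is chosen, the downstream augmentation pattern the review discusses is the same: sample synthetic images from a trained generator and mix them into the real training set. A generic PyTorch sketch; `generator` and its `latent_dim` attribute are placeholder assumptions for any trained VAE decoder, GAN generator, or diffusion sampler:

```python
import torch

def augment_with_synthetic(real_images, real_labels, generator, label, n_synth):
    """Append n_synth generated images of a given class to the real training set."""
    z = torch.randn(n_synth, generator.latent_dim)  # latent_dim: assumed attribute
    with torch.no_grad():
        fake_images = generator(z)  # assumed to match real_images' shape
    images = torch.cat([real_images, fake_images])
    labels = torch.cat([real_labels,
                        torch.full((n_synth,), label, dtype=real_labels.dtype)])
    perm = torch.randperm(len(images))  # shuffle real and synthetic together
    return images[perm], labels[perm]
```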

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Full text link
    Final Degree Project in Biomedical Engineering. Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year: 2022-2023. Tutors/Directors: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep.
    Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit reliable identification of the skin lesion by visual examination due to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist physicians' diagnosis when determining the lesion's region and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression. To address this concern, the present project provides a state-of-the-art review of the most predominant conventional segmentation models for skin lesion segmentation, together with a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but many drawbacks arise when employing them for dermatological disorders due to the high level of artefacts present in the acquired images. In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an intensity-based automatic algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to their further implementation in clinical training. The proposed methods and the obtained results are based on a publicly available skin lesion image database.
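    As a concrete reference for one of the selected techniques, here is a minimal OpenCV sketch of the GrabCut step, initialised from a rectangle around the lesion; this is a generic baseline, not the project's combined GrabCut/k-means algorithm, and the file name is hypothetical:

```python
import cv2
import numpy as np

def grabcut_lesion(image_bgr, rect):
    """Segment the lesion inside `rect` with GrabCut; returns a binary mask."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)

img = cv2.imread("lesion.jpg")  # hypothetical dermoscopic image
seg = grabcut_lesion(img, (10, 10, img.shape[1] - 20, img.shape[0] - 20))
```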

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Full text link
    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussion on future research directions to conclude this survey.
    Comment: Under Review
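    Of the families the survey covers, adversarial feature alignment is perhaps the most compact to illustrate: a gradient reversal layer (as in DANN-style methods) lets a domain classifier push the feature extractor towards domain-invariant representations. A minimal PyTorch sketch:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient for lam

def grad_reverse(x, lam=1.0):
    # Insert between the feature extractor and the domain classifier:
    # minimising the domain loss then maximises domain confusion upstream.
    return GradReverse.apply(x, lam)
```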

    Broadening the Horizon of Adversarial Attacks in Deep Learning

    Get PDF
    152 p.
    Machine learning models such as deep neural networks are currently at the core of a wide range of technologies applied to critical tasks, such as facial recognition or autonomous driving, in which both predictive capability and reliability are fundamental requirements. However, these models can be easily fooled by inputs manipulated in ways imperceptible to humans, known as adversarial examples, which constitutes a security breach that an attacker can exploit for illicit purposes. Since these vulnerabilities directly affect the integrity and reliability of many systems that are progressively being deployed in real-world applications, it is crucial to determine their scope in order to guarantee a more responsible, informed, and safe use of those systems. For these reasons, the main objective of this doctoral thesis is to investigate new notions of adversarial attacks and vulnerabilities in deep neural networks. As a result of this research, this thesis presents new attack paradigms that exceed or extend the capabilities of the methods currently available in the literature, in that they can achieve more general, complex, or ambitious objectives. At the same time, it exposes new security breaches in use cases and scenarios in which the consequences of adversarial attacks had not previously been investigated. Our work also sheds light on different properties of these models that make them more vulnerable to adversarial attacks, contributing to a better understanding of these phenomena.
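    For orientation, the canonical way to craft the imperceptibly manipulated inputs described above is the fast gradient sign method (FGSM); this is a textbook baseline, not one of the thesis's new attack paradigms:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM attack: perturb x by eps in the gradient-sign direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # maximise the loss locally
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image
```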

    Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks

    Full text link
    Robustness has become an important consideration in deep learning. With the help of explainable AI, mismatches between an explained model's decision strategy and the user's domain knowledge (e.g. Clever Hans effects) have been identified as a starting point for improving faulty models. However, it is less clear what to do when the user and the explanation agree. In this paper, we demonstrate that acceptance of explanations by the user is not a guarantee that a machine learning model is robust against Clever Hans effects, which may remain undetected. Such hidden flaws of the model can nevertheless be mitigated, and we demonstrate this by contributing a new method, Explanation-Guided Exposure Minimization (EGEM), that preemptively prunes variations in the ML model that have not been the subject of positive explanation feedback. Experiments demonstrate that our approach leads to models that strongly reduce their reliance on hidden Clever Hans strategies and consequently achieve higher accuracy on new data.
    Comment: 18 pages + supplement
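    As a toy illustration of the exposure-minimisation idea only (not the paper's EGEM algorithm), one can hard-prune a model's access to input features whose explanations never received positive user feedback; the `endorsed` mask is a hypothetical stand-in for that feedback:

```python
import torch
from torch import nn

def prune_unendorsed_features(linear: nn.Linear, endorsed: torch.Tensor):
    """Zero first-layer weights for features lacking positive explanation feedback."""
    with torch.no_grad():
        linear.weight[:, ~endorsed] = 0.0  # the model can no longer rely on them
```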

    Melanoma and nevi subtype histopathological characterization with optical coherence tomography

    Get PDF
    Background: Melanoma incidence has continued to rise in recent decades, and the forecast is not optimistic. Non-invasive diagnostic imaging techniques such as optical coherence tomography (OCT) are widely studied; however, there is still no agreement on their use for the diagnosis of melanoma. For dermatologists, the differentiation of non-invasive (junctional nevus, compound nevus, intradermal nevus, and melanoma in-situ) versus invasive (superficial spreading melanoma and nodular melanoma) lesions is the key issue in their daily routine. Methods: This work performs a comparative analysis of OCT images using haematoxylin-eosin (HE) staining and anatomopathological features identified by a pathologist. Then, optical and textural properties are extracted from OCT images with the aim of identifying subtle features that could potentially maximize the usefulness of the imaging technique in assessing the lesion's potential invasiveness. Results: Preliminary features reveal differences discriminating melanoma in-situ from superficial spreading melanoma, and also between melanoma and nevus subtypes, which poses a promising baseline for further research. Conclusions: Answering the final goal of diagnosing non-invasive versus invasive lesions with OCT does not seem feasible in the short term, but the obtained results demonstrate a step forward towards achieving this.
    This work has been funded by the Department of Economic Development, Sustainability and the Environment of the Basque Government (Spain) ELKARTEK project ONKOTOOLS with grant number KK-2020/00069, the Spanish Ministry of Science and Education CERVERA project AI4ES with grant number CER-20211030, and by the ECSEL JU European project ASTONISH with grant number 692470, UC Industrial Doctorate DI14.
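    The paper does not list its exact feature set, but a common choice for textural descriptors on a greyscale OCT B-scan is grey-level co-occurrence matrix (GLCM) statistics; a generic scikit-image sketch, assuming a uint8 image:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(oct_image):
    """GLCM texture statistics for a uint8 greyscale B-scan."""
    glcm = graycomatrix(oct_image, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the sampled distances and angles.
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```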

    Understanding deep learning

    Get PDF
    Deep neural networks have reached impressive performance in many tasks in computer vision and its applications. However, research into understanding deep neural networks is hard to evaluate: since it is unknown which features a deep neural network uses, it is difficult to empirically verify whether a claim about which feature the network uses is correct. The state-of-the-art for understanding which features a deep neural network uses to reach its prediction is saliency maps. However, all methods built on saliency maps share shortcomings that open a gap between the current state-of-the-art and the requirements for understanding deep neural networks. This work describes a method that does not suffer from these shortcomings. To this end, we employ the framework of causal modeling to determine whether a feature is used by the neural network. We present theoretical evidence that our method is able to correctly identify whether a feature is used. Furthermore, we provide two studies as empirical evidence. First, we show that our method can further the understanding of automatic skin lesion classifiers. There, we find that some of the features in the ABCD rule are used by the classifiers to identify melanoma but not to identify seborrheic keratosis. In contrast, all classifiers rely heavily on bias variables, particularly the age of the patient and the existence of colorful patches in the input image. Second, we apply our method to adversarial debiasing, in which we want to stop a neural network from using a known bias variable. We demonstrate in a toy example and an example on real-world images that our approach outperforms the state-of-the-art in adversarial debiasing.
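    The causal flavour of the evaluation can be illustrated with a simple interventional probe: alter a candidate feature and measure how much the classifier's output shifts. A hedged sketch, in which the grey-out intervention and all names are assumptions for illustration:

```python
import torch

def intervention_effect(model, x, region_mask):
    """Mean absolute change in class probabilities after neutralising a region.

    x: image batch of shape (N, C, H, W); region_mask: boolean (H, W) mask
    marking the candidate feature, e.g. a colorful patch.
    """
    with torch.no_grad():
        base = model(x).softmax(-1)
        x_do = x.clone()
        x_do[:, :, region_mask] = x_do[:, :, region_mask].mean()  # do-intervention
        after = model(x_do).softmax(-1)
    return (base - after).abs().mean().item()
```

    A near-zero effect would suggest the feature is not used, while a large effect would suggest the model relies on it.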