867 research outputs found

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    Get PDF
    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.Comment: Submitted for publication. Comments welcome by email to first autho

    Deep Learning Methods for Improving Clinical Skills: Applications to Colonoscopy Diagnosis and Robotic Surgery Skill Assessment

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: Interdisciplinary Program in Bioengineering, College of Engineering, August 2020. Advisor: Hee Chan Kim.
    This thesis presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated. In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for distinguishing adenomatous polyps, which can develop into colorectal cancer, from hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained with colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping (Grad-CAM) technique was used to overlay the probabilistic basis of the prediction on the polyp location, to aid the endoscopists visually. To verify the improvement in diagnostic performance, the performance of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy improved and diagnosis time shortened significantly in all proficiency groups. In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component in the evaluation of surgical skill. Therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked throughout the video. In addition, algorithms for detecting the instrument tip position and the arm-indicator were developed to capture the movements of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed based on these metrics. These models were used to evaluate clinicians, and the results were similar to those of the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods. In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking, and the resulting improvement in clinical performance was evaluated and verified.
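    As a concrete illustration of the Grad-CAM step described above, here is a minimal sketch in PyTorch. The thesis model is not public, so a torchvision ResNet-50 stands in for the searched CNN and the input is a placeholder; every name here is illustrative, not the thesis implementation.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet50(weights=None)  # stand-in for the searched polyp CNN
    model.eval()
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["feat"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["feat"] = grad_out[0].detach()

    model.layer4.register_forward_hook(fwd_hook)        # last conv block
    model.layer4.register_full_backward_hook(bwd_hook)

    image = torch.randn(1, 3, 224, 224)                 # placeholder NBI polyp frame
    logits = model(image)
    model.zero_grad()
    logits[0, logits.argmax()].backward()               # gradient of predicted class

    # Channel weights = global average pool of the gradients (Grad-CAM).
    w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # `cam` is a [0, 1] heatmap that can be overlaid on the frame at the polyp location.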

    Attention Mechanisms in Medical Image Segmentation: A Survey

    Full text link
    Medical image segmentation plays an important role in computer-aided diagnosis. Attention mechanisms that distinguish important parts from irrelevant parts have been widely used in medical image segmentation tasks. This paper systematically reviews the basic principles of attention mechanisms and their applications in medical image segmentation. First, we review the basic concepts and formulation of the attention mechanism. Second, we survey over 300 articles related to medical image segmentation and divide them into two groups based on their attention mechanisms: non-Transformer attention and Transformer attention. In each group, we analyze the attention mechanisms in depth from three aspects based on the current literature, i.e., the principle of the mechanism (what to use), implementation methods (how to use), and application tasks (where to use). We also thoroughly analyze the advantages and limitations of their applications to different tasks. Finally, we summarize the current state of research and its shortcomings in the field, and discuss potential challenges for the future, including task specificity, robustness, standard evaluation, etc. We hope that this review can showcase the overall research context of traditional and Transformer attention methods, provide a clear reference for subsequent research, and inspire more advanced attention research, not only in medical image segmentation but also in other image analysis scenarios.
    Comment: Submitted to Medical Image Analysis, survey paper, 34 pages, over 300 references
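    For reference, both families surveyed here build on the same scaled dot-product formulation, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. A minimal PyTorch sketch, with shapes chosen purely for illustration:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        # q, k, v: (batch, seq_len, d_k); for segmentation, seq_len is typically
        # the number of image patches or flattened spatial positions.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        weights = torch.softmax(scores, dim=-1)  # attention map: (batch, seq, seq)
        return weights @ v                       # weighted sum of values

    q = k = v = torch.randn(2, 196, 64)          # e.g., a 14x14 patch grid, 64-dim features
    out = scaled_dot_product_attention(q, k, v)  # (2, 196, 64)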

    Deep Learning-based Solutions to Improve Diagnosis in Wireless Capsule Endoscopy

    Full text link
    Deep Learning (DL) models have gained extensive attention due to their remarkable performance in a wide range of real-world applications, particularly in computer vision. This achievement, combined with the increase in available medical records, has opened up new opportunities for analyzing and interpreting healthcare data. This symbiotic relationship can enhance the diagnostic process by identifying abnormalities, patterns, and trends, resulting in more precise, personalized, and effective healthcare for patients. Wireless Capsule Endoscopy (WCE) is a non-invasive medical imaging technique used to visualize the entire Gastrointestinal (GI) tract. Up to this moment, physicians meticulously review the captured frames to identify pathologies and diagnose patients. This manual process is time-consuming and prone to errors due to the challenges of interpreting the complex nature of WCE procedures; thus, it demands a high level of attention, expertise, and experience. To overcome these drawbacks, shorten the screening process, and improve the diagnosis, efficient and accurate DL methods are required. This thesis proposes DL solutions to the following problems encountered in the analysis of WCE studies: pathology detection, anatomical landmark identification, and Out-of-Distribution (OOD) sample handling. These solutions aim to achieve robust systems that minimize the duration of the video analysis and reduce the number of undetected lesions. Throughout their development, several DL drawbacks have appeared, including small and imbalanced datasets. These limitations have also been addressed, ensuring that they do not hinder the generalization of neural networks, which would lead to suboptimal performance and overfitting. To address the previous WCE problems and overcome the DL challenges, the proposed systems adopt various strategies that leverage Triplet Loss (TL) and Self-Supervised Learning (SSL) techniques. Mainly, TL has been used to improve the generalization of the models, while SSL methods have been employed to leverage the unlabeled data to obtain useful representations. The presented methods achieve state-of-the-art results on the aforementioned medical problems and contribute to the ongoing research to improve the diagnosis of WCE studies.
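    A minimal sketch of the Triplet Loss mentioned above, assuming PyTorch and random stand-in embeddings; the thesis networks and triplet sampling strategy are not reproduced here:

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=1.0):
        # Pull the anchor toward a same-class positive and push it away from a
        # different-class negative until the margin is satisfied.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    embed = lambda: F.normalize(torch.randn(8, 128), dim=1)  # stand-in WCE embeddings
    loss = triplet_loss(embed(), embed(), embed())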

    Convolutional Neural Networks for Gastric Landmark Detection

    Get PDF
    Gastric cancer is the fifth most incident cancer in the world and, when diagnosed at an advanced stage, its survival rate is only 5%-25%, so it is essential that the cancer is detected at an early stage. However, physicians specialized in this diagnosis have difficulty detecting early lesions during the diagnostic examination, esophagogastroduodenoscopy (EGD). Early lesions on the walls of the digestive system are nearly imperceptible and easily confused with the stomach mucosa, making them difficult to detect. In addition, physicians run the risk of not covering all areas of the stomach during diagnosis, including areas that may have lesions. The introduction of artificial intelligence into this diagnostic method may help to detect gastric cancer at an earlier stage. A system capable of monitoring all areas of the digestive system during EGD would be one way to prevent gastric cancer from being diagnosed only at advanced stages. This work focuses on the monitoring of upper gastrointestinal (GI) landmarks, anatomical areas of the digestive system that are more prone to the appearance of lesions and that allow better control of the areas missed during the EGD exam. The use of convolutional neural networks (CNNs) for GI landmark monitoring has been a major subject of study by the scientific community, since such networks have a good capacity to extract the features that best characterize EGD images. The aim of this work was to test new automatic algorithms, specifically CNN-based systems able to detect upper GI landmarks, to avoid blind spots during EGD and increase the quality of endoscopic exams. In contrast with related works in the literature, this work used upper GI landmark images closer to real-world environments. In particular, the images for each anatomical landmark class include both examples affected by pathologies and healthy tissue. We tested pre-trained architectures such as ResNet-50, DenseNet-121, and VGG-16. For each pre-trained architecture, we tested different learning approaches, including the use of class weights (CW), the use of batch normalization and dropout layers, and the use of data augmentation to train the network. The CW ResNet-50 achieved an accuracy of 71.79% and a Matthews Correlation Coefficient (MCC) of 65.06%. In current state-of-the-art studies, only supervised learning approaches were used to classify EGD images; in our work, by contrast, we also tested the use of unsupervised learning to increase classification performance. In particular, we used convolutional autoencoder architectures to extract representative features from unlabeled GI images and concatenated their outputs with the CW ResNet-50 architecture. We achieved an accuracy of 72.45% and an MCC of 65.08%.
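    The "CW" ingredient above amounts to weighting the cross-entropy loss inversely to class frequency. A minimal sketch, assuming PyTorch; the class counts are hypothetical, not the dataset's statistics:

    import torch
    import torch.nn as nn

    class_counts = torch.tensor([1200., 300., 450., 150.])  # hypothetical per-class counts
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    criterion = nn.CrossEntropyLoss(weight=weights)          # rare classes weigh more

    logits = torch.randn(16, 4)               # batch of 16, 4 landmark classes
    targets = torch.randint(0, 4, (16,))
    loss = criterion(logits, targets)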

    Algorithms and Applications of Novel Capsule Networks

    Get PDF
    Convolutional neural networks, despite their profound impact in countless domains, suffer from significant shortcomings. Linearly combined scalar feature representations and max pooling operations lead to spatial ambiguities and a lack of robustness to pose variations. Capsule networks can potentially alleviate these issues by storing and routing the pose information of extracted features through their architectures, seeking agreement between lower-level predictions of higher-level poses at each layer. In this dissertation, we make several key contributions to advance the algorithms of capsule networks in segmentation and classification applications. We create the first capsule-based segmentation network in the literature, SegCaps, by introducing a novel locally-constrained dynamic routing algorithm, transformation matrix sharing, the concept of a deconvolutional capsule, an extension of the reconstruction regularization to segmentation, and a new encoder-decoder capsule architecture. Following this, we design a capsule-based diagnosis network, D-Caps, which builds on SegCaps and introduces a novel capsule-average pooling technique to handle larger medical imaging data. Finally, we design an explainable capsule network, X-Caps, which encodes high-level visual object attributes within its capsules by utilizing a multi-task framework and a novel routing sigmoid function that independently routes information from child capsules to parents. Predictions come with human-level explanations, via object attributes, and a confidence score, obtained by training our network directly on the distribution of expert labels, modeling inter-observer agreement and penalizing over- and under-confidence during training. This body of work constitutes significant algorithmic advances in the application of capsule networks, especially to real-world biomedical imaging data
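    A simplified sketch of the routing-by-agreement at the heart of these architectures (after Sabour et al., 2017), in PyTorch; SegCaps' locally-constrained variant restricts this routing to spatial windows, which is omitted here:

    import torch

    def squash(s, dim=-1, eps=1e-8):
        # Shrink vectors so their length lies in [0, 1) while keeping direction.
        n2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1 + n2)) * s / torch.sqrt(n2 + eps)

    def dynamic_routing(u_hat, iterations=3):
        # u_hat: (batch, n_child, n_parent, d_parent), the child capsules'
        # predictions of each parent capsule's pose.
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
        for _ in range(iterations):
            c = torch.softmax(b, dim=2).unsqueeze(-1)       # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))              # candidate parent poses
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)    # reward agreement
        return v

    u_hat = torch.randn(2, 32, 10, 16)   # 32 child capsules, 10 parents, 16-D poses
    parents = dynamic_routing(u_hat)     # (2, 10, 16)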

    Uncertainty, interpretability and dataset limitations in Deep Learning

    Full text link
    Deep Learning (DL) has gained traction in the last years thanks to the exponential increase in compute power. New techniques and methods are published on a daily basis, and records are being set across multiple disciplines. Undeniably, DL has brought a revolution to the machine learning field and to our lives. However, not everything has been resolved and some considerations must be taken into account. For instance, obtaining uncertainty measures and bounds is still an open problem. Models should be able to capture and express the confidence they have in their decisions, and Artificial Neural Networks (ANNs) are known to be lacking in this regard. Be it through out-of-distribution samples, adversarial attacks, or simply unrelated or nonsensical inputs, ANN models demonstrate an unfounded tendency to still output high probabilities. Likewise, interpretability remains an unresolved question. Some fields not only need, but rely on, the ability to provide human interpretations of the decision process of models. ANNs, and especially deep models trained with DL, are hard to reason about. Last but not least, models are becoming deeper and more complex, and to cope with the increasing number of parameters, datasets are required to be of higher quality and, usually, larger. Not all research, and even fewer real-world applications, can keep up with these increasing demands. Taking the previous issues into account, the main aim of this thesis is to provide methods and frameworks to tackle each of them. These approaches should be applicable to any suitable field and dataset, and they are employed with real-world datasets as proofs of concept. First, we propose a method that provides interpretability of the results through uncertainty measures. The model in question is capable of reasoning about the uncertainty inherent in the data and leverages that information to progressively refine its outputs. In particular, the method is applied to land cover segmentation, a classification task that aims to assign a type of land to each pixel in satellite images. The dataset and application serve to prove that the final uncertainty bound enables the end user to reason about the possible errors in the segmentation result. Second, Recurrent Neural Networks are used to create models robust to deficient datasets, in terms of both size and class balance. We apply them to two different fields: road extraction in satellite images and Wireless Capsule Endoscopy (WCE). The former demonstrates that contextual information in the temporal axis of the data can be used to create models that achieve results comparable to the state of the art while being less complex. The latter, in turn, proves that contextual information for polyp detection can be crucial to obtain models that generalize better and reach higher performance. Last, we propose two methods to leverage unlabeled data in the model creation process. Datasets are often easier to obtain than to label, which results in many wasted opportunities with traditional classification approaches. Our approaches, based on self-supervised learning, result in a novel contrastive loss that is capable of extracting meaningful information out of pseudo-labeled data. Applying both methods to WCE data proves that the extracted inherent knowledge creates models that perform better on extremely unbalanced datasets and under a lack of data.
To summarize, this thesis demonstrates potential solutions for obtaining uncertainty bounds, providing reasonable explanations of the outputs, and combating the lack of data or unbalanced datasets. Overall, the presented methods have a positive impact on the DL field and could have a real and tangible effect on society.
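    One standard way to obtain the kind of per-pixel uncertainty the thesis argues for is Monte Carlo dropout; the sketch below is a generic stand-in, not the thesis model or its estimator:

    import torch
    import torch.nn as nn

    model = nn.Sequential(                    # toy stand-in for a segmentation net
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Dropout2d(0.5),
        nn.Conv2d(16, 5, 1),                  # e.g., 5 land-cover classes
    )

    def mc_dropout_predict(model, x, samples=20):
        model.train()                         # keep dropout active at inference
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=1) for _ in range(samples)])
        mean = probs.mean(0)                                # predictive mean
        entropy = -(mean * (mean + 1e-8).log()).sum(dim=1)  # predictive entropy
        return mean, entropy                  # high entropy = uncertain pixels

    x = torch.randn(1, 3, 64, 64)             # placeholder satellite tile
    mean, uncertainty = mc_dropout_predict(model, x)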

    Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions

    Full text link
    Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, `Medical Imaging with Deep Learning', in 2018. This article surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks, and further sub-categorize it following a taxonomy based on human anatomy. This article does not assume prior knowledge of Deep Learning and makes a significant contribution by explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the `lack of appropriately annotated large-scale datasets' as the core challenge (among other challenges) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition, and Machine Learning, where the techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future
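    One of the matured remedies the survey points to for the annotated-data bottleneck is transfer learning from natural images. A minimal sketch, assuming torchvision and a hypothetical two-class medical task:

    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone and freeze its generic features.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False
    # Replace the head for the target task (e.g., lesion vs. normal) and
    # fine-tune only model.fc on the small annotated medical dataset.
    model.fc = nn.Linear(model.fc.in_features, 2)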