
    Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images

    We propose a novel attention gate (AG) model for medical image analysis that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules when using convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN models such as VGG or U-Net architectures with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed AG models are evaluated on a variety of tasks, including medical image classification and segmentation. For classification, we demonstrate the use case of AGs in scan plane detection for fetal ultrasound screening. We show that the proposed attention mechanism can provide efficient object localisation while improving the overall prediction performance by reducing false positives. For segmentation, the proposed architecture is evaluated on two large 3D CT abdominal datasets with manual annotations for multiple organs. Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency. Moreover, AGs guide the model activations to be focused around salient regions, which provides better insights into how model predictions are made. The source code for the proposed AG models is publicly available.

    Comment: Accepted for Medical Image Analysis (Special Issue on Medical Imaging with Deep Learning). arXiv admin note: substantial text overlap with arXiv:1804.03999, arXiv:1804.0533
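
    To make the mechanism concrete, here is a minimal sketch of an additive attention gate on a 2D U-Net skip connection, along the lines the abstract describes. It is written in PyTorch; the class name, channel arguments, and the bilinear-upsampling choice are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate (sketch).

    x: skip-connection features from the encoder (high resolution)
    g: gating signal from a coarser decoder stage
    """
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # Project both inputs into a common intermediate space.
        theta = self.theta_x(x)                      # (B, C_int, H, W)
        phi = self.phi_g(g)                          # (B, C_int, h, w)
        # Upsample the coarser gating signal to the skip resolution.
        phi = F.interpolate(phi, size=theta.shape[2:], mode="bilinear",
                            align_corners=False)
        # Additive attention: coefficients in [0, 1] per spatial location.
        alpha = torch.sigmoid(self.psi(F.relu(theta + phi)))
        # Suppress irrelevant regions of the skip features.
        return x * alpha
```

    In a U-Net, `x` would be the encoder skip features and `g` the coarser decoder features; the gated tensor replaces the raw skip connection, so irrelevant regions are attenuated before concatenation.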

    Integration of evidence across human and model organism studies: A meeting report.

    The National Institute on Drug Abuse and Joint Institute for Biological Sciences at the Oak Ridge National Laboratory hosted a meeting attended by a diverse group of scientists with expertise in substance use disorders (SUDs), computational biology, and FAIR (Findability, Accessibility, Interoperability, and Reusability) data sharing. The meeting's objective was to discuss and evaluate better strategies to integrate genetic, epigenetic, and 'omics data across human and model organisms to achieve deeper mechanistic insight into SUDs. Specific topics were to (a) evaluate the current state of substance use genetics and genomics research and fundamental gaps, (b) identify opportunities and challenges of integration and sharing across species and data types, (c) identify current tools and resources for integration of genetic, epigenetic, and phenotypic data, (d) discuss steps and impediments related to data integration, and (e) outline future steps to support more effective collaboration, particularly between animal model research communities and human genetics and clinical research teams. This review summarizes key facets of this catalytic discussion with a focus on new opportunities and gaps in resources and knowledge on SUDs.

    Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays

    Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare. However, the impact of AI-generated advice on physicians’ decision-making is underexplored. In this study, physicians received X-rays with correct diagnostic advice and were asked to make a diagnosis, rate the advice’s quality, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologist. Overall, receiving annotated advice from an AI resulted in the highest diagnostic accuracy. Physicians rated the quality of AI advice higher than human advice. We did not find a strong effect of either manipulation on participants’ confidence. The magnitude of the effects varied between task experts and non-task experts, with the latter benefiting considerably from correct explainable AI advice. These findings raise important considerations for the deployment of diagnostic advice in healthcare.

    Producing Decisions and Explanations: A Joint Approach Towards Explainable CNNs

    Deep Learning models, in particular Convolutional Neural Networks, have become the state of the art in different domains, such as image classification, object detection, and other computer vision tasks. However, despite their overwhelming predictive performance, they are still, for the most part, considered black boxes, making it difficult to understand the reasoning behind their decisions. As such, and with the growing interest in deploying such models in real-world scenarios, the need for explainable systems has arisen. This dissertation addresses that growing need by proposing a novel CNN architecture composed of an explainer and a classifier. The network, trained end-to-end, constitutes an in-model explainability method that outputs not only decisions but also visual explanations of what the network focuses on to produce those decisions.
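
    A minimal sketch of how such a joint explainer-classifier might be wired together, assuming the explainer yields a per-pixel relevance map that gates the classifier's input so the explanation sits on the decision path. All layer sizes, names, and the sparsity-regularized loss below are illustrative guesses from the abstract, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class ExplainerClassifier(nn.Module):
    """Joint explainer + classifier trained end-to-end (sketch)."""
    def __init__(self, num_classes):
        super().__init__()
        self.explainer = nn.Sequential(           # tiny stand-in encoder-decoder
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # relevance map in [0, 1]
        )
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        expl = self.explainer(x)              # (B, 1, H, W) visual explanation
        logits = self.classifier(x * expl)    # decision conditioned on the map
        return logits, expl

def joint_loss(logits, expl, target, lam=1e-4):
    # Classification term plus an L1 sparsity term that keeps maps focused;
    # the regularizer and its weight are assumptions for this sketch.
    return nn.functional.cross_entropy(logits, target) + lam * expl.abs().mean()
```

    Because the classifier only sees the input through the relevance map, the map is forced to retain the evidence the decision actually uses, rather than being a post-hoc attribution.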

    Domain Generalization for Medical Image Analysis: A Survey

    Medical Image Analysis (MedIA) has become an essential tool in medicine and healthcare, aiding in disease diagnosis, prognosis, and treatment planning, and recent successes in deep learning (DL) have contributed significantly to its advances. However, DL models for MedIA remain challenging to deploy in real-world situations, failing to generalize under the distributional gap between training and testing samples, a problem known as distribution shift. Researchers have dedicated their efforts to developing various DL methods that adapt to and perform robustly on unknown and out-of-distribution data. This paper comprehensively reviews domain generalization studies specifically tailored to MedIA. We provide a holistic view of how domain generalization techniques interact within the broader MedIA system, going beyond methodologies to consider the operational implications for the entire MedIA workflow. Specifically, we categorize domain generalization methods into data-level, feature-level, model-level, and analysis-level methods. We show how these methods can be used at various stages of the DL-equipped MedIA workflow, from data acquisition to model prediction and analysis. Furthermore, we include benchmark datasets and applications used to evaluate these approaches, and we analyze the strengths and weaknesses of various methods, unveiling future research opportunities.
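
    As a concrete instance of the survey's data-level category, here is a minimal sketch of appearance augmentation meant to reduce a model's reliance on scanner- or site-specific intensity statistics. The function name, gamma range, and noise level are illustrative assumptions, and inputs are assumed to be intensity-normalized to [0, 1].

```python
import torch

def appearance_augment(img: torch.Tensor,
                       gamma_range=(0.7, 1.5),
                       noise_std=0.05) -> torch.Tensor:
    """Data-level domain generalization (sketch): randomize image appearance
    so the model cannot latch onto site-specific contrast or noise."""
    # Random gamma shift simulates contrast differences between scanners.
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    img = img.clamp(0, 1) ** gamma
    # Additive Gaussian noise simulates acquisition-noise differences.
    img = img + noise_std * torch.randn_like(img)
    return img.clamp(0, 1)
```

    Applied during training, such transforms widen the effective source distribution so that a target site's appearance is more likely to fall inside it.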

    Generative Adversarial Networks for Annotating Images of Otoliths

    This thesis explores the use of generative adversarial networks (GANs) for annotating images of otoliths to determine the age of fish. The proposed solution not only provides accurate age determinations but also visual representations of the otolith images with growth rings marked with dots, making it applicable as explainable artificial intelligence. The convolutional neural network models I propose are based on Pix2Pix GANs and Wasserstein GANs, with the latter proving the more successful in my experiments. The successful models achieve accuracies of 82.8% and 81.5% in age determination, within an offset of ±2 from the true ages in the dataset.
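
    For reference, a minimal sketch of the Wasserstein GAN objectives in this image-to-image annotation setting, assuming a Pix2Pix-style critic that scores (otolith, annotation) pairs concatenated along the channel dimension. The function names, the pairing scheme, and the omission of the Lipschitz constraint (weight clipping or gradient penalty) are simplifications, not the thesis's implementation.

```python
import torch

def critic_loss(critic, src, real_annot, fake_annot):
    """WGAN critic objective (sketch): score real (otolith, annotation)
    pairs above generated ones. The critic minimizes this quantity."""
    real = critic(torch.cat([src, real_annot], dim=1)).mean()
    fake = critic(torch.cat([src, fake_annot], dim=1)).mean()
    return fake - real

def generator_loss(critic, src, fake_annot):
    # The generator tries to make its dot annotations look real to the critic.
    return -critic(torch.cat([src, fake_annot], dim=1)).mean()
```

    The critic's real-versus-fake score gap approximates the Wasserstein distance between the two pair distributions, which is what makes the loss a usable training signal even when the distributions barely overlap.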