340 research outputs found

    Computer-aided diagnosis through medical image retrieval in radiology.

    Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this context, medical content-based image retrieval systems can be extremely useful by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most visually similar image, which is not equivalent to finding the image that is most similar in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large, publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to estimate inter-rater variability and to establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
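
    The abstract evaluates retrieval rankings with the normalized Discounted Cumulative Gain (nDCG). As an illustrative aside, not the authors' code, here is a minimal sketch of the metric, assuming hypothetical disease-severity grades assigned by a reference radiologist to the retrieved images:

```python
import numpy as np

def dcg(relevances):
    """Discounted Cumulative Gain of a ranked list of relevance grades."""
    rel = np.asarray(relevances, dtype=float)
    positions = np.arange(1, len(rel) + 1)
    # Gain formulation (2^rel - 1); rel / log2(pos + 1) is an equally common variant.
    return np.sum((2.0 ** rel - 1.0) / np.log2(positions + 1))

def ndcg(retrieved_relevances, k=None):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking.

    Simplification: the ideal ranking is derived from the same retrieved list;
    a full evaluation would instead sort the grades of every catalogue image.
    """
    rel = np.asarray(retrieved_relevances, dtype=float)
    ideal = np.sort(rel)[::-1]
    if k is not None:
        rel, ideal = rel[:k], ideal[:k]
    ideal_dcg = dcg(ideal)
    return dcg(rel) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical example: grades (0 = unrelated .. 3 = same disease and severity)
# for the top-5 catalogue images returned for one query radiograph.
print(round(ndcg([3, 2, 3, 0, 1], k=5), 3))
```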

    Interpretable Alzheimer's Disease Classification Via a Contrastive Diffusion Autoencoder

    In visual object classification, humans often justify their choices by comparing objects to prototypical examples of that class. We may therefore increase the interpretability of deep learning models by imbuing them with a similar style of reasoning. In this work, we apply this principle by classifying Alzheimer's Disease based on the similarity of images to training examples in the latent space. We use a contrastive loss combined with a diffusion autoencoder backbone to produce a semantically meaningful latent space, such that neighbouring latents have similar image-level features. We achieve classification accuracy comparable to black-box approaches on a dataset of 2D MRI images, whilst producing human-interpretable model explanations. This work therefore contributes to the ongoing development of accurate and interpretable deep learning within medical imaging.
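
    The classification principle described above, labelling a scan by its similarity to training examples in a learned latent space, can be sketched as a nearest-neighbour lookup. The snippet below is a hypothetical illustration only: it assumes embeddings already produced by some contrastively trained encoder (the paper uses a diffusion autoencoder), and none of the names or numbers come from the paper.

```python
import numpy as np

def classify_by_latent_similarity(query_latent, train_latents, train_labels, k=5):
    """Label a query by its k most similar training latents (cosine similarity).

    The nearest training examples double as a prototype-style explanation:
    the query is assigned a class because it lies closest to these examples.
    """
    q = query_latent / np.linalg.norm(query_latent)
    t = train_latents / np.linalg.norm(train_latents, axis=1, keepdims=True)
    sims = t @ q                              # cosine similarity to every training latent
    nearest = np.argsort(sims)[::-1][:k]      # indices of the k closest examples
    votes = np.bincount(train_labels[nearest], minlength=2)
    return votes.argmax(), nearest, sims[nearest]

# Hypothetical usage with random stand-ins for real embeddings and labels
# (0 = cognitively normal, 1 = Alzheimer's Disease).
train_latents = np.random.randn(200, 64)
train_labels = np.random.randint(0, 2, size=200)
pred, neighbours, neighbour_sims = classify_by_latent_similarity(
    np.random.randn(64), train_latents, train_labels)
```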

    Explainable Machine Learning for Robust Modelling in Healthcare

    Deep Learning (DL) has seen an unprecedented rise in popularity over the last decade, with applications ranging from machine translation to self-driving cars. This includes extensive work in sensitive domains such as healthcare and finance, with models recently achieving better-than-human performance in tasks such as chest X-ray diagnosis. Despite these impressive results, however, there are relatively few real-world deployments of DL models in sensitive scenarios; experts attribute this to a lack of model transparency, reproducibility, robustness and privacy, even though numerous techniques have been proposed to address these issues. Most notable is the development of Explainable Deep Learning techniques, which aim to compute feature importance values for a given input (i.e. which features does the model use to make its decision?). Such methods can greatly improve the transparency of a model, but have little impact on reproducibility, robustness and privacy. In this thesis, I explore how explainability techniques can be used to address these issues, using feature attributions to improve our understanding of how model parameters change during training and across different hyperparameter setups. Through the introduction of a novel model architecture and training technique that use model explanations to improve model consistency, I show how explanations can improve privacy, robustness and reproducibility. Extensive experiments across a number of sensitive datasets from healthcare and bioinformatics, in both traditional and federated learning settings, show that these techniques have a significant impact on the quality of the resulting models. I discuss the impact these results could have on real-world applications of deep learning, given the issues the proposed techniques address, and present some ideas for further research in this area.
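
    The thesis abstract centres on methods that assign an importance value to each input feature. As a generic, hedged illustration of such an attribution (not the architecture or training technique the thesis introduces), a gradient-times-input saliency in PyTorch might look like this:

```python
import torch

def gradient_x_input(model, x):
    """Attribute a prediction to input features via gradient * input.

    This is one simple attribution method among many; feeding such
    attributions back into training, as the thesis describes, is not shown.
    """
    x = x.clone().detach().requires_grad_(True)
    score = model(x).max(dim=-1).values.sum()   # score of each sample's top class
    score.backward()
    return (x.grad * x).detach()

# Hypothetical usage with a small classifier on 10 tabular health features.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
attributions = gradient_x_input(model, torch.randn(4, 10))   # shape: (4, 10)
```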

    Levels of explicability for medical artificial intelligence: what do we normatively need and what can we technically reach?

    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI? Arguments: We proceed in five steps. First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can be met technically from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example. Conclusion: We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.