Medical Image Data and Datasets in the Era of Machine Learning-Whitepaper from the 2016 C-MIMI Meeting Dataset Session.
At the first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI), held in September 2016, a conference session on medical image data and datasets for machine learning identified multiple issues. The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities. High-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products should be better described. NIH and other government agencies should promote and, where applicable, enforce access to medical image datasets. We should improve communication among medical imaging domain experts, medical imaging informaticists, academic clinical and basic science researchers, government and industry data scientists, and interested commercial entities.
Generating semantically enriched diagnostics for radiological images using machine learning
Development of Computer Aided Diagnostic (CAD) tools to aid radiologists in pathology detection and decision making relies considerably on manually annotated images. With the advancement of deep learning techniques for CAD development, these expert annotations no longer need to be hand-crafted; however, deep learning algorithms require large amounts of data in order to generalise well. One way to access large volumes of expert-annotated data is through radiological exams consisting of images and reports. Using past radiological exams obtained from hospital archiving systems has many advantages: they are expert annotations available in large quantities, covering a population-representative variety of pathologies, and they provide additional context to pathology diagnoses, such as anatomical location and severity. Learning to auto-generate such reports from images presents many challenges, such as the difficulty of representing and generating long, unstructured textual information, accounting for spelling errors and repetition or redundancy, and the inconsistency across different annotators. In this thesis, the problem of learning to automate disease detection from radiological exams is approached from three directions. Firstly, a report generation model is developed such that it is conditioned on radiological image features. Secondly, a number of approaches are explored aimed at extracting diagnostic information from free-text reports. Finally, an alternative approach to image latent space learning, departing from the current state of the art, is developed that can be applied to accelerated image acquisition.
Artificial Intelligence and Interstitial Lung Disease: Diagnosis and Prognosis.
Interstitial lung disease (ILD) is now diagnosed by an ILD board consisting of radiologists, pulmonologists, and pathologists. The board discusses the combination of computed tomography (CT) images, pulmonary function tests, demographic information, and histology, and then agrees on one of the 200 ILD diagnoses. Recent approaches employ computer-aided diagnostic tools to improve disease detection, monitoring, and prognostication. Methods based on artificial intelligence (AI) may be used in computational medicine, especially in image-based specialties such as radiology. This review summarises and highlights the strengths and weaknesses of the latest and most significant published methods that could lead to a holistic system for ILD diagnosis. We explore current AI methods and the data used to predict the prognosis and progression of ILDs. It is then essential to highlight the data that hold the most information related to risk factors for progression, e.g., CT scans and pulmonary function tests. This review aims to identify potential gaps, highlight areas that require further research, and identify the methods that could be combined to yield more promising results in future studies.
Variational Knowledge Distillation for Disease Classification in Chest X-Rays
Disease classification relying solely on imaging data attracts great interest
in medical image analysis. Current models could be further improved, however,
by also employing Electronic Health Records (EHRs), which contain rich
information on patients and findings from clinicians. It is challenging to
incorporate this information into disease classification due to the high
reliance on clinician input in EHRs, limiting the possibility for automated
diagnosis. In this paper, we propose variational knowledge
distillation (VKD), a new probabilistic inference framework for
disease classification based on X-rays that leverages knowledge from EHRs.
Specifically, we introduce a conditional latent variable model, where we infer
the latent representation of the X-ray image with the variational posterior
conditioning on the associated EHR text. By doing so, the model acquires the
ability to extract the visual features relevant to the disease during learning
and can therefore perform more accurate classification for unseen patients at
inference based solely on their X-ray scans. We demonstrate the effectiveness
of our method on three public benchmark datasets with paired X-ray images and
EHRs. The results show that the proposed variational knowledge distillation
consistently improves the performance of medical image classification and
significantly surpasses current methods.
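The conditional latent variable model described in this abstract can be illustrated with a toy sketch: a diagonal-Gaussian prior computed from image features alone, a posterior additionally conditioned on EHR text, and a KL term that distils the text-informed posterior into the image-only branch so that inference needs only the X-ray. All encoders, dimensions, and values below are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # latent dimensionality (hypothetical)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ) with diagonal covariances."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

# stand-ins for learned encoder outputs
img_feat = rng.normal(size=D)   # e.g. CNN features of a chest X-ray
ehr_feat = rng.normal(size=D)   # e.g. text-encoder features of the paired EHR

mu_p, logvar_p = img_feat, np.zeros(D)                          # prior  p(z | image)
mu_q, logvar_q = img_feat + 0.1 * ehr_feat, -0.5 * np.ones(D)   # posterior q(z | image, EHR)

# reparameterised sample from the posterior, used during training
z = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=D)

# distillation term: pulls the image-only prior toward the text-informed posterior,
# so at test time the prior alone supports classification from the X-ray
kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
print(f"KL(q || p) = {kl:.4f}")
```

In a full model this KL term would be combined with a classification likelihood on the sampled z to form the training objective.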
A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images
Self-supervised pretraining has been observed to be effective at improving
feature representations for transfer learning, leveraging large amounts of
unlabelled data. This review summarizes recent research into its usage in
X-ray, computed tomography, magnetic resonance, and ultrasound imaging,
concentrating on studies that compare self-supervised pretraining to fully
supervised learning for diagnostic tasks such as classification and
segmentation. The most pertinent finding is that self-supervised pretraining
generally improves downstream task performance compared to full supervision,
most prominently when unlabelled examples greatly outnumber labelled examples.
Based on the aggregate evidence, recommendations are provided for practitioners
considering using self-supervised learning. Motivated by limitations identified
in current research, directions and practices for future study are suggested,
such as integrating clinical knowledge with theoretically justified
self-supervised learning methods, evaluating on public datasets, growing the
modest body of evidence for ultrasound, and characterizing the impact of
self-supervised pretraining on generalization. Comment: 32 pages, 6 figures;
a literature survey submitted to BMC Medical Imaging.
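Contrastive objectives are among the self-supervised pretraining methods this survey covers. A minimal NumPy sketch of a SimCLR-style NT-Xent loss over paired augmented views is below; the batch size, embedding dimension, and temperature are illustrative assumptions, not values from any surveyed study.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of N paired augmented views:
    each row's positive is its other view; all remaining rows are negatives."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # compare in cosine space
    sim = z @ z.T / temperature                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # a view is not its own pair
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each row's positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])     # -log softmax at the positive

rng = np.random.default_rng(0)
views_a = rng.normal(size=(4, 16))                   # embeddings of one augmentation
views_b = views_a + 0.05 * rng.normal(size=(4, 16))  # embeddings of the other
loss = nt_xent(views_a, views_b)
print(f"contrastive loss = {loss:.4f}")
```

Pretraining minimises this loss on unlabelled images; the encoder is then fine-tuned on the (smaller) labelled set for the downstream diagnostic task.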
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
With an increase in deep learning-based methods, the call for explainability
of such methods grows, especially in high-stakes decision making areas such as
medical image analysis. This survey presents an overview of eXplainable
Artificial Intelligence (XAI) used in deep learning-based medical image
analysis. A framework of XAI criteria is introduced to classify deep
learning-based medical image analysis methods. Papers on XAI techniques in
medical image analysis are then surveyed and categorized according to the
framework and according to anatomical location. The paper concludes with an
outlook of future opportunities for XAI in medical image analysis. Comment: Submitted for publication; comments welcome by email to the first author.
Towards long-tailed, multi-label disease classification from chest X-ray: Overview of the CXR-LT challenge
Many real-world image recognition problems, such as diagnostic medical
imaging exams, are "long-tailed": there are a few common
findings followed by many more relatively rare conditions. In chest
radiography, diagnosis is both a long-tailed and multi-label problem, as
patients often present with multiple findings simultaneously. While researchers
have begun to study the problem of long-tailed learning in medical image
recognition, few have studied the interaction of label imbalance and label
co-occurrence posed by long-tailed, multi-label disease classification. To
engage with the research community on this emerging topic, we conducted an open
challenge, CXR-LT, on long-tailed, multi-label thorax disease classification
from chest X-rays (CXRs). We publicly release a large-scale benchmark dataset
of over 350,000 CXRs, each labeled with at least one of 26 clinical findings
following a long-tailed distribution. We synthesize common themes of
top-performing solutions, providing practical recommendations for long-tailed,
multi-label medical image classification. Finally, we use these insights to
propose a path forward involving vision-language foundation models for few- and
zero-shot disease classification.
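The interaction of label imbalance and label co-occurrence that this challenge targets can be made concrete with a small sketch: a multi-label binary cross-entropy with inverse-frequency positive weights, one common remedy for long-tailed label distributions. The label matrix, prevalence values, and weighting scheme below are illustrative assumptions, not the challenge's evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical long-tailed label matrix: 1000 studies x 5 findings (head -> tail)
prevalence = np.array([0.40, 0.10, 0.03, 0.01, 0.002])
labels = (rng.random((1000, 5)) < prevalence).astype(float)

# inverse-frequency positive weights: rare findings get larger weights
pos_counts = labels.sum(axis=0).clip(min=1)
pos_weight = (len(labels) - pos_counts) / pos_counts   # negatives per positive

def weighted_bce(logits, targets, pos_weight):
    """Binary cross-entropy with per-label positive weighting (multi-label)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical floor for the logs
    per_entry = -(pos_weight * targets * np.log(p + eps)
                  + (1.0 - targets) * np.log(1.0 - p + eps))
    return per_entry.mean()

logits = rng.normal(size=labels.shape)  # stand-in for model outputs
print("tail vs head pos_weight:", pos_weight[-1], pos_weight[0])
print("weighted BCE:", weighted_bce(logits, labels, pos_weight))
```

Because patients present with multiple findings at once, the loss is computed per label rather than over mutually exclusive classes; the per-label weights keep the rare tail findings from being drowned out by the common head.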
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes an expanded
discussion section and a reworked introductory section on common deep
architectures; added missed papers from before Feb 1st 2017.