A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Semi-supervised learning towards automated segmentation of PET images with limited annotations: Application to lymphoma patients
The time-consuming task of manual segmentation challenges routine systematic
quantification of disease burden. Convolutional neural networks (CNNs) hold
significant promise to reliably identify locations and boundaries of tumors
from PET scans. We aimed to reduce the need for annotated data via
semi-supervised approaches, with application to PET images of diffuse large
B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL).
We analyzed 18F-FDG PET images of 292 patients with PMBCL (n=104) and DLBCL
(n=188) (n=232 for training and validation, and n=60 for external testing). We
employed fuzzy C-means (FCM) and Mumford-Shah (MS) losses for training a 3D
U-Net with different levels of supervision: (i) fully supervised methods with
labeled FCM (LFCM) as well as unified focal and Dice loss functions, (ii)
unsupervised methods with robust FCM (RFCM) and MS loss functions, and (iii)
semi-supervised methods based on FCM (RFCM+LFCM), as well as MS loss in
combination with supervised Dice loss (MS+Dice). The unified loss function
yielded a higher Dice score (mean +/-
standard deviation (SD)) (0.73 +/- 0.03; 95% CI, 0.67-0.8) compared to Dice
loss (p-value<0.01). Semi-supervised (RFCM+alpha*LFCM) with alpha=0.3 showed
the best performance, with a Dice score of 0.69 +/- 0.03 (95% CI, 0.45-0.77)
outperforming (MS+alpha*Dice) for any supervision level (any alpha) (p<0.01).
The best performer among the (MS+alpha*Dice) semi-supervised approaches, at
alpha=0.2, achieved a Dice score of 0.60 +/- 0.08 (95% CI, 0.44-0.76),
significantly different from the other supervision levels of this
semi-supervised approach (p<0.01).
Semi-supervised learning via FCM loss (RFCM+alpha*LFCM) showed improved
performance compared to supervised approaches. Considering the time-consuming
nature of expert manual delineation and intra-observer variability,
semi-supervised approaches have significant potential for automated
segmentation workflows.
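The semi-supervised objective described above (an unsupervised RFCM-style term plus a supervised term weighted by alpha) can be sketched in a few lines. This is a toy NumPy illustration under stated assumptions, not the paper's implementation: the unsupervised term here only measures intensity compactness of the soft foreground/background clusters, and the supervised term is a soft Dice loss standing in for LFCM; the paper's robust FCM formulation adds further regularization.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Supervised term: 1 - soft Dice overlap between prediction and label."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def fcm_unsupervised_loss(pred, image, m=2.0):
    """Unsupervised fuzzy C-means style term: intensity compactness of the
    two soft clusters (foreground = pred, background = 1 - pred)."""
    total = 0.0
    for membership in (pred, 1.0 - pred):
        w = membership ** m                       # fuzzified memberships
        centroid = np.sum(w * image) / (np.sum(w) + 1e-12)
        total += np.sum(w * (image - centroid) ** 2)
    return total / image.size

def semi_supervised_loss(pred, image, target, alpha=0.3):
    """Total loss = unsupervised term + alpha * supervised term,
    where alpha controls the level of supervision (alpha=0.3 in the paper)."""
    return fcm_unsupervised_loss(pred, image) + alpha * soft_dice_loss(pred, target)
```

In training, the unsupervised term would be computed on all images while the supervised term is applied only to the annotated subset.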
Weakly Supervised Learning with Automated Labels from Radiology Reports for Glioma Change Detection
Gliomas are the most frequent primary brain tumors in adults. Glioma change
detection aims at finding the relevant parts of the image that change over
time. Although Deep Learning (DL) shows promising performance in similar
change detection tasks, the creation of large annotated datasets represents a
major bottleneck for supervised DL applications in radiology. To overcome this,
we propose a combined use of weak labels (imprecise, but fast-to-create
annotations) and Transfer Learning (TL). Specifically, we explore inductive TL,
where source and target domains are identical, but tasks are different due to a
label shift: our target labels are created manually by three radiologists,
whereas our source weak labels are generated automatically from radiology
reports via NLP. We frame knowledge transfer as hyperparameter optimization,
thus avoiding heuristic choices that are frequent in related works. We
investigate the relationship between model size and TL, comparing a
low-capacity VGG with a higher-capacity ResNeXt model. We evaluate our models
on 1693 T2-weighted magnetic resonance imaging difference maps created from 183
patients, by classifying them into stable or unstable according to tumor
evolution. The weak labels extracted from radiology reports allowed us to
increase dataset size more than 3-fold, and improve VGG classification results
from 75% to 82% AUC. Mixed training from scratch led to higher performance than
fine-tuning or feature extraction. To assess generalizability, we ran inference
on an open dataset (BraTS-2015: 15 patients, 51 difference maps), reaching up
to 76% AUC. Overall, results suggest that medical imaging problems may benefit
from smaller models and different TL strategies with respect to computer vision
datasets, and that report-generated weak labels are effective in improving
model performance. Code, the in-house dataset, and BraTS labels are released.
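The report-derived weak labels described above can be illustrated with a minimal rule-based labeler. The patterns below are hypothetical examples, not the paper's actual NLP pipeline (which this abstract does not specify): reports matching progression-like phrases are weakly labeled unstable, stability phrases stable, and anything else is left unlabeled.

```python
import re

# Illustrative phrase patterns (assumptions, not the paper's rules).
STABLE_PATTERNS = [r"\bstable\b", r"no (significant )?change", r"\bunchanged\b"]
UNSTABLE_PATTERNS = [r"\bprogression\b", r"increas(e|ed|ing)", r"enlarg",
                     r"\bnew lesion"]

def weak_label(report):
    """Assign a weak 'stable'/'unstable' label to a radiology report,
    or None when no pattern matches (report left unlabeled)."""
    text = report.lower()
    if any(re.search(p, text) for p in UNSTABLE_PATTERNS):
        return "unstable"
    if any(re.search(p, text) for p in STABLE_PATTERNS):
        return "stable"
    return None
```

Such imprecise but fast-to-create labels are what allowed the dataset to grow more than 3-fold before transfer to the manually labeled target task.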
Knowledge-Informed Machine Learning for Cancer Diagnosis and Prognosis: A review
Cancer remains one of the most challenging diseases to treat in the medical
field. Machine learning has enabled in-depth analysis of rich multi-omics
profiles and medical imaging for cancer diagnosis and prognosis. Despite these
advancements, machine learning models face challenges stemming from limited
labeled sample sizes, the intricate interplay of high-dimensionality data
types, the inherent heterogeneity observed among patients and within tumors,
and concerns about interpretability and consistency with existing biomedical
knowledge. One approach to surmount these challenges is to integrate biomedical
knowledge into data-driven models, which has proven potential to improve the
accuracy, robustness, and interpretability of model results. Here, we review
the state-of-the-art machine learning studies that adopted the fusion of
biomedical knowledge and data, termed knowledge-informed machine learning, for
cancer diagnosis and prognosis. Emphasizing the properties inherent in four
primary data types including clinical, imaging, molecular, and treatment data,
we highlight modeling considerations relevant to these contexts. We provide an
overview of diverse forms of knowledge representation and current strategies of
knowledge integration into machine learning pipelines with concrete examples.
We conclude the review article by discussing future directions to advance
cancer research through knowledge-informed machine learning.
Deep learning in medical imaging and radiation therapy
Peer reviewed.
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has reached the highest incidence rate worldwide among all
malignancies since 2020. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in the deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammogram, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review
Molecular and genomic properties are critical in selecting cancer treatments
to target individual tumors, particularly for immunotherapy. However, the
methods to assess such properties are expensive, time-consuming, and often not
routinely performed. Applying machine learning to H&E images can provide a more
cost-effective screening method. Dozens of studies over the last few years have
demonstrated that a variety of molecular biomarkers can be predicted from H&E
alone using the advancements of deep learning: molecular alterations, genomic
subtypes, protein biomarkers, and even the presence of viruses. This article
reviews the diverse applications across cancer types and the methodology to
train and validate these models on whole slide images. From bottom-up to
pathologist-driven to hybrid approaches, the leading trends include a variety
of weakly supervised deep learning-based approaches, as well as mechanisms for
training strongly supervised models in select situations. While results of
these algorithms look promising, some challenges still persist, including small
training sets, rigorous validation, and model explainability. Biomarker
prediction models may yield a screening method to determine when to run
molecular tests or an alternative when molecular tests are not possible. They
also create new opportunities in quantifying intratumoral heterogeneity and
predicting patient outcomes.
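The weakly supervised, slide-level setting mentioned above is commonly realized with multiple-instance learning: tile-level features are pooled with learned attention into one slide representation, which is then classified against the slide-level biomarker label. A minimal NumPy sketch, with hypothetical weight vectors w_att and w_cls standing in for learned parameters:

```python
import numpy as np

def attention_mil_pool(tile_feats, w_att, w_cls):
    """Weakly supervised slide-level prediction: attention-weighted pooling of
    tile features followed by a linear classifier (a minimal MIL sketch).

    tile_feats: (n_tiles, d) array of per-tile feature vectors
    w_att:      (d,) attention weight vector (learned in practice)
    w_cls:      (d,) classifier weight vector (learned in practice)
    """
    scores = tile_feats @ w_att               # (n_tiles,) attention logits
    att = np.exp(scores - scores.max())
    att /= att.sum()                          # softmax over tiles
    slide_feat = att @ tile_feats             # (d,) attention-weighted average
    logit = slide_feat @ w_cls
    prob = 1.0 / (1.0 + np.exp(-logit))       # slide-level probability
    return prob, att
```

The attention weights also offer a rudimentary form of explainability, highlighting which tiles drove the slide-level prediction.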
A review of artificial intelligence in prostate cancer detection on imaging
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.