Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images
Automated classification of histopathological whole-slide images (WSI) of
breast tissue requires analysis at very high resolutions with a large
contextual area. In this paper, we present context-aware stacked convolutional
neural networks (CNN) for classification of breast WSIs into normal/benign,
ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first
train a CNN using high pixel resolution patches to capture cellular level
information. The feature responses generated by this model are then fed as
input to a second CNN, stacked on top of the first. Training of this stacked
architecture with large input patches enables learning of fine-grained
(cellular) details and global interdependence of tissue structures. Our system
is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast
tissue specimens. The system achieves an AUC of 0.962 for the binary
classification of non-malignant and malignant slides and obtains a three class
accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC,
demonstrating its potential for routine diagnostics.
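A minimal PyTorch sketch of the stacking idea described in this abstract; the layer widths, depths, and patch size are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """First stage: operates on high-resolution pixels to capture cellular detail."""
    def __init__(self, in_channels=3, feat_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat_channels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)  # spatial feature responses, not class scores

class ContextCNN(nn.Module):
    """Second stage, stacked on the first: aggregates feature responses over a
    large contextual area and predicts normal/benign, DCIS, or IDC."""
    def __init__(self, feat_channels=64, num_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_channels, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes),
        )

    def forward(self, feats):
        return self.head(feats)

# A large input patch flows through both stages.
patch_cnn, context_cnn = PatchCNN(), ContextCNN()
x = torch.randn(1, 3, 512, 512)     # large H&E patch (illustrative size)
logits = context_cnn(patch_cnn(x))  # shape: (1, 3)
```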
Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology
Stain variation is a phenomenon observed when tissue slides stained at
different pathology laboratories exhibit similar but not identical color appearance.
Due to this color shift between laboratories, convolutional neural networks
(CNNs) trained with images from one lab often underperform on unseen images
from the other lab. Several techniques have been proposed to reduce the
generalization error, mainly grouped into two categories: stain color
augmentation and stain color normalization. The former simulates a wide variety
of realistic stain variations during training, producing stain-invariant CNNs.
The latter aims to match training and test color distributions in order to
reduce stain variation. For the first time, we compared some of these
techniques and quantified their effect on CNN classification performance using
a heterogeneous dataset of hematoxylin and eosin histopathology images from 4
organs and 9 pathology laboratories. Additionally, we propose a novel
unsupervised method to perform stain color normalization using a neural
network. Based on our experimental results, we provide practical guidelines on
how to use stain color augmentation and stain color normalization in future
computational pathology applications.
Comment: Accepted in the Medical Image Analysis journal.
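As a concrete illustration of the stain color augmentation family discussed in this abstract (not necessarily the authors' exact recipe), a common approach perturbs the hematoxylin and eosin channels after color deconvolution; the perturbation ranges below are assumed values.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hed_color_augment(rgb, sigma=0.05, bias=0.05, rng=None):
    """Randomly scale and shift the stain channels of an RGB H&E image.

    rgb: float image in [0, 1] with shape (H, W, 3).
    sigma, bias: assumed perturbation ranges; tune for the application.
    """
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)                                  # deconvolve into H, E, DAB channels
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)   # per-channel scale
    beta = rng.uniform(-bias, bias, size=3)             # per-channel shift
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)

# Applied on the fly during training so the CNN sees many stain variations.
patch = np.random.rand(256, 256, 3)
augmented = hed_color_augment(patch)
```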
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st, 2017.
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they remain largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR2018. DeepLesion URL added.
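A minimal sketch of the triplet-embedding component described above, using a standard PyTorch triplet margin loss; the backbone, embedding size, margin, and input format are assumptions, and the paper's sequential sampling strategy is not reproduced here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class LesionEmbedder(nn.Module):
    """Maps a lesion image to a unit-norm embedding vector."""
    def __init__(self, dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

embedder = LesionEmbedder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)  # assumed margin

# Anchor and positive share a supervision attribute (e.g. lesion type); the
# negative differs. In the paper, triplets also reflect location and size cues.
anchor = embedder(torch.randn(8, 3, 224, 224))
positive = embedder(torch.randn(8, 3, 224, 224))
negative = embedder(torch.randn(8, 3, 224, 224))
loss = triplet_loss(anchor, positive, negative)
loss.backward()
```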
Hierarchical Vision Transformers for Context-Aware Prostate Cancer Grading in Whole Slide Images
Vision Transformers (ViTs) have ushered in a new era in computer vision,
showcasing unparalleled performance in many challenging tasks. However, their
practical deployment in computational pathology has largely been constrained by
the sheer size of whole slide images (WSIs), which result in lengthy input
sequences. Transformers faced a similar limitation when applied to long
documents, and Hierarchical Transformers were introduced to circumvent it.
Given the analogous challenge with WSIs and their inherent hierarchical
structure, Hierarchical Vision Transformers (H-ViTs) emerge as a promising
solution in computational pathology. This work delves into the capabilities of
H-ViTs, evaluating their efficiency for prostate cancer grading in WSIs. Our
results show that they achieve competitive performance against existing
state-of-the-art solutions.
Comment: Accepted at the Medical Imaging meets NeurIPS 2023 workshop.
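A compressed sketch of the hierarchical idea: patch tokens are encoded within each region, pooled to one token per region, and a second transformer reasons across regions for a slide-level grade. Dimensions, depths, pooling, and the number of grade classes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HierarchicalWSIClassifier(nn.Module):
    """Two-level transformer over precomputed patch embeddings of a WSI."""
    def __init__(self, dim=192, num_classes=6):  # e.g. ISUP grade groups (assumed)
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=4 * dim, batch_first=True)
        self.region_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.slide_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        # tokens: (num_regions, tokens_per_region, dim) patch embeddings from a
        # patch-level encoder (not shown here).
        region_tokens = self.region_encoder(tokens).mean(dim=1)        # (R, dim)
        slide_tokens = self.slide_encoder(region_tokens.unsqueeze(0))  # (1, R, dim)
        return self.classifier(slide_tokens.mean(dim=1))               # (1, classes)

model = HierarchicalWSIClassifier()
logits = model(torch.randn(16, 64, 192))  # 16 regions x 64 patch tokens each
```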
Uncertainty-guided annotation enhances segmentation with the human-in-the-loop
Deep learning algorithms, often critiqued for their 'black box' nature,
traditionally fall short in providing the necessary transparency for trusted
clinical use. This challenge is particularly evident when such models are
deployed in local hospitals, encountering out-of-domain distributions due to
varying imaging techniques and patient-specific pathologies. Yet, this
limitation offers a unique avenue for continual learning. The
Uncertainty-Guided Annotation (UGA) framework introduces a human-in-the-loop
approach, enabling AI to convey its uncertainties to clinicians, effectively
acting as an automated quality control mechanism. UGA eases this interaction by
quantifying uncertainty at the pixel level, thereby revealing the model's
limitations and opening the door for clinician-guided corrections. We evaluated
UGA on the Camelyon dataset for lymph node metastasis segmentation, where it
improved the Dice coefficient (DC) from 0.66 to 0.76 after adding 5 annotated
patches, and further to 0.84 with 10 patches. To foster broader application and
community contribution, we have made our code publicly accessible.
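A small sketch of the pixel-level uncertainty idea behind a framework like UGA; the entropy measure and the ranking rule below are assumptions rather than the paper's exact mechanism.

```python
import torch

def pixel_entropy(logits):
    """Per-pixel predictive entropy from segmentation logits of shape (B, C, H, W)."""
    probs = torch.softmax(logits, dim=1)
    return -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)  # (B, H, W)

def rank_patches_for_annotation(logits, k=5):
    """Indices of the k patches with the highest mean pixel uncertainty,
    i.e. the patches most worth sending to a clinician for correction."""
    scores = pixel_entropy(logits).mean(dim=(1, 2))  # one score per patch
    return torch.topk(scores, k=min(k, scores.numel())).indices

# Usage: dummy logits for 20 candidate patches, 2 classes (background / metastasis).
logits = torch.randn(20, 2, 128, 128)
to_annotate = rank_patches_for_annotation(logits, k=5)
```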
