Multi-Scale Relational Graph Convolutional Network for Multiple Instance Learning in Histopathology Images
Graph convolutional neural networks have shown significant potential in
natural and histopathology images. However, their use has only been studied at
a single magnification, or across multiple magnifications with late fusion. In order to
leverage the multi-magnification information and early fusion with graph
convolutional networks, we handle different embedding spaces at each
magnification by introducing the Multi-Scale Relational Graph Convolutional
Network (MS-RGCN) as a multiple instance learning method. We model
histopathology image patches and their relation with neighboring patches and
patches at other scales (i.e., magnifications) as a graph. To pass the
information between different magnification embedding spaces, we define
separate message-passing neural networks based on the node and edge type. We
experiment on prostate cancer histopathology images to predict grade groups
from features extracted from patches. We also compare our MS-RGCN with
multiple state-of-the-art methods with evaluations on several source and
held-out datasets. Our method outperforms the state-of-the-art on all of the
datasets and image types consisting of tissue microarrays, whole-mount slide
regions, and whole-slide images. Through an ablation study, we test and show
the value of the pertinent design features of the MS-RGCN.
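The core idea described above, separate message-passing functions per node and edge type so that embeddings from different magnifications are projected through their own weights before aggregation, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the relation names, dimensions, and mean aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_nodes, dim = 4, 8
x = rng.normal(size=(num_nodes, dim))           # patch embeddings (graph nodes)

# Edges grouped by relation type: (source, target) pairs. "same_scale" links
# neighboring patches at one magnification; "cross_scale" links a patch to
# its counterpart at another magnification (hypothetical relation names).
edges = {
    "same_scale":  [(0, 1), (1, 0), (2, 3), (3, 2)],
    "cross_scale": [(0, 2), (1, 3)],
}

# One projection matrix per relation type, plus a self-loop transform.
W = {rel: rng.normal(size=(dim, dim)) * 0.1 for rel in edges}
W_self = np.eye(dim)

def rgcn_layer(x, edges, W, W_self):
    """One relational-GCN layer: per-relation projection, mean aggregation."""
    out = x @ W_self
    for rel, pairs in edges.items():
        agg = np.zeros_like(x)
        count = np.zeros(len(x))
        for src, dst in pairs:
            agg[dst] += x[src] @ W[rel]        # message through relation-specific weights
            count[dst] += 1
        nonzero = count > 0
        agg[nonzero] /= count[nonzero, None]   # mean over incoming messages
        out += agg
    return np.maximum(out, 0.0)                # ReLU

h = rgcn_layer(x, edges, W, W_self)
print(h.shape)  # (4, 8)
```

Because each relation type carries its own weight matrix, a cross-scale message is transformed differently from a same-scale one, which is how distinct embedding spaces per magnification can be reconciled in a single layer.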
Weakly-Supervised Deep Learning Model for Prostate Cancer Diagnosis and Gleason Grading of Histopathology Images
Prostate cancer is the most common cancer in men worldwide and the second
leading cause of cancer death in the United States. One of the prognostic
features in prostate cancer is the Gleason grading of histopathology images.
The Gleason grade is assigned based on tumor architecture on Hematoxylin and
Eosin (H&E) stained whole slide images (WSI) by the pathologists. This process
is time-consuming and has known interobserver variability. In the past few
years, deep learning algorithms have been used to analyze histopathology
images, delivering promising results for grading prostate cancer. However, most
of the algorithms rely on fully annotated datasets, which are expensive to
generate. In this work, we propose a novel weakly-supervised algorithm to
classify prostate cancer grades. The proposed algorithm consists of three
steps: (1) extracting discriminative areas in a histopathology image by
employing the Multiple Instance Learning (MIL) algorithm based on Transformers,
(2) representing the image by constructing a graph using the discriminative
patches, and (3) classifying the image into its Gleason grades by developing a
Graph Convolutional Neural Network (GCN) based on the gated attention
mechanism. We evaluated our algorithm using publicly available datasets,
including TCGA-PRAD, PANDA, and the Gleason 2019 challenge datasets. We also
cross-validated the algorithm on an independent dataset. Results show that the
proposed model achieved state-of-the-art performance in the Gleason grading
task in terms of accuracy, F1 score, and Cohen's kappa. The code is available at
https://github.com/NabaviLab/Prostate-Cancer
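Step (3) above relies on a gated attention mechanism for pooling patch-level features into a slide-level representation. A minimal numpy sketch of gated-attention pooling (in the style of Ilse et al.'s attention-based MIL, not the authors' exact model) is shown below; all weights and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

num_patches, dim, att_dim = 6, 16, 8
H = rng.normal(size=(num_patches, dim))      # embeddings of discriminative patches

# Gated attention parameters (random here; learned in practice).
V = rng.normal(size=(dim, att_dim)) * 0.1
U = rng.normal(size=(dim, att_dim)) * 0.1
w = rng.normal(size=(att_dim,)) * 0.1

def gated_attention_pool(H, V, U, w):
    """Gated attention: tanh branch modulated by a sigmoid gate, then softmax."""
    gate = np.tanh(H @ V) * (1.0 / (1.0 + np.exp(-(H @ U))))
    scores = gate @ w
    a = np.exp(scores - scores.max())
    a /= a.sum()                             # softmax attention weights over patches
    return a @ H, a                          # bag embedding, per-patch weights

z, a = gated_attention_pool(H, V, U, w)
print(z.shape, round(float(a.sum()), 6))  # (16,) 1.0
```

The attention weights `a` double as an interpretability signal: patches with high weight are the ones driving the slide-level Gleason grade prediction.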
Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions
Deep learning has seen rapid growth in recent years and achieved
state-of-the-art performance in a wide range of applications. However, training
models typically requires expensive and time-consuming collection of large
quantities of labeled data. This is particularly true within the scope of
medical imaging analysis (MIA), where data are limited and labels are expensive
to acquire. Thus, label-efficient deep learning methods have been developed to
make comprehensive use of labeled data as well as the abundance of
unlabeled and weakly-labeled data. In this survey, we extensively investigated
over 300 recent papers to provide a comprehensive overview of recent progress
on label-efficient learning strategies in MIA. We first present the background
of label-efficient learning and categorize the approaches into different
schemes. Next, we examine the current state-of-the-art methods in detail
through each scheme. Specifically, we provide an in-depth investigation,
covering not only canonical semi-supervised, self-supervised, and
multi-instance learning schemes, but also recently emerged active and
annotation-efficient learning strategies. Moreover, as a comprehensive
contribution to the field, this survey not only elucidates the commonalities
and unique features of the surveyed methods but also presents a detailed
analysis of the current challenges in the field and suggests potential avenues
for future research.
An Aggregation of Aggregation Methods in Computational Pathology
Image analysis and machine learning algorithms operating on multi-gigapixel
whole-slide images (WSIs) often process a large number of tiles (sub-images)
and require aggregating predictions from the tiles in order to predict
WSI-level labels. In this paper, we present a review of existing literature on
various types of aggregation methods with a view to help guide future research
in the area of computational pathology (CPath). We propose a general CPath
workflow with three pathways that consider multiple levels and types of data
and the nature of computation to analyse WSIs for predictive modelling. We
categorize aggregation methods according to the context and representation of
the data, features of computational modules and CPath use cases. We compare and
contrast different methods based on the principle of multiple instance
learning, perhaps the most commonly used aggregation method, covering a wide
range of CPath literature. To provide a fair comparison, we consider a specific
WSI-level prediction task and compare various aggregation methods for that
task. Finally, we conclude with a list of objectives and desirable attributes
of aggregation methods in general, pros and cons of the various approaches,
some recommendations and possible future directions.
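The simplest aggregation methods surveyed in this space reduce tile-level predictions to a slide-level score. A hedged numpy sketch comparing three common pooling choices (mean, max, and top-k, with made-up tile scores) illustrates the trade-off: max pooling reacts to a single positive tile, mean pooling dilutes it, and top-k interpolates between the two.

```python
import numpy as np

# Hypothetical tumor-probability scores for six tiles of one WSI.
tile_scores = np.array([0.05, 0.10, 0.95, 0.20, 0.90, 0.15])

mean_pool = tile_scores.mean()               # sensitive to many weak tiles
max_pool = tile_scores.max()                 # sensitive to one strong tile
k = 2
topk_pool = np.sort(tile_scores)[-k:].mean() # average of the k strongest tiles

print(round(float(mean_pool), 3))  # 0.392
print(float(max_pool))             # 0.95
print(float(topk_pool))            # 0.925
```

Attention-based MIL, discussed at length in the review, can be seen as a learned generalization of these fixed pooling rules, with per-tile weights predicted from the tile features themselves.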
MesoGraph: automatic profiling of mesothelioma subtypes from histological images
Mesothelioma is classified into three histological subtypes, epithelioid, sarcomatoid, and biphasic, according to the relative proportions of epithelioid and sarcomatoid tumor cells present. Current guidelines recommend that the sarcomatoid component of each mesothelioma is quantified, as a higher percentage of sarcomatoid pattern in biphasic mesothelioma shows poorer prognosis. In this work, we develop a dual-task graph neural network (GNN) architecture with ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution. This allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid association score. Tissue is represented by a cell graph with both cell-level morphological and regional features. We use an external multicentric test set from Mesobank, on which we demonstrate the predictive performance of our model. We additionally validate our model predictions through an analysis of the typical morphological features of cells according to their predicted score.
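The ranking loss used to train a scoring model of this kind can be sketched with a standard margin ranking loss: for a pair where one sample should score higher than the other, the loss penalizes orderings that violate the margin. This is a generic illustration of the loss family, not the paper's exact formulation.

```python
import numpy as np

def margin_ranking_loss(s_pos, s_neg, margin=1.0):
    """Hinge-style pairwise ranking loss.

    Zero when the higher-ranked score exceeds the lower-ranked one by at
    least `margin`; otherwise grows linearly with the violation.
    """
    return np.maximum(0.0, margin - (s_pos - s_neg))

# Correctly ordered pair with a comfortable gap: no penalty.
print(float(margin_ranking_loss(2.5, 0.5)))  # 0.0
# Correct order but gap smaller than the margin: partial penalty.
print(float(margin_ranking_loss(1.5, 1.0)))  # 0.5
```

Training on such pairs pushes, for example, regions with more sarcomatoid morphology to receive systematically higher scores than regions with less, without requiring absolute score labels.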