Collaborative analysis of multi-gigapixel imaging data using Cytomine
Motivation: Collaborative analysis of massive imaging datasets is essential to enable scientific discoveries.
Results: We developed Cytomine to foster active and distributed collaboration among multidisciplinary teams in large-scale image-based studies. It uses web development methodologies and machine learning to readily organize, explore, share, and analyze (semantically and quantitatively) multi-gigapixel imaging data over the internet. We illustrate how it has been used in several biomedical applications.
EXACT: a collaboration toolset for algorithm-aided annotation of images with annotation version control
In many research areas, scientific progress is accelerated by multidisciplinary access to image data and their interdisciplinary annotation. However, keeping track of these annotations to ensure a high-quality, multi-purpose data set is a challenging and labour-intensive task. We developed the open-source online platform EXACT (EXpert Algorithm Collaboration Tool), which enables the collaborative interdisciplinary analysis of images from different domains, online and offline. EXACT supports multi-gigapixel medical whole slide images as well as image series with thousands of images. The software utilises a flexible plugin system that can be adapted to diverse applications such as counting mitotic figures with a screening mode, finding false annotations on a novel validation view, or using the latest deep learning image analysis technologies. This is combined with a version control system that makes it possible to keep track of changes in the data sets and, for example, to link the results of deep learning experiments to specific data set versions. EXACT is freely available and has already been successfully applied to a broad range of annotation tasks, including highly diverse applications like deep learning supported cytology scoring, interdisciplinary multi-centre whole slide image tumour annotation, and highly specialised whale sound spectroscopy clustering.
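The versioning idea in the abstract above, giving each snapshot of an annotation set an identifier so that experiment results can be tied to an exact data set version, can be illustrated with a minimal sketch. This is not the real EXACT API; the `AnnotationStore` class and its methods are hypothetical stand-ins, and a content hash plays the role of the version id.

```python
import hashlib
import json

class AnnotationStore:
    """Minimal sketch of version-controlled annotations (hypothetical,
    not the EXACT platform's actual interface).

    Each committed snapshot of the annotation set receives a
    content-derived version id, so a deep learning experiment can record
    exactly which data set version it was trained on.
    """

    def __init__(self):
        self.annotations = {}   # image_id -> list of annotation dicts
        self.versions = {}      # version_id -> frozen snapshot

    def annotate(self, image_id, label, bbox):
        self.annotations.setdefault(image_id, []).append(
            {"label": label, "bbox": bbox}
        )

    def commit(self):
        # Serialize deterministically and hash to obtain a version id.
        blob = json.dumps(self.annotations, sort_keys=True).encode()
        version_id = hashlib.sha256(blob).hexdigest()[:12]
        self.versions[version_id] = json.loads(blob)  # frozen copy
        return version_id

store = AnnotationStore()
store.annotate("slide_001", "mitotic_figure", (120, 340, 64, 64))
v1 = store.commit()
store.annotate("slide_001", "mitotic_figure", (800, 210, 64, 64))
v2 = store.commit()
assert v1 != v2  # any change to the annotations yields a new version id
```

Because the id is derived from the content, two experiments that report the same version id are guaranteed to have seen identical annotations.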
Gigapixel Histopathological Image Analysis using Attention-based Neural Networks
Although CNNs are widely considered the state-of-the-art models in many image analysis applications, one of the main open challenges is training a CNN on high-resolution images. The strategies proposed so far involve either rescaling the image or processing parts of it individually. Neither strategy suits gigapixel histopathological images: a strong reduction in resolution inherently destroys discriminative information, while the analysis of individual parts either lacks global context or demands a heavy annotation workload to select the significant regions of the training images. We propose a method for analysing gigapixel histopathological images using only weak image-level labels. Two analysis tasks are considered: binary classification and prediction of the tumor proliferation score. Our method is based on a CNN consisting of a compressing path and a learning path. In the compressing path, the gigapixel image is packed into a grid-based feature map by a residual network that extracts features from each patch into which the image has been divided. In the learning path, attention modules applied to the grid-based feature map exploit the spatial correlations of neighboring patch features to find regions of interest, which are then used for the final whole-slide analysis. The method integrates both global and local information, is flexible with regard to the size of the input images, and requires only weak image-level labels. Comparisons with state-of-the-art methods on two well-known datasets, Camelyon16 and TUPAC16, confirm the validity of the proposed model.
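The compressing-path/learning-path structure described above can be sketched in a few lines: patches are encoded into a grid of feature vectors, and an attention module pools them into one slide-level descriptor. In this sketch, random features and random attention weights stand in for the trained residual network and attention module (both assumptions), so only the data flow is shown, not the learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compressing path (sketch): the gigapixel image is divided into an H x W
# grid of patches, each mapped to a d-dimensional feature vector. Random
# features stand in here for the residual-network encoder (an assumption).
H, W, d = 8, 8, 32
grid = rng.normal(size=(H, W, d))           # grid-based feature map

# Learning path (sketch): a single attention module scores every patch
# feature, and the attention-weighted sum yields a slide-level
# representation trainable from only a weak image-level label.
w = rng.normal(size=(d,))                   # hypothetical attention weights
scores = grid.reshape(-1, d) @ w            # one score per patch
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                        # softmax attention over patches
slide_repr = alpha @ grid.reshape(-1, d)    # (d,) whole-slide descriptor

assert slide_repr.shape == (d,)
assert np.isclose(alpha.sum(), 1.0)
```

The attention weights make the pooling differentiable, which is what lets the whole pipeline be trained end-to-end from slide-level labels alone.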
Magnifying networks for histopathological images with billions of pixels
Amongst the benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in realizing this potential is the extremely large size of the digitized images, often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on splitting the original images into small patches, and introducing magnifying networks (MagNets). Using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from analysis at a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved with minimal ground-truth annotation, namely only global, slide-level labels. Results on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as of the proposed optimization framework, in the task of whole-slide image classification. Notably, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
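The coarse-to-fine loop described above, score all regions at a coarse magnification and recurse only into the highest-scoring ones, can be sketched as follows. A random linear scorer stands in for the learned attention mechanism (an assumption; in MagNets it is trained end-to-end), so the sketch shows only how the recursion prunes the slide.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_regions(feature_grid, k):
    """Pick the k grid cells with the highest attention score.

    The random linear scorer below is a stand-in for the trained
    attention network (assumption: the real scorer is not reproduced).
    """
    h, w, d = feature_grid.shape
    scores = feature_grid.reshape(-1, d) @ rng.normal(size=(d,))
    top = np.argsort(scores)[-k:]
    return [(int(i // w), int(i % w)) for i in top]

# Coarse level: an 8x8 feature grid summarizing the whole slide.
coarse = rng.normal(size=(8, 8, 16))
regions = select_regions(coarse, k=3)       # only 3 of 64 cells go deeper

# Fine level: each selected cell is re-read at a higher magnification and
# analysed again; the rest of the slide is never processed at that scale.
for (row, col) in regions:
    fine = rng.normal(size=(8, 8, 16))      # stand-in for the zoomed-in grid
    sub_regions = select_regions(fine, k=3)

assert len(regions) == 3
```

Because each level discards most cells before zooming in, the total number of patches processed grows with the depth of the recursion rather than with the full size of the slide, which is the source of the patch-count savings reported in the abstract.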
Weakly-Supervised Deep Learning Model for Prostate Cancer Diagnosis and Gleason Grading of Histopathology Images
Prostate cancer is the most common cancer in men worldwide and the second
leading cause of cancer death in the United States. One of the prognostic
features in prostate cancer is the Gleason grading of histopathology images.
The Gleason grade is assigned based on tumor architecture on Hematoxylin and
Eosin (H&E) stained whole slide images (WSI) by the pathologists. This process
is time-consuming and has known interobserver variability. In the past few
years, deep learning algorithms have been used to analyze histopathology
images, delivering promising results for grading prostate cancer. However, most
of these algorithms rely on fully annotated datasets, which are expensive to
generate. In this work, we propose a novel weakly-supervised algorithm to
classify prostate cancer grades. The proposed algorithm consists of three
steps: (1) extracting discriminative areas in a histopathology image by
employing the Multiple Instance Learning (MIL) algorithm based on Transformers,
(2) representing the image by constructing a graph using the discriminative
patches, and (3) classifying the image into its Gleason grades by developing a
Graph Convolutional Neural Network (GCN) based on the gated attention
mechanism. We evaluated our algorithm using publicly available datasets,
including the TCGA-PRAD, PANDA, and Gleason 2019 challenge datasets. We also
cross-validated the algorithm on an independent dataset. Results show that the
proposed model achieves state-of-the-art performance on the Gleason grading
task in terms of accuracy, F1 score, and Cohen's kappa. The code is available at
https://github.com/NabaviLab/Prostate-Cancer
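The three-step pipeline above (discriminative patches, a graph over them, then a gated-attention graph network) can be sketched compactly. Random features replace the Transformer-based MIL selector, the graph convolution is a single normalized-adjacency layer, and all weight matrices are random stand-ins for trained parameters; the distance threshold and dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1 (stand-in): assume MIL has already selected n discriminative
# patches, each with a feature vector and an (x, y) position on the slide.
n, d = 16, 32
feats = rng.normal(size=(n, d))
coords = rng.uniform(0, 1000, size=(n, 2))

# Step 2: build a graph over the selected patches by connecting patches
# whose centres lie within a radius of each other (k-NN would also work).
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
adj = (dist < 300).astype(float)           # includes self-connections
deg = adj.sum(axis=1, keepdims=True)
adj_norm = adj / deg                       # row-normalized adjacency

# Step 3: one graph-convolution layer, then gated-attention pooling
# (a = softmax over w^T (tanh(hV) * sigmoid(hU))) to a slide embedding.
W1 = rng.normal(size=(d, d))
h = np.tanh(adj_norm @ feats @ W1)         # neighbourhood-aware features
V, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))
wv = rng.normal(size=(d,))
gate = np.tanh(h @ V) * (1.0 / (1.0 + np.exp(-(h @ U))))
s = gate @ wv
alpha = np.exp(s - s.max())
alpha /= alpha.sum()                       # attention over patches
slide_embedding = alpha @ h                # graph-level representation

assert slide_embedding.shape == (d,)
```

A final classifier head (omitted here) would map `slide_embedding` to the Gleason grade; the gated attention lets the model weight spatially related patch neighbourhoods rather than treating patches independently.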