Localization of diagnostically relevant regions of interest in whole slide images
Whole slide imaging technology enables pathologists to screen biopsy images and make a diagnosis in a digital form. This creates an opportunity to understand the screening patterns of expert pathologists and extract the patterns that lead to accurate and efficient diagnoses. For this purpose, we are taking the first step to interpret the recorded actions of world-class expert pathologists on a set of digitized breast biopsy images. We propose an algorithm to extract regions of interest from the logs of image screenings using zoom levels, time and the magnitude of panning motion. Using diagnostically relevant regions marked by experts, we use the visual bag-of-words model with texture and color features to describe these regions and train probabilistic classifiers to predict similar regions of interest in new whole slide images. The proposed algorithm gives promising results for detecting diagnostically relevant regions. We hope this attempt to predict the regions that attract pathologists' attention will provide the first step in a more comprehensive study to understand the diagnostic patterns in histopathology. © 2014 IEEE
Localization of Diagnostically Relevant Regions of Interest in Whole Slide Images: a Comparative Study
Whole slide digital imaging technology enables researchers to study pathologists’ interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists’ actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors. © 2016, Society for Imaging Informatics in Medicine
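The bag-of-words pipeline described above can be sketched roughly as follows. The patch descriptors, dictionary size, and region sizes here are illustrative assumptions on toy data, not the authors' exact settings; the paper's color and texture features are replaced by random vectors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for color/texture descriptors of image patches
# (in the paper these come from whole slide images; here, random data).
def extract_descriptors(n_patches, dim=16):
    return rng.normal(size=(n_patches, dim))

# 1. Build a visual dictionary by clustering patch descriptors.
dictionary_size = 32  # illustrative; the paper compares several sizes
train_desc = extract_descriptors(500)
kmeans = KMeans(n_clusters=dictionary_size, n_init=10,
                random_state=0).fit(train_desc)

# 2. Represent each region as a normalized histogram of visual words.
def bow_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary_size).astype(float)
    return hist / hist.sum()

# 3. Train a logistic regression classifier on ROI vs. non-ROI regions.
X = np.stack([bow_histogram(extract_descriptors(40)) for _ in range(60)])
y = rng.integers(0, 2, size=60)  # toy labels: 1 = diagnostically relevant
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 4. Score a new region: one cell of the ROI probability map.
p_roi = clf.predict_proba(bow_histogram(extract_descriptors(40))[None, :])[0, 1]
print(round(float(p_roi), 3))
```

Sliding this scoring step over a grid of regions in a slide yields the probability maps the abstract compares against pathologists' viewports.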
Evaluation of Joint Multi-Instance Multi-Label Learning For Breast Cancer Diagnosis
Multi-instance multi-label (MIML) learning is a challenging problem in many
aspects. Such learning approaches might be useful for many medical diagnosis
applications, including breast cancer detection and classification. In this
study, a subset of the digiPATH dataset (whole slide digital breast cancer
histopathology images) is used for training and evaluation of six
state-of-the-art MIML methods.
Finally, a performance comparison of these approaches is given by means of
effective evaluation metrics. MIML-kNN achieves the best performance, with
65.3% average precision, while most of the other methods attain acceptable
results as well.
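Average precision in the multi-label setting can be computed per label and then averaged, as in this small sketch; the labels and scores below are made-up toy values, not the digiPATH results.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy multi-label setup: 4 bags (images), 3 labels each.
# y_true[i, j] = 1 if bag i carries label j; y_score are classifier scores.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.4, 0.7, 0.3],
                    [0.3, 0.6, 0.2],
                    [0.1, 0.4, 0.9]])

# Average precision per label, then macro-averaged across labels.
ap = average_precision_score(y_true, y_score, average="macro")
print(round(float(ap), 3))
```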
Learning to Segment Breast Biopsy Whole Slide Images
We trained and applied an encoder-decoder model to semantically segment
breast biopsy images into biologically meaningful tissue labels. Since
conventional encoder-decoder networks cannot be applied directly on large
biopsy images and the different sized structures in biopsies present novel
challenges, we propose four modifications: (1) an input-aware encoding block to
compensate for information loss, (2) a new dense connection pattern between
encoder and decoder, (3) dense and sparse decoders to combine multi-level
features, and (4) a multi-resolution network that fuses the results of
encoder-decoders run on different resolutions. Our model outperforms a
feature-based approach and conventional encoder-decoders from the literature.
We use semantic segmentations produced with our model in an automated diagnosis
task and obtain higher accuracies than a baseline approach that employs an SVM
for feature-based segmentation, both using the same segmentation-based
diagnostic features.
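One way to read the multi-resolution fusion step (modification 4) is as averaging per-class probability maps produced at different resolutions after upsampling the coarse map to the fine grid. The nearest-neighbour upsampling and equal weighting below are illustrative assumptions, not the paper's exact fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 4

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Per-class probability maps from two encoder-decoders:
# a fine-resolution one (8x8) and a coarse-resolution one (4x4).
probs_hi = softmax(rng.normal(size=(n_classes, 8, 8)))
probs_lo = softmax(rng.normal(size=(n_classes, 4, 4)))

# Nearest-neighbour upsampling of the coarse map to the fine grid.
probs_lo_up = probs_lo.repeat(2, axis=1).repeat(2, axis=2)

# Fuse by averaging the probability maps, then take the argmax label.
fused = 0.5 * (probs_hi + probs_lo_up)
labels = fused.argmax(axis=0)
print(labels.shape)
```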
Machine learning methods for histopathological image analysis
Abundant accumulation of digital histopathological images has led to the
increased demand for their analysis, such as computer-aided diagnosis using
machine learning techniques. However, digital pathological images and related
tasks have some issues to be considered. In this mini-review, we introduce the
application of digital pathological image analysis using machine learning
algorithms, address some problems specific to such analysis, and propose
possible solutions.
Capturing Global Spatial Context for Accurate Cell Classification in Skin Cancer Histology
The spectacular response observed in clinical trials of immunotherapy in
patients with previously incurable melanoma, a highly aggressive form of skin
cancer, calls for a better understanding of the cancer-immune interface.
Computational pathology provides a unique opportunity to spatially dissect
this interface on digitised pathological slides. Accurate cellular
classification is key to ensuring meaningful results, but is often challenging
even with state-of-the-art machine learning and deep learning methods.
We propose a hierarchical framework, which mirrors the way pathologists
perceive tumour architecture and define tumour heterogeneity to improve cell
classification methods that rely solely on cell nuclei morphology. The SLIC
superpixel algorithm was used to segment and classify tumour regions in low
resolution H&E-stained histological images of melanoma skin cancer to provide a
global context. Classification of superpixels into tumour, stroma, epidermis
and lumen/white space, yielded a 97.7% training set accuracy and 95.7% testing
set accuracy in 58 whole-tumour images of the TCGA melanoma dataset. The
superpixel classification was projected down to high resolution images to
enhance the performance of a single cell classifier, based on cell nuclear
morphological features, and resulted in increasing its accuracy from 86.4% to
91.6%. Furthermore, a voting scheme was proposed to use global context as
biological a priori knowledge, pushing the accuracy further to 92.8%.
This study demonstrates how using the global spatial context can accurately
characterise the tumour microenvironment and allow us to extend significantly
beyond single-cell morphological classification. (Accepted by the MICCAI COMPAY 2018 workshop.)
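The voting idea above, which uses the superpixel's regional class as a prior over the cell classifier's output, can be sketched as a weighted vote; the class set, probabilities, and 50/50 weighting here are illustrative assumptions, not the paper's tuned scheme.

```python
import numpy as np

# Cell classifier's class probabilities for one nucleus over
# (tumour, stroma, epidermis) -- toy numbers based on morphology alone.
cell_probs = np.array([0.40, 0.45, 0.15])

# Prior from the superpixel the cell falls in: the low-resolution
# context says this region is tumour. The weight is an assumption.
context_prior = np.array([0.8, 0.1, 0.1])

# Simple weighted vote between local morphology and global context.
w = 0.5
combined = w * cell_probs + (1 - w) * context_prior
label = int(combined.argmax())  # 0 = tumour
print(label)
```

Here the context vote overturns the morphology-only decision (stroma) in favour of tumour, which is exactly how the global prior can rescue ambiguous single-cell calls.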
Are you sure it's an artifact? Artifact detection and uncertainty quantification in histological images
Modern cancer diagnostics involves extracting tissue specimens from suspicious areas and conducting histotechnical procedures to prepare a digitized glass slide, called a Whole Slide Image (WSI), for further examination. These procedures frequently introduce different types of artifacts in the obtained WSI, and histological artifacts might influence Computational Pathology (CPATH) systems further down the diagnostic pipeline if not excluded or handled. Deep Convolutional Neural Networks (DCNNs) have achieved promising results for the detection of some WSI artifacts; however, they do not incorporate uncertainty in their predictions. This paper proposes an uncertainty-aware Deep Kernel Learning (DKL) model to detect blurry areas and folded tissues, two types of artifacts that can appear in WSIs. The proposed probabilistic model combines a CNN feature extractor and a sparse Gaussian Processes (GPs) classifier, which improves the performance of current state-of-the-art artifact detection DCNNs and provides uncertainty estimates. We achieved 0.996 and 0.938 F1 scores for blur and folded tissue detection on unseen data, respectively. In extensive experiments, we validated the DKL model on unseen data from external independent cohorts with different staining and tissue types, where it outperformed DCNNs. Interestingly, the DKL model is more confident in the correct predictions and less in the wrong ones. The proposed DKL model can be integrated into the preprocessing pipeline of CPATH systems to provide reliable predictions and possibly serve as a quality control tool.
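A minimal way to attach an uncertainty number to a probabilistic artifact detector is the predictive entropy of its output distribution; note this is a generic stand-in for illustration, whereas the actual DKL model derives its uncertainty from the sparse GP posterior.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a predictive distribution; higher = less certain."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Toy predictive distributions over (clean, blur, folded tissue).
confident = np.array([0.96, 0.02, 0.02])
uncertain = np.array([0.40, 0.35, 0.25])

# A quality-control gate could flag tiles whose entropy is too high
# for manual review instead of trusting the artifact label.
print(round(predictive_entropy(confident), 3),
      round(predictive_entropy(uncertain), 3))
```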