181 research outputs found
Machine learning methods for histopathological image analysis
Abundant accumulation of digital histopathological images has led to the
increased demand for their analysis, such as computer-aided diagnosis using
machine learning techniques. However, digital pathological images and related
tasks have some issues to be considered. In this mini-review, we introduce the
application of digital pathological image analysis using machine learning
algorithms, address some problems specific to such analysis, and propose
possible solutions.
Comment: 23 pages, 4 figures
Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images
Automated classification of histopathological whole-slide images (WSI) of
breast tissue requires analysis at very high resolutions with a large
contextual area. In this paper, we present context-aware stacked convolutional
neural networks (CNN) for classification of breast WSIs into normal/benign,
ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first
train a CNN using high pixel resolution patches to capture cellular level
information. The feature responses generated by this model are then fed as
input to a second CNN, stacked on top of the first. Training of this stacked
architecture with large input patches enables learning of fine-grained
(cellular) details and global interdependence of tissue structures. Our system
is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast
tissue specimens. The system achieves an AUC of 0.962 for the binary
classification of non-malignant and malignant slides and obtains a three class
accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC,
demonstrating its potential for routine diagnostics.
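The stacking idea described above — a patch-level CNN whose feature responses are fed as input to a second, context-level CNN — can be sketched as follows. This is an illustrative stand-in, not the authors' architecture: `patch_cnn` here is just pooled pixel statistics in place of a trained network, and the grid size, feature dimension, and classifier weights are arbitrary.

```python
import numpy as np

def patch_cnn(patch):
    """Stand-in for the first, cell-level CNN: reduces a high-resolution
    patch to an 8-dim feature vector (here just pooled pixel statistics)."""
    return patch.reshape(8, -1).mean(axis=1)

def context_cnn(feature_grid):
    """Stand-in for the stacked, context-level CNN: consumes the grid of
    patch features covering a large contextual area and returns class
    probabilities for normal/benign, DCIS, and IDC (toy weights)."""
    pooled = feature_grid.mean(axis=(0, 1))          # global context pooling
    W = 0.01 * np.arange(24).reshape(3, 8)           # hypothetical classifier
    logits = W @ pooled
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                           # softmax over 3 classes

# A whole-slide region tiled into a 4x4 grid of 256x256 patches.
region = np.random.default_rng(0).random((4, 4, 256, 256))
features = np.stack([[patch_cnn(region[i, j]) for j in range(4)]
                     for i in range(4)])             # (4, 4, 8) feature grid
probs = context_cnn(features)
```

Training the second network on the first network's feature grid is what lets the system see fine-grained cellular detail and wide tissue context at once without a single impractically large input.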
Towards a Visual-Language Foundation Model for Computational Pathology
The accelerated adoption of digital pathology and advances in deep learning
have enabled the development of powerful models for various pathology tasks
across a diverse array of diseases and patient cohorts. However, model training
is often difficult due to label scarcity in the medical domain and the model's
usage is limited by the specific task and disease for which it is trained.
Additionally, most models in histopathology leverage only image data, a stark
contrast to how humans teach each other and reason about histopathologic
entities. We introduce CONtrastive learning from Captions for Histopathology
(CONCH), a visual-language foundation model developed using diverse sources of
histopathology images, biomedical text, and notably over 1.17 million
image-caption pairs via task-agnostic pretraining. Evaluated on a suite of 13
diverse benchmarks, CONCH can be transferred to a wide range of downstream
tasks involving either or both histopathology images and text, achieving
state-of-the-art performance on histology image classification, segmentation,
captioning, and text-to-image and image-to-text retrieval. CONCH represents a
substantial leap over concurrent visual-language pretrained systems for
histopathology, with the potential to directly facilitate a wide array of
machine learning-based workflows requiring minimal or no further supervised
fine-tuning.
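Task-agnostic pretraining on image-caption pairs is typically driven by a symmetric contrastive (InfoNCE, CLIP-style) objective that pulls matching image and text embeddings together and pushes mismatched pairs apart. A minimal numpy sketch of that loss family — not CONCH's exact recipe, and the temperature value is an assumption:

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of image-caption pairs:
    each image's positive is its own caption (the diagonal of the
    pairwise-similarity matrix); all other captions are negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # pairwise cosine similarities
    labels = np.arange(len(logits))               # matching pairs on diagonal

    def ce(l):                                    # cross-entropy per row
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    return 0.5 * (ce(logits) + ce(logits.T))      # image->text + text->image

# Perfectly aligned pairs drive the loss toward zero.
emb = np.eye(4)
aligned_loss = contrastive_loss(emb, emb)
```

After such pretraining, zero-shot transfer to classification or retrieval reduces to comparing an image embedding against embeddings of candidate text prompts.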
A Multi-resolution Model for Histopathology Image Classification and Localization with Multiple Instance Learning
Histopathological images provide rich information for disease diagnosis.
Large numbers of histopathological images have been digitized into high
resolution whole slide images, opening opportunities in developing
computational image analysis tools to reduce pathologists' workload and
potentially improve inter- and intra-observer agreement. Most previous work on
whole slide image analysis has focused on classification or segmentation of
small pre-selected regions-of-interest, which requires fine-grained annotation
and is non-trivial to extend for large-scale whole slide analysis. In this
paper, we propose a multi-resolution multiple instance learning model that
leverages saliency maps to detect suspicious regions for fine-grained grade
prediction. Instead of relying on expensive region- or pixel-level annotations,
our model can be trained end-to-end with only slide-level labels. The model is
developed on a large-scale prostate biopsy dataset containing 20,229 slides
from 830 patients. The model achieved 92.7% accuracy and a Cohen's kappa of
81.8% for benign, low-grade (i.e. Grade Group 1), and high-grade (i.e. Grade
Group >= 2)
prediction, an area under the receiver operating characteristic curve (AUROC)
of 98.2% and an average precision (AP) of 97.4% for differentiating malignant
and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for
cancer detection on an external dataset.
Comment: 9 pages, 6 figures
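Training with only slide-level labels is commonly handled with attention-based multiple instance learning, where each patch (instance) gets a learned attention weight and the weights double as a saliency map over the slide. The sketch below illustrates that general mechanism, not the paper's exact multi-resolution model; all weight matrices are random hypothetical stand-ins.

```python
import numpy as np

def attention_mil(instance_feats, V, w, clf_w):
    """Attention-based MIL pooling: score each patch, softmax the scores
    into attention weights, pool to a slide-level embedding, and emit a
    malignancy probability. Only a slide-level label is needed to train;
    the attention weights serve as a per-patch saliency map."""
    scores = np.tanh(instance_feats @ V) @ w      # attention logit per patch
    attn = np.exp(scores - scores.max())
    attn = attn / attn.sum()                      # softmax attention weights
    slide_emb = attn @ instance_feats             # weighted-average pooling
    logit = float(slide_emb @ clf_w)
    return 1.0 / (1.0 + np.exp(-logit)), attn

rng = np.random.default_rng(1)
patch_feats = rng.standard_normal((12, 8))        # 12 patches from one slide
prob, attn = attention_mil(patch_feats,
                           rng.standard_normal((8, 4)),   # attention weights
                           rng.standard_normal(4),
                           rng.standard_normal(8))        # slide classifier
```

Because the gradient flows through the attention weights, high-attention patches tend to align with the suspicious regions a pathologist would inspect, without any region-level annotation.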
3E-Net: Entropy-Based Elastic Ensemble of Deep Convolutional Neural Networks for Grading of Invasive Breast Carcinoma Histopathological Microscopic Images
Automated grading systems using deep convolutional neural networks (DCNNs) have proven their capability and potential to distinguish between different breast cancer grades using digitized histopathological images. In digital breast pathology, it is vital to measure how confident a DCNN is in its grading using a machine-confidence metric, especially given major computer vision challenges such as the high visual variability of the images. Such a quantitative metric can be employed not only to improve the robustness of automated systems but also to assist medical professionals in identifying complex cases. In this paper, we propose an Entropy-based Elastic Ensemble of DCNN models (3E-Net) for grading invasive breast carcinoma microscopy images, which provides an initial stage of explainability through an entropy-based, uncertainty-aware mechanism. Our model is designed to (1) exclude images on which the ensemble is highly uncertain and (2) dynamically grade the remaining images using the confident models in the ensemble architecture. We evaluated two variations of 3E-Net on an invasive breast carcinoma dataset and achieved grading accuracies of 96.15% and 99.50%.
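The entropy-gating idea — grade images where the ensemble is confident and abstain (refer to an expert) where predictive entropy is high — can be sketched as follows. The threshold value and the simple averaging-then-gating rule are illustrative assumptions, not 3E-Net's exact mechanism.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a predictive distribution; higher = less confident."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def entropy_gated_ensemble(member_probs, entropy_threshold):
    """Average the ensemble members' class probabilities per image, then
    keep only images whose predictive entropy is below the threshold;
    the rest are flagged for expert review."""
    mean_p = member_probs.mean(axis=0)            # (n_images, n_classes)
    keep = predictive_entropy(mean_p) <= entropy_threshold
    return mean_p.argmax(axis=1), keep

member_probs = np.array([                          # 2 members x 2 images x 3 grades
    [[0.95, 0.03, 0.02], [0.34, 0.33, 0.33]],
    [[0.90, 0.05, 0.05], [0.30, 0.36, 0.34]],
])
preds, keep = entropy_gated_ensemble(member_probs, entropy_threshold=0.7)
```

Here the first image is graded confidently while the second, with a near-uniform (high-entropy) prediction, is excluded — the quantitative confidence metric the abstract argues for.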
- …