
    Hierarchical Vision Transformers for Context-Aware Prostate Cancer Grading in Whole Slide Images

    Vision Transformers (ViTs) have ushered in a new era in computer vision, showcasing unparalleled performance in many challenging tasks. However, their practical deployment in computational pathology has largely been constrained by the sheer size of whole slide images (WSIs), which results in lengthy input sequences. Transformers faced a similar limitation when applied to long documents, and Hierarchical Transformers were introduced to circumvent it. Given the analogous challenge with WSIs and their inherent hierarchical structure, Hierarchical Vision Transformers (H-ViTs) emerge as a promising solution in computational pathology. This work delves into the capabilities of H-ViTs, evaluating their efficiency for prostate cancer grading in WSIs. Our results show that they achieve competitive performance against existing state-of-the-art solutions. Comment: Accepted at the Medical Imaging meets NeurIPS 2023 workshop.
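The hierarchical idea described above can be sketched minimally: patch embeddings are first aggregated into region representations, and regions into a single slide representation, so no single self-attention stage has to handle the full WSI-length sequence. In this illustrative stand-in, mean pooling replaces the transformer stages and all numbers are made up.

```python
# Minimal sketch of hierarchical aggregation (mean pooling stands in for
# the patch-level and region-level transformer stages of an H-ViT).

def mean_pool(vectors):
    """Average a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def hierarchical_aggregate(regions):
    """regions: list of regions, each a list of patch embeddings.
    Stage 1 pools patches into one token per region; stage 2 pools
    region tokens into a single slide-level representation."""
    region_tokens = [mean_pool(patches) for patches in regions]
    return mean_pool(region_tokens)

# Toy slide: two regions with hypothetical 2-d patch embeddings.
slide = [
    [[1.0, 0.0], [3.0, 2.0]],   # region 1: two patch embeddings
    [[0.0, 4.0]],               # region 2: one patch embedding
]
slide_embedding = hierarchical_aggregate(slide)
```

A real H-ViT would use attention-based aggregation at each level, but the sequence-shortening structure is the same.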

    Comparison of Consecutive and Re-stained Sections for Image Registration in Histopathology

    Purpose: In digital histopathology, virtual multi-staining is important for diagnosis and biomarker research. Additionally, it provides accurate ground-truth for various deep-learning tasks. Virtual multi-staining can be obtained using different stains for consecutive sections or by re-staining the same section. Both approaches require image registration to compensate for tissue deformations, but little attention has been devoted to comparing their accuracy. Approach: We compare variational image registration of consecutive and re-stained sections and analyze the effect of the image resolution, which influences accuracy and required computational resources. We present a new hybrid dataset of re-stained and consecutive sections (HyReCo, 81 slide pairs, approx. 3000 landmarks) that we made publicly available, and compare its image registration results to the automatic non-rigid histological image registration (ANHIR) challenge data (230 consecutive slide pairs). Results: We obtain a median landmark error after registration of 7.1 µm (HyReCo) and 16.0 µm (ANHIR) between consecutive sections. Between re-stained sections, the median registration error is 2.3 µm and 0.9 µm in the two subsets of the HyReCo dataset. We observe that deformable registration leads to lower landmark errors than affine registration in both cases, though the effect is smaller in re-stained sections. Conclusion: Deformable registration of consecutive and re-stained sections is a valuable tool for the joint analysis of different stains. Significance: While the registration of re-stained sections allows nucleus-level alignment, which enables a direct analysis of interacting biomarkers, consecutive sections only allow the transfer of region-level annotations. The latter can be achieved at low computational cost using coarser image resolutions. Comment: submitted, data available at https://dx.doi.org/10.21227/pzj5-bs6
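The median landmark error reported above is a simple, robust registration metric: the median Euclidean distance between corresponding landmarks, converted to micrometers via the pixel spacing. A minimal sketch, with hypothetical landmark coordinates and spacing:

```python
# Sketch: median landmark error between paired landmarks after registration.
# Coordinates (pixels) and spacing (um/pixel) below are illustrative only.
import math
import statistics

def median_landmark_error(fixed, registered, spacing_um=1.0):
    """Median Euclidean distance (in micrometers) between paired landmarks."""
    dists = [
        math.hypot(fx - rx, fy - ry) * spacing_um
        for (fx, fy), (rx, ry) in zip(fixed, registered)
    ]
    return statistics.median(dists)

fixed = [(100.0, 200.0), (150.0, 80.0), (40.0, 40.0)]
registered = [(103.0, 204.0), (150.0, 86.0), (41.0, 40.0)]
err = median_landmark_error(fixed, registered, spacing_um=0.5)  # 2.5 um
```

The median (rather than the mean) keeps a few badly registered landmarks from dominating the score.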

    Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology

    Stain variation is a phenomenon observed when tissue slides stained at distinct pathology laboratories exhibit similar but not identical color appearance. Due to this color shift between laboratories, convolutional neural networks (CNNs) trained with images from one lab often underperform on unseen images from other labs. Several techniques have been proposed to reduce the generalization error, mainly grouped into two categories: stain color augmentation and stain color normalization. The former simulates a wide variety of realistic stain variations during training, producing stain-invariant CNNs. The latter aims to match training and test color distributions in order to reduce stain variation. For the first time, we compared some of these techniques and quantified their effect on CNN classification performance using a heterogeneous dataset of hematoxylin and eosin histopathology images from 4 organs and 9 pathology laboratories. Additionally, we propose a novel unsupervised method to perform stain color normalization using a neural network. Based on our experimental results, we provide practical guidelines on how to use stain color augmentation and stain color normalization in future computational pathology applications. Comment: Accepted in the Medical Image Analysis journal.
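In its simplest form, stain color augmentation perturbs each color channel with a random gain and bias at training time, so the network sees many plausible stain appearances of the same tissue. The sketch below works on RGB values in [0, 1]; the function name and the perturbation range are illustrative, not from the paper.

```python
# Illustrative stain color augmentation: a random per-channel gain and bias
# applied to a patch (list of RGB pixels in [0, 1]), clipped back to range.
import random

def augment_stain(pixels, sigma=0.05, rng=None):
    """Apply one random per-channel gain/bias to every pixel of a patch."""
    rng = rng or random.Random()
    gains = [1.0 + rng.uniform(-sigma, sigma) for _ in range(3)]
    biases = [rng.uniform(-sigma, sigma) for _ in range(3)]
    return [
        tuple(min(1.0, max(0.0, c * g + b)) for c, g, b in zip(px, gains, biases))
        for px in pixels
    ]

patch = [(0.8, 0.4, 0.6), (0.7, 0.3, 0.5)]
augmented = augment_stain(patch, rng=random.Random(0))
```

Note that one gain/bias triple is drawn per patch, not per pixel, mimicking a consistent stain shift across the whole image.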

    HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images

    We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view are used to feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer, and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions, as well as with a recently published multi-resolution model for histopathology image segmentation.
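The concentric-patch input described above can be sketched as follows: both patches share the same pixel dimensions and the same center, but the context patch covers a field of view `factor` times larger by subsampling. This toy version uses nested lists and simple striding; the function name is hypothetical and real pipelines would read different pyramid levels of the WSI instead.

```python
# Sketch of concentric multi-resolution patch extraction: a high-resolution
# target patch and a subsampled context patch, same pixel size, same center.

def extract_concentric(image, cy, cx, size, factor):
    """Return (target_patch, context_patch), each size x size pixels,
    centered on (cy, cx); the context patch keeps every `factor`-th pixel
    over a field of view `factor` times larger."""
    half = size // 2
    target = [row[cx - half:cx + half] for row in image[cy - half:cy + half]]
    chalf = half * factor
    context = [
        image[y][cx - chalf:cx + chalf:factor]
        for y in range(cy - chalf, cy + chalf, factor)
    ]
    return target, context

# Toy 32x32 "slide" where each pixel holds its row index.
image = [[r for _ in range(32)] for r in range(32)]
target, context = extract_concentric(image, cy=16, cx=16, size=8, factor=2)
```

Keeping the two patches concentric is what lets the hooking mechanism align a center crop of the context branch's feature maps with the target branch pixel-for-pixel.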

    Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images

    Automated classification of histopathological whole-slide images (WSI) of breast tissue requires analysis at very high resolutions with a large contextual area. In this paper, we present context-aware stacked convolutional neural networks (CNN) for classification of breast WSIs into normal/benign, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first train a CNN using high pixel resolution patches to capture cellular level information. The feature responses generated by this model are then fed as input to a second CNN, stacked on top of the first. Training of this stacked architecture with large input patches enables learning of fine-grained (cellular) details and global interdependence of tissue structures. Our system is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast tissue specimens. The system achieves an AUC of 0.962 for the binary classification of non-malignant and malignant slides and obtains a three-class accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC, demonstrating its potential for routine diagnostics.
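The stacked, two-stage structure above can be illustrated with stand-ins: a first-stage model reduces each high-resolution patch to a compact response, and a second-stage model classifies the resulting grid of responses, seeing far more tissue context per input. Both "models" below are hypothetical placeholders for trained CNNs, and the threshold is invented.

```python
# Sketch of a context-aware stacked pipeline with toy stand-in models.

def patch_model(patch):
    """Stand-in for the first CNN: mean intensity as a one-number response."""
    flat = [v for row in patch for v in row]
    return sum(flat) / len(flat)

def slide_model(response_grid, threshold=0.5):
    """Stand-in for the stacked second CNN: classify from the response grid."""
    flat = [v for row in response_grid for v in row]
    return "malignant" if max(flat) > threshold else "benign"

# 2x2 grid of 4x4 patches; one patch has a much stronger response (made up).
patches = [[[[0.1] * 4] * 4, [[0.2] * 4] * 4],
           [[[0.9] * 4] * 4, [[0.1] * 4] * 4]]
grid = [[patch_model(p) for p in row] for row in patches]
label = slide_model(grid)
```

The key point is that the second stage operates on a grid of first-stage outputs, so its effective field of view spans many patches at once.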

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.

    Comparison of Different Methods for Tissue Segmentation in Histopathological Whole-Slide Images

    Tissue segmentation is an important pre-requisite for efficient and accurate diagnostics in digital pathology. However, it is well known that whole-slide scanners can fail to detect all tissue regions, for example because of the tissue type or weak staining, as their tissue detection algorithms are not sufficiently robust. In this paper, we introduce two different convolutional neural network architectures for whole slide image segmentation to accurately identify the tissue sections. We also compare the algorithms to a published traditional method. We collected 54 whole slide images with differing stains and tissue types from three laboratories to validate our algorithms. We show that while the two methods do not differ significantly, they outperform their traditional counterpart (Jaccard index of 0.937 and 0.929 vs. 0.870, p < 0.01). Comment: Accepted for poster presentation at the IEEE International Symposium on Biomedical Imaging (ISBI) 2017.
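The Jaccard index used above to compare the segmentation methods is the intersection over union of the foreground pixels of a predicted and a reference mask. A minimal sketch on flat binary masks (the example masks are made up):

```python
# Sketch: Jaccard index (intersection over union) between two binary masks.

def jaccard(mask_a, mask_b):
    """Jaccard index between two binary masks given as flat 0/1 lists."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 or b == 1)
    return inter / union if union else 1.0

pred = [1, 1, 1, 0, 0, 1]
ref  = [1, 1, 0, 0, 1, 1]
score = jaccard(pred, ref)  # 3 shared / 5 in union = 0.6
```

Unlike plain pixel accuracy, the score ignores the (typically dominant) background that both masks agree on, which is why it is preferred for tissue-vs-background evaluation.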