48 research outputs found

    Patch-based nonlinear image registration for gigapixel whole slide images

    Image registration of whole slide histology images allows the fusion of fine-grained information, such as different immunohistochemical stains, from neighboring tissue slides. Traditionally, pathologists fuse this information by looking at one slide at a time. If the slides are digitized and accurately aligned at the cell level, automatic analysis can ease the pathologist's work. However, the size of these images exceeds the memory capacity of regular computers. Methods: We address the challenge of combining a global motion model, which takes the physical cutting process of the tissue into account, with image data that is not simultaneously available in its entirety. Typical approaches either reduce the amount of data to be processed or partition the data into smaller chunks that are processed separately. Our novel method first registers the complete images at low resolution with a nonlinear deformation model and then refines this result on patches by running a second nonlinear registration on each patch. Finally, the deformations computed on all patches are combined by interpolation to form one globally smooth nonlinear deformation. The NGF distance measure is used to handle multistain images. Results: The method is applied to ten whole slide image pairs of human lung cancer data. The alignment of 85 corresponding structures is measured by comparing manual segmentations from neighboring slides. Their offset improves significantly, by at least 15%, compared to the low-resolution nonlinear registration. Conclusion/Significance: The proposed method significantly improves the accuracy of multistain registration, which allows us to compare different antibodies at the cell level.
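    For context, the normalized gradient fields (NGF) distance mentioned above is commonly written as follows (a standard formulation after Haber and Modersitzki; the paper's exact variant and edge parameter may differ):

    \[
      n_{\varepsilon}(I,x) = \frac{\nabla I(x)}{\sqrt{\|\nabla I(x)\|^{2} + \varepsilon^{2}}},
      \qquad
      \mathcal{D}^{\mathrm{NGF}}(R,T) = \int_{\Omega} 1 - \bigl\langle n_{\varepsilon}(R,x),\, n_{\varepsilon}(T,x) \bigr\rangle^{2} \, dx ,
    \]

    where the edge parameter \(\varepsilon > 0\) suppresses low-contrast noise, so the alignment of differently stained slides is driven by local gradient orientation rather than stain-dependent intensity.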

    Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks

    Manual counting of mitotic tumor cells in tissue sections constitutes one of the strongest prognostic markers for breast cancer. This procedure, however, is time-consuming and error-prone. We developed a method to automatically detect mitotic figures in breast cancer tissue sections based on convolutional neural networks (CNNs). Application of CNNs to hematoxylin and eosin (H&E) stained histological tissue sections is hampered by: (1) noisy and expensive reference standards established by pathologists, (2) lack of generalization due to staining variation across laboratories, and (3) the high computational requirements needed to process gigapixel whole-slide images (WSIs). In this paper, we present a method to train and evaluate CNNs that specifically addresses these issues in the context of mitosis detection in breast cancer WSIs. First, by combining image analysis of mitotic activity in phosphohistone-H3 (PHH3) restained slides with registration, we built a reference standard for mitosis detection in entire H&E WSIs requiring minimal manual annotation effort. Second, we designed a data augmentation strategy that creates diverse and realistic H&E stain variations by modifying the hematoxylin and eosin color channels directly. Using it during training, combined with network ensembling, resulted in a stain-invariant mitosis detector. Third, we applied knowledge distillation to reduce the computational requirements of the mitosis detection ensemble with a negligible loss of performance. The system was trained on a single-center cohort and evaluated on an independent multicenter cohort from The Cancer Genome Atlas on the three tasks of the Tumor Proliferation Assessment Challenge (TUPAC). We obtained performance among the top three methods for most of the challenge tasks. Comment: Accepted to appear in IEEE Transactions on Medical Imaging.
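    To illustrate the idea of augmenting the hematoxylin and eosin channels directly, here is a minimal sketch (not the authors' exact pipeline): it deconvolves an RGB tile into H&E optical densities with scikit-image, jitters each channel, and converts back. The perturbation ranges alpha_range and beta_range are placeholder assumptions.

    # Sketch of H&E stain augmentation via color deconvolution (illustrative only;
    # not the authors' exact implementation). Requires numpy and scikit-image.
    import numpy as np
    from skimage.color import rgb2hed, hed2rgb

    def augment_he_stain(rgb_tile, alpha_range=0.05, beta_range=0.01, rng=None):
        """Randomly scale and shift the hematoxylin and eosin channels of an RGB tile.

        alpha_range and beta_range are placeholder magnitudes, not the paper's values.
        """
        rng = np.random.default_rng() if rng is None else rng
        hed = rgb2hed(rgb_tile)              # RGB -> (hematoxylin, eosin, DAB) densities
        for c in (0, 1):                     # perturb the H and E channels only
            alpha = 1.0 + rng.uniform(-alpha_range, alpha_range)  # multiplicative jitter
            beta = rng.uniform(-beta_range, beta_range)           # additive jitter
            hed[..., c] = hed[..., c] * alpha + beta
        return np.clip(hed2rgb(hed), 0.0, 1.0)  # back to RGB in [0, 1]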

    Learning where to see: a novel attention model for automated immunohistochemical scoring

    Estimating the over-amplification of human epidermal growth factor receptor 2 (HER2) in invasive breast cancer (BC) is regarded as a significant predictive and prognostic marker. We propose a novel deep reinforcement learning (DRL) based model that treats immunohistochemical (IHC) scoring of HER2 as a sequential learning task. For a given image tile sampled from a multi-resolution gigapixel whole slide image (WSI), the model learns to sequentially identify some of the diagnostically relevant regions of interest (ROIs) by following a parameterized policy. The selected ROIs are processed by recurrent and residual convolutional networks to learn the discriminative features for different HER2 scores and to predict the next location, without having to process all the sub-image patches of a given tile, mimicking a histopathologist, who would not usually analyse every part of the slide at the highest magnification. The proposed model incorporates a task-specific regularization term and an inhibition-of-return mechanism to prevent the model from revisiting previously attended locations. We evaluated our model on two IHC datasets: a publicly available dataset from the HER2 scoring challenge contest and another dataset consisting of WSIs of gastroenteropancreatic neuroendocrine tumor sections stained with the Glo1 marker. We demonstrate that the proposed model outperforms other methods based on state-of-the-art deep convolutional networks. To the best of our knowledge, this is the first study using DRL for IHC scoring, and it could potentially lead to wider use of DRL in computational pathology, reducing the computational burden of analysing large multi-gigapixel histology images.
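    To make the sequential ROI selection with inhibition of return concrete, the following minimal sketch greedily picks grid locations from a policy's scores while masking previously visited cells; the policy function, grid size, and step count are hypothetical placeholders, not the authors' DRL architecture.

    # Minimal sketch of sequential ROI selection with inhibition of return
    # (hypothetical policy and features; not the paper's DRL model).
    import numpy as np

    def select_rois(policy_scores_fn, tile_features, n_steps=6):
        """Pick n_steps grid locations, never revisiting an attended cell.

        policy_scores_fn(features, visited) is assumed to return one score per
        grid cell; tile_features is a (rows, cols, feat) array for one tile.
        """
        rows, cols = tile_features.shape[:2]
        visited = np.zeros((rows, cols), dtype=bool)
        trajectory = []
        for _ in range(n_steps):
            scores = policy_scores_fn(tile_features, visited)
            scores = np.where(visited, -np.inf, scores)  # inhibition of return: mask past ROIs
            r, c = np.unravel_index(np.argmax(scores), (rows, cols))
            visited[r, c] = True
            trajectory.append((r, c))
        return trajectory

    # Example with a dummy policy that scores each cell by mean feature magnitude.
    rois = select_rois(lambda feats, visited: feats.mean(axis=-1),
                       np.random.rand(8, 8, 32))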

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.