
    Computational Histological Staining and Destaining of Prostate Core Biopsy RGB Images with Generative Adversarial Neural Networks

    Histopathology tissue samples are widely available in two states: paraffin-embedded unstained and non-paraffin-embedded stained whole slide RGB images (WSRI). Hematoxylin and eosin (H&E) is one of the principal stains in histology but suffers from several shortcomings related to tissue preparation, staining protocols, slow turnaround and human error. We report two novel approaches for training machine learning models for the computational H&E staining and destaining of prostate core biopsy RGB images. The staining model uses a conditional generative adversarial network that learns hierarchical non-linear mappings between whole slide RGB image (WSRI) pairs of prostate core biopsies before and after H&E staining. The trained staining model can then generate computationally H&E-stained prostate core WSRIs using previously unseen non-stained biopsy images as input. The destaining model, by learning mappings between an H&E-stained WSRI and a non-stained WSRI of the same biopsy, can computationally destain previously unseen H&E-stained images. Structural and anatomical details of prostate tissue, and the colors, shapes, geometries and locations of nuclei, stroma, vessels, glands and other cellular components, were reproduced by both models with structural similarity indices of 0.68 (staining) and 0.84 (destaining). The proposed staining and destaining models can enable computational H&E staining and destaining of WSRI biopsies without additional equipment or devices. Comment: Accepted for publication at the 2018 IEEE International Conference on Machine Learning and Applications (ICMLA).
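
    As a rough illustration of the image-to-image translation setup described above, the sketch below shows a pix2pix-style conditional GAN training step in PyTorch for unstained-to-H&E patch translation. The tiny networks, loss weighting and random tensors are placeholders, not the authors' architecture or data.

        # Minimal sketch of a conditional GAN (pix2pix-style) training step for
        # unstained -> H&E-stained patch translation. Illustrative assumptions only.
        import torch
        import torch.nn as nn

        class TinyGenerator(nn.Module):
            """Toy encoder-decoder standing in for a U-Net-like generator."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
                )
            def forward(self, x):
                return self.net(x)

        class TinyDiscriminator(nn.Module):
            """PatchGAN-style discriminator over (input, output) image pairs."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(32, 1, 4, stride=2, padding=1),
                )
            def forward(self, x, y):
                return self.net(torch.cat([x, y], dim=1))

        G, D = TinyGenerator(), TinyDiscriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

        unstained = torch.rand(1, 3, 64, 64)   # placeholder WSRI patch pair
        stained = torch.rand(1, 3, 64, 64)

        # Discriminator step: real (unstained, stained) pairs vs. generated pairs.
        pred_real = D(unstained, stained)
        pred_fake = D(unstained, G(unstained).detach())
        d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
                 bce(pred_fake, torch.zeros_like(pred_fake))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: adversarial loss plus L1 reconstruction to the real stain.
        fake = G(unstained)
        pred = D(unstained, fake)
        g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, stained)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()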

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but remains relatively understudied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images using local entropy and Otsu thresholding, and calculi were recognized with PCA-SVM based on the texture features of prostatic calculi. The SVM classifier achieved an average run time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can readily recognize the concentric structure and its visual features. Therefore, this method is effective for the automated recognition of prostatic calculi.
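
    A minimal sketch of the kind of pipeline described above, assuming scikit-image and scikit-learn: lumen candidates from local entropy plus Otsu thresholding, followed by a PCA-SVM classifier on texture features. The sample image, feature vectors and labels below are placeholders rather than the authors' data.

        # Sketch: local-entropy + Otsu lumen extraction, then PCA-SVM classification.
        import numpy as np
        from skimage import data, color
        from skimage.filters.rank import entropy
        from skimage.filters import threshold_otsu
        from skimage.morphology import disk
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Lumen candidate mask: local entropy followed by Otsu thresholding.
        gray = color.rgb2gray(data.astronaut())        # stand-in for a histology tile
        gray_u8 = (gray * 255).astype(np.uint8)
        ent = entropy(gray_u8, disk(5))
        lumen_mask = ent < threshold_otsu(ent)         # low-entropy regions ~ lumina
        print("lumen candidate pixels:", int(lumen_mask.sum()))

        # Texture features per candidate region would be computed here; random
        # placeholders with binary labels (calculus vs. not) are used instead.
        X = np.random.rand(200, 64)
        y = np.random.randint(0, 2, 200)

        clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
        clf.fit(X[:150], y[:150])
        print("test accuracy:", clf.score(X[150:], y[150:]))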

    Learning Deep Neural Networks for Enhanced Prostate Histological Image Analysis

    In recent years, deep convolutional neural networks (CNNs) have shown promise for improving prostate cancer diagnosis by enabling quantitative histopathology through digital pathology. However, a number of factors limit the widespread adoption and clinical utility of deep learning for digital pathology. One of these limitations is the requirement for large labelled training datasets, which are expensive to construct due to the limited availability of the requisite expertise. Additionally, digital pathology applications typically require the digitisation of histological slides at high magnifications. This process can be challenging, especially when digitising large histological slides such as prostatectomies. This work studies and addresses these issues in two important applications of digital pathology: prostate nuclei detection and cell type classification. We study the performance of CNNs at different magnifications and demonstrate that it is possible to perform nuclei detection in low-magnification prostate histopathology using CNNs with minimal loss in accuracy. We then study the training of prostate nuclei detectors in the small-data setting and demonstrate that, although it is possible to train nuclei detectors with minimal data, the models will be sensitive to hyperparameter choice and therefore may not generalise well. Instead, we show that pre-training the CNNs with colon histology data makes them more robust to hyperparameter choice. We then study CNN performance for prostate cell type classification using supervised, transfer and semi-supervised learning in the small-data setting. Our results show that transfer learning can be detrimental to performance, but semi-supervised learning is able to provide significant improvements to the learning curve, allowing the training of neural networks with modest amounts of labelled data. We then propose a novel semi-supervised learning method called Deeply-supervised Exemplar CNNs and demonstrate its ability to improve the cell type classifier learning curves at a much better rate than previous semi-supervised neural network methods.
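
    The pre-training idea above can be sketched roughly as follows in PyTorch: initialise a small patch CNN from (hypothetical) colon-histology weights, then fine-tune only the classification head on a modest number of labelled prostate patches. The network, checkpoint name and tensors are illustrative assumptions, not the thesis' models.

        # Sketch: pre-train on colon histology, fine-tune the head on prostate patches.
        import torch
        import torch.nn as nn

        class PatchCNN(nn.Module):
            """Small patch classifier (nucleus vs. background)."""
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, n_classes)
            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = PatchCNN()
        # Hypothetical checkpoint from colon histology pre-training:
        # model.load_state_dict(torch.load("colon_pretrained.pt"))

        # Fine-tune on a few labelled prostate patches: freeze the feature extractor
        # and train only the classification head.
        for p in model.features.parameters():
            p.requires_grad = False
        optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        patches = torch.rand(8, 3, 32, 32)        # placeholder prostate patches
        labels = torch.randint(0, 2, (8,))
        loss = criterion(model(patches), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()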

    Identification of Individual Glandular Regions Using LCWT and Machine Learning Techniques

    A new approach for the segmentation of gland units in histological images is proposed with the aim of contributing to improved prostate cancer diagnosis. Clustering methods in several colour spaces are applied to each sample in order to generate a binary mask of the different tissue components. From the mask of lumen candidates, the Locally Constrained Watershed Transform (LCWT) is applied as a gland segmentation technique not previously used on this type of image. 500 random gland candidates, both benign and pathological, are selected to evaluate the LCWT technique, yielding a Dice coefficient of 0.85. Several shape and textural descriptors, in combination with contextual features and a fractal analysis, are applied in a novel way on different colour spaces, giving a total of 297 features to discern between artefacts and true glands. The most relevant features are then selected by an exhaustive statistical analysis in terms of independence between variables and dependence with the class. 3,200 artefacts, 3,195 benign glands and 3,000 pathological glands are obtained from a data set of 1,468 images at 10x magnification. A careful data-partition strategy is implemented to robustly address the classification problem between artefacts and glands. Both linear and non-linear approaches are considered using machine learning techniques based on Support Vector Machines (SVM) and feedforward neural networks, achieving values of sensitivity, specificity and accuracy of 0.92, 0.97 and 0.95, respectively.
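
    A minimal sketch of the gland-candidate step, using scikit-image's standard marker-controlled watershed as a stand-in for the LCWT (which is not available in common libraries): lumen candidates seed markers that are grown over the image gradient to delimit gland candidates. The sample image and thresholds are placeholders.

        # Sketch: lumen-seeded, marker-controlled watershed for gland candidates.
        from skimage import data, color, filters, measure
        from skimage.segmentation import watershed

        rgb = data.astronaut()                       # stand-in for a 10x histology tile
        gray = color.rgb2gray(rgb)

        # Lumen candidates: bright regions via Otsu thresholding.
        lumen_mask = gray > filters.threshold_otsu(gray)
        markers = measure.label(lumen_mask)

        # Grow each lumen marker over the image gradient to delimit a gland candidate.
        gradient = filters.sobel(gray)
        labels = watershed(gradient, markers=markers, mask=gray > 0.1)

        # Each labelled region is a gland candidate, to be described by shape, texture,
        # contextual and fractal features and classified (SVM / feedforward network).
        print("gland candidates:", labels.max())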

    Systems pathology by multiplexed immunohistochemistry and whole-slide digital image analysis

    The paradigm of molecular histopathology is shifting from single-marker immunohistochemistry towards multiplexed detection of markers to better understand the complex pathological processes. However, there are no systems allowing multiplexed IHC (mIHC) with high-resolution whole-slide tissue imaging and analysis that also provide feasible throughput for routine use. We present an mIHC platform combining fluorescent and chromogenic staining with automated whole-slide imaging and integrated whole-slide image analysis, enabling simultaneous detection of six protein markers and nuclei, and automatic quantification and classification of hundreds of thousands of cells in situ in formalin-fixed paraffin-embedded tissues. In the first proof-of-concept, we detected immune cells at cell-level resolution (n = 128,894 cells) in human prostate cancer and analysed T cell subpopulations in different tumour compartments (epithelium vs. stroma). In the second proof-of-concept, we demonstrated automatic classification of epithelial cell populations (n = 83,558) and glands (benign vs. cancer) in prostate cancer with simultaneous analysis of androgen receptor (AR) and alpha-methylacyl-CoA racemase (AMACR) expression at cell-level resolution. We conclude that the open-source combination of 8-plex mIHC detection, whole-slide image acquisition and analysis provides a robust tool for quantitative, spatially resolved whole-slide tissue cytometry directly in formalin-fixed human tumour tissues, for improved characterization of histology and the tumour microenvironment.
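
    As a rough sketch of the cell-level, spatially resolved readout described above: given a per-cell table of marker intensities and a tissue-compartment label produced by whole-slide image analysis, cells can be gated into phenotypes and counted per compartment. The marker names, thresholds and table below are illustrative assumptions, not the platform's actual output format.

        # Sketch: gate per-cell marker intensities into phenotypes, count per compartment.
        import pandas as pd

        cells = pd.DataFrame({
            "CD3":         [0.9, 0.1, 0.8, 0.05],
            "CD8":         [0.7, 0.0, 0.1, 0.02],
            "compartment": ["stroma", "epithelium", "stroma", "epithelium"],
        })

        thr = 0.5  # illustrative positivity threshold per marker
        cells["phenotype"] = "other"
        cells.loc[cells["CD3"] > thr, "phenotype"] = "CD3+ T cell"
        cells.loc[(cells["CD3"] > thr) & (cells["CD8"] > thr), "phenotype"] = "CD8+ T cell"

        # Spatially resolved counts: phenotype frequencies per tumour compartment.
        print(cells.groupby(["compartment", "phenotype"]).size())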

    Local structure prediction for gland segmentation

    We present a method to segment individual glands in colon histopathology images. Segmentation based on sliding-window classification does not usually make explicit use of information about the spatial configuration of class labels. To improve on this, we propose to segment glands using a structure-learning approach in which the local label configurations (structures) are considered when training a support vector machine classifier. The proposed method not only distinguishes foreground from background, but also distinguishes between different local structures in pixel labelling, e.g. locations between adjacent glands and locations far from glands. It directly predicts these label configurations at test time. Experiments demonstrate that it produces better segmentations than when the local label structure is not used to train the classifier.
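
    One way to sketch the structure-learning idea, under the assumption that local label patches are clustered into a small dictionary of representative structures which an SVM then predicts from window features (the paper's exact formulation may differ):

        # Sketch: predict local label structures (patches) rather than single pixel labels.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import LinearSVC

        n_windows, feat_dim, patch = 500, 40, 5
        X = np.random.rand(n_windows, feat_dim)                # window features (placeholder)
        Y = (np.random.rand(n_windows, patch * patch) > 0.5)   # local label patches (placeholder)

        # Dictionary of representative label structures (e.g. gland interior,
        # gland boundary, inter-gland gap, background).
        kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Y.astype(float))
        structure_ids = kmeans.labels_

        # Train an SVM mapping window features to structure indices.
        clf = LinearSVC().fit(X, structure_ids)

        # At test time, predict a structure per window and paste its label patch back.
        pred_structures = clf.predict(X[:3])
        pred_patches = (kmeans.cluster_centers_[pred_structures] > 0.5).reshape(-1, patch, patch)
        print(pred_patches.shape)   # (3, 5, 5) predicted local label configurations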