
    Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue

    Histological staining is a vital step used to diagnose various diseases and has been used for more than a century to provide contrast to tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labor-intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabeled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep learning-based framework which generates virtually stained images from label-free tissue, where different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information at its input: (1) autofluorescence images of the label-free tissue sample, and (2) a digital staining matrix which represents the desired microscopic map of different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabeled kidney tissue sections to generate micro-structured combinations of Hematoxylin and Eosin (H&E), Jones silver stain, and Masson's Trichrome stain. Using a single network, this approach multiplexes virtual staining of label-free tissue with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created on the same tissue cross-section, which is currently not feasible with standard histochemical staining methods. (Comment: 19 pages, 5 figures, 2 tables.)
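    The architecture described above, a single network conditioned on both the autofluorescence input and a user-defined digital staining matrix, can be illustrated with a minimal sketch. This is not the authors' implementation: the channel counts, the encoder-decoder depth, and the choice to concatenate the staining matrix as extra input channels are assumptions made for illustration only (PyTorch assumed).

```python
import torch
import torch.nn as nn

class VirtualStainingNet(nn.Module):
    """Toy sketch: one network maps label-free autofluorescence images plus a
    per-pixel digital staining matrix to a virtually stained RGB image.
    Channel counts and depth are illustrative assumptions only."""

    def __init__(self, af_channels: int = 2, n_stains: int = 3):
        super().__init__()
        in_ch = af_channels + n_stains  # staining matrix concatenated as extra channels
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB output in [0, 1]
        )

    def forward(self, autofluorescence: torch.Tensor, stain_matrix: torch.Tensor) -> torch.Tensor:
        x = torch.cat([autofluorescence, stain_matrix], dim=1)
        return self.decoder(self.encoder(x))

# Example: request different stains in user-defined regions of the same section.
net = VirtualStainingNet(af_channels=2, n_stains=3)
af = torch.rand(1, 2, 256, 256)        # autofluorescence channels
stains = torch.zeros(1, 3, 256, 256)   # per-pixel one-hot stain selection
stains[:, 0, :, :128] = 1.0            # left half: stain 0 (e.g. H&E)
stains[:, 2, :, 128:] = 1.0            # right half: stain 2 (e.g. Trichrome)
rgb = net(af, stains)                  # (1, 3, 256, 256)
```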

    Differently stained whole slide image registration technique with landmark validation

    Abstract. One of the most important capabilities in digital pathology is the visual comparison and fusion of successive, differently stained tissue sections, also called slides. This requires aligning the different images to a common frame of reference, the ground truth. Current slide scanners produce images rich in informative layers of digitised tissue, stored at high resolution as whole slide images. However, only a limited number of automatic alignment tools can handle such large images precisely within an acceptable processing time. This study proposes a deep learning solution for histopathology image registration. The main focus is on understanding landmark validation and the impact of stain augmentation on differently stained histopathology images. The developed registration method is also compared with state-of-the-art algorithms that operate on whole slide images in digital pathology. Previous work on histopathology, digital pathology, whole slide imaging and image registration, colour staining, data augmentation, and deep learning is referenced in this study. The goal is to develop a learning-based registration framework specifically for high-resolution histopathology images. Whole slide tissue sample images at up to 40x magnification are used. The images are organised into sets of consecutive, differently stained sections, and the aim is to register them based only on the visible tissue while ignoring the background. Significant structures in the tissue are marked with landmarks. The quality measures include, for example, the relative target registration error (rTRE), the structural similarity index metric (SSIM), visual evaluation, landmark-based evaluation, matching points, and image details. These results are comparable and can be reused in future research and in the development of new tools, and they are expected to show how theory and practice combine in whole slide image registration challenges. The DeepHistReg algorithm is studied as the basis for the stain colour feature augmentation-based registration tool developed here. Matlab and Aperio ImageScope are used to annotate and validate the images, and Python is used to implement the algorithm of this new registration tool. As cancer is a serious disease worldwide regardless of age or lifestyle, it is important to develop systems that experts can use when working with patient data. There is still much to improve in the field of digital pathology, and this study is one step in that direction.
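    The relative target registration error (rTRE) used above as a quality measure has a standard landmark-based definition: the Euclidean distance between a registered landmark and its annotated target, normalised by a reference length such as the image diagonal. A minimal NumPy sketch of that evaluation step; the array shapes, the diagonal normaliser, and the example coordinates are assumptions rather than details from the thesis:

```python
import numpy as np

def rtre(registered_landmarks: np.ndarray, target_landmarks: np.ndarray,
         image_shape: tuple) -> np.ndarray:
    """Relative target registration error for (N, 2) landmark arrays in pixels,
    normalised by the image diagonal."""
    distances = np.linalg.norm(registered_landmarks - target_landmarks, axis=1)
    diagonal = np.hypot(image_shape[0], image_shape[1])
    return distances / diagonal

# Example: evaluate a registration on three annotated structures (made-up points).
registered = np.array([[102.0, 250.5], [400.2, 89.0], [1310.0, 770.4]])
target = np.array([[100.0, 252.0], [405.0, 90.0], [1300.0, 768.0]])
errors = rtre(registered, target, image_shape=(2048, 1536))
print("median rTRE:", np.median(errors), "max rTRE:", errors.max())
```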

    Simultaneous automatic scoring and co-registration of hormone receptors in tumour areas in whole slide images of breast cancer tissue slides

    Aims: Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is the scoring of multiple immunohistochemical markers in order to predict the patient's response to targeted therapies. Automated serial slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localised scoring of Estrogen Receptor and Progesterone Receptor (ER/PR) in breast cancer core biopsies using whole slide images. Methods and Results: Regions of tumour in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut immunohistochemically stained sections were scored manually, before being digitally scanned and then exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement. Conclusions: This proof-of-principle study indicates that slide registration can be used as a basis for automating the downstream analysis of clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post registration.
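    As a rough illustration of the agreement analysis reported above, the sketch below computes the correlation between manual and automated Allred scores and the fraction of cases within one point of agreement. The Allred scale (0-8, the sum of a proportion and an intensity score) is standard, but the scores, the choice of Pearson correlation, and the sample size here are purely illustrative and not taken from the study:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical manual vs. automated Allred scores (0-8) for a few ER-stained biopsies.
manual = np.array([8, 7, 0, 6, 8, 3, 7, 5])
automated = np.array([8, 6, 0, 6, 7, 4, 7, 3])

correlation, _ = pearsonr(manual, automated)
within_one_point = np.mean(np.abs(manual - automated) <= 1)

print(f"correlation: {correlation:.3f}")
print(f"within one point of agreement: {within_one_point:.0%}")
```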

    Patch-based nonlinear image registration for gigapixel whole slide images

    Image registration of whole slide histology images allows the fusion of fine-grained information, such as different immunohistochemical stains, from neighboring tissue slides. Traditionally, pathologists fuse this information by looking at one slide at a time in sequence. If the slides are digitized and accurately aligned at cell level, automatic analysis can be used to ease the pathologist's work. However, the size of these images exceeds the memory capacity of regular computers. Methods: We address the challenge of combining a global motion model, which takes the physical cutting process of the tissue into account, with image data that is not simultaneously globally available. Typical approaches either reduce the amount of data to be processed or partition the data into smaller chunks to be processed separately. Our novel method first registers the complete images at low resolution with a nonlinear deformation model and later refines this result on patches by using a second nonlinear registration on each patch. Finally, the deformations computed on all patches are combined by interpolation to form one globally smooth nonlinear deformation. The NGF (normalized gradient fields) distance measure is used to handle multi-stain images. Results: The method is applied to ten whole slide image pairs of human lung cancer data. The alignment of 85 corresponding structures is measured by comparing manual segmentations from neighboring slides. Their offset improves significantly, by at least 15%, compared to the low-resolution nonlinear registration. Conclusion/Significance: The proposed method significantly improves the accuracy of multi-stain registration, which allows us to compare different antibodies at cell level.
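    The pipeline above, a coarse nonlinear registration at low resolution followed by patchwise nonlinear refinement and the combination of patch deformations into one smooth field, can be outlined in a runnable Python sketch. The register_nonlinear function below is a placeholder that returns a zero displacement field; a real implementation would minimise the NGF distance, and the patch handling here is simplified to non-overlapping tiles and additive composition rather than the interpolation scheme of the paper:

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def register_nonlinear(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Stand-in for a nonlinear registration step (e.g. one driven by the NGF
    distance). Returns a displacement field of shape (2, H, W); here all zeros
    so the sketch runs without a real registration backend."""
    return np.zeros((2,) + fixed.shape)

def warp(image: np.ndarray, field: np.ndarray) -> np.ndarray:
    rows, cols = np.meshgrid(np.arange(image.shape[0]),
                             np.arange(image.shape[1]), indexing="ij")
    return map_coordinates(image, [rows + field[0], cols + field[1]], order=1)

def patch_based_registration(fixed, moving, scale=0.25, patch=256):
    # 1. Coarse nonlinear registration on low-resolution copies of both slides.
    coarse = register_nonlinear(zoom(fixed, scale), zoom(moving, scale))
    # Upsample the coarse field to full resolution and rescale the displacements.
    field = np.stack([zoom(d, 1 / scale, order=1) / scale for d in coarse])
    moving_coarse = warp(moving, field)
    refined = np.zeros_like(field)
    # 2. Refine on patches of the coarsely aligned image (non-overlapping tiles
    #    here; the published method blends patch deformations by interpolation
    #    into one globally smooth field).
    for r in range(0, fixed.shape[0], patch):
        for c in range(0, fixed.shape[1], patch):
            sl = (slice(r, r + patch), slice(c, c + patch))
            refined[(slice(None),) + sl] = register_nonlinear(fixed[sl], moving_coarse[sl])
    # 3. Combine coarse and patchwise deformations (additively, as a simplification)
    #    and warp the moving slide.
    return warp(moving, field + refined)

aligned = patch_based_registration(np.random.rand(1024, 1024), np.random.rand(1024, 1024))
```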

    Tissue Phenomics for prognostic biomarker discovery in low- and intermediate-risk prostate cancer

    Tissue Phenomics is the discipline of mining tissue images to identify patterns that are related to clinical outcome, providing potential prognostic and predictive value. This involves the discovery process from assay development, image analysis, and data mining to the final interpretation and validation of the findings. Importantly, this process is not linear but allows backward steps and optimization loops over multiple sub-processes. We provide a detailed description of the Tissue Phenomics methodology while exemplifying each step on the application of prostate cancer recurrence prediction. In particular, we automatically identified tissue-based biomarkers having significant prognostic value for low- and intermediate-risk prostate cancer patients (Gleason scores 6-7b) after radical prostatectomy. We found that promising phenes were related to CD8(+) and CD68(+) cells in the microenvironment of cancerous glands in combination with the local micro-vascularization. Recurrence prediction based on the selected phenes yielded accuracies up to 83%, thereby clearly outperforming prediction based on the Gleason score. Moreover, we compared different machine learning algorithms to combine the most relevant phenes, resulting in increased accuracies of 88% for tumor progression prediction. These findings will be of potential use for future prognostic tests for prostate cancer patients and provide a proof-of-principle of the Tissue Phenomics approach.
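    The last step described above, combining the most relevant phenes with different machine learning algorithms to predict recurrence, resembles a standard feature-based classification benchmark. A hedged scikit-learn sketch on synthetic data; the feature set, the two classifiers, and the cross-validation setup are illustrative assumptions, not the study's protocol:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic "phenes": e.g. CD8+/CD68+ densities near cancerous glands and local
# micro-vessel density, for 200 hypothetical patients.
X = rng.normal(size=(200, 3))
# Hypothetical recurrence labels loosely driven by the first two features.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```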

    Quantitative methods for MRI-microscopy comparisons

    Magnetic resonance imaging (MRI) is a powerful tool for the in-vivo diagnosis and assessment of neurodegenerative disorders. While numerous MRI techniques have yielded parameters sensitive to microstructural changes in the brain, MRI parameters are infamously non-specific: different changes in brain tissue structure may result in the same change in MRI signal, making it challenging to pinpoint the exact source of these signal changes. Biophysical modelling aims to achieve better biological specificity by relating diffusion MRI (dMRI) signals to biologically interpretable tissue parameters. Yet the fitting of these complex models, with many parameters, to unremarkable dMRI signals is challenging, often resulting in a degeneracy where multiple combinations of parameters explain the dMRI signal equally well. Conversely, microscopy offers high biological specificity by targeting specific aspects of microstructure. By acquiring and relating MRI and microscopy metrics for the same tissue section, one can therefore leverage microscopy's specificity to elucidate the microstructural basis of MRI signal change. However, microscopy is incredibly time-intensive, restricting examination to a few tissue sections at a time. A significant gap remains in the lack of a standardised pipeline for MRI-microscopy comparisons, with existing methods necessitating substantial manual intervention. This thesis delves into methods that enhance MRI-microscopy comparisons. Specifically, we introduce an automated pipeline that rapidly and reliably extracts multiple quantitative microscopy parameters from histologically stained sections. Utilising this pipeline alongside high-quality MRI-microscopy co-registrations, we performed whole-slide voxelwise comparisons between multimodal MRI- and microscopy-derived metrics. Finally, we present an alternative analysis method designed to relate degenerate biophysical model parameters to a continuous metric (e.g. from microscopy). Overall, the techniques outlined in this thesis are intended to facilitate a more precise interpretation of microstructural change from MRI parameters and encourage a more systematic approach to processing microscopy data when relating them to MRI data.
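    A whole-slide voxelwise comparison of co-registered MRI and microscopy maps, as described above, ultimately reduces to correlating two aligned parameter images within a tissue mask. The sketch below shows only that final step; the parameter maps, the mask, and the use of Spearman correlation are hypothetical stand-ins, and the co-registration itself is assumed to have been performed already:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical co-registered parameter maps on the same grid: an MRI-derived
# metric (e.g. a diffusion parameter) and a microscopy-derived metric
# (e.g. stained-area fraction), plus a tissue mask.
mri_map = np.random.rand(512, 512)
microscopy_map = 0.6 * mri_map + 0.4 * np.random.rand(512, 512)
tissue_mask = np.ones((512, 512), dtype=bool)

# Voxelwise association restricted to tissue voxels.
rho, p = spearmanr(mri_map[tissue_mask], microscopy_map[tissue_mask])
print(f"voxelwise Spearman rho = {rho:.2f} (p = {p:.1e})")
```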

    Artificial intelligence in histopathology image analysis for cancer precision medicine

    In recent years, there have been rapid advancements in the field of computational pathology. This has been enabled through the adoption of digital pathology workflows that generate digital images of histopathological slides, the publication of large data sets of these images, and improvements in computing infrastructure. Objectives in computational pathology can be subdivided into two categories: first, the automation of routine workflows that would otherwise be performed by pathologists, and second, the addition of novel capabilities. This thesis focuses on the development, application, and evaluation of methods in this second category, specifically the prediction of gene expression from pathology images and the registration of pathology images to one another. In Study I, we developed a computationally efficient cluster-based technique to perform transcriptome-wide predictions of gene expression in prostate cancer from H&E-stained whole-slide images (WSIs). The suggested method outperforms several baseline methods and is non-inferior to single-gene CNN predictions, while reducing the computational cost by a factor of approximately 300. We included 15,586 protein-coding transcripts in the analysis and predicted their expression with different modelling approaches from the WSIs. In a cross-validation, 6,618 of these predictions were significantly associated with the RNA-seq expression estimates, with FDR-adjusted p-values <0.001. Upon validation of these 6,618 expression predictions in a held-out test set, the association could be confirmed for 5,419 (81.9%). Furthermore, we demonstrated that it is feasible to predict the prognostic cell-cycle progression score with a Spearman correlation to the RNA-seq score of 0.527 [0.357, 0.665]. The objective of Study II is the investigation of attention layers in the context of multiple-instance learning for regression tasks, exemplified by a simulation study and gene expression prediction. We find that for gene expression prediction the compared methods are not distinguishable in their performance, which indicates that attention mechanisms may not be superior to weakly supervised learning in this context. Study III describes the results of the ACROBAT 2022 WSI registration challenge, which we organised in conjunction with the MICCAI 2022 conference. Participating teams were ranked on the median 90th percentile of distances between registered and annotated target landmarks. Median 90th percentiles for the eight teams eligible for ranking on the test set of 303 WSI pairs ranged from 60.1 µm to 15,938.0 µm. The best-performing method therefore scores slightly below the 67.0 µm median 90th percentile of distances between the first and second annotator. Study IV describes the data set that we published to facilitate the ACROBAT challenge. The data set is publicly available through the Swedish National Data Service SND and consists of 4,212 WSIs from 1,153 breast cancer patients. Study V is an example of the application of WSI registration in computational pathology. In this study, we investigate the possibility of registering invasive cancer annotations from H&E to KI67 WSIs and subsequently training cancer detection models on them. To this end, we compare the performance of models optimised with registered annotations to the performance of models optimised with annotations generated directly for the KI67 WSIs. The data set consists of 272 female breast cancer cases, including an internal test set of 54 cases. We find that in this test set the two models are not distinguishable in terms of performance, while there are small differences in model calibration.
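    The ranking metric of Study III, the median over cases of the 90th percentile of distances between registered and annotated target landmarks, can be made concrete in a few lines. The landmark arrays and micrometre coordinates below are invented, and the exact percentile and aggregation conventions of the challenge may differ:

```python
import numpy as np

def case_p90(registered_um: np.ndarray, target_um: np.ndarray) -> float:
    """90th percentile of Euclidean landmark distances (micrometres) for one WSI pair."""
    distances = np.linalg.norm(registered_um - target_um, axis=1)
    return float(np.percentile(distances, 90))

# Hypothetical landmark sets for three WSI pairs (coordinates in micrometres).
rng = np.random.default_rng(1)
cases = []
for _ in range(3):
    target = rng.uniform(0, 20000, (30, 2))                        # annotated landmarks
    registered = target + rng.normal(scale=40, size=target.shape)  # registered landmarks
    cases.append((registered, target))

score = np.median([case_p90(reg, tgt) for reg, tgt in cases])
print(f"median 90th percentile distance: {score:.1f} um")
```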

    Translating AI to digital pathology workflow: Dealing with scarce data and high variation by minimising complexities in data and models

    The recent transition from conventional pathology to digital pathology using Whole Slide Images (WSIs) has opened the door for Artificial Intelligence (AI) in the pathology workflow, and machine learning and deep learning have attracted strong interest in medical image processing. However, WSIs differ from generic medical images. They are complex images that can reveal information supporting diagnoses ranging from cancer to underlying conditions not discovered in other medical investigations. Such investigations demand expert knowledge and considerable time, applying different stains to the tissue and comparing the resulting WSIs. These differences set WSIs apart from the images that general machine learning methods for medical image processing are designed for. Co-analysing multi-stained WSIs, high variation of WSIs across sites, and the lack of labelled data are the key issues that directly influence the development of machine learning models that support pathologists in their investigations. Most state-of-the-art machine learning approaches, however, cannot be applied in a general clinical workflow without substantial compute power, expert knowledge, and time. This thesis therefore explores avenues to translate such computationally and time-intensive models into a clinical workflow. Co-analysing multi-stained WSIs requires registering differently stained WSIs to one another; achieving high registration precision requires exploring both rigid and non-rigid transformations, and the non-rigid transformation calls for complex deep learning approaches. Using super-convergence on a small Convolutional Neural Network model, it is possible to achieve high precision compared with larger auto-encoders and other state-of-the-art models. High variation of WSIs from different sites heavily affects the predictions of machine learning models. The thesis presents an approach that adapts a pre-trained model using only a small number of samples from the new site, so that re-training larger deep learning models is not required, saving expert re-labelling time and computational power. Finally, the lack of labelled data is one of the main issues in training any supervised machine learning or deep learning model. Generative Adversarial Networks (GANs) can be used to mitigate this issue, but GANs are time-consuming and computationally expensive and thus not applicable in a general clinical workflow. This thesis therefore presents an approach using a simpler GAN that can generate accurate labelled samples. The synthetic data are used to train a classifier, and the thesis demonstrates that the resulting predictive model achieves higher accuracy in the test environment. Overall, this thesis demonstrates that machine learning and deep learning models can be applied in a clinical workflow without excessive demands on expert time and computing power.
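    The site-adaptation idea above, reusing a model trained on one site and updating it with only a handful of labelled samples from a new site instead of retraining a large network, can be sketched as freezing the feature extractor and briefly training a small head. The tiny CNN, the number of adaptation samples, and the one-cycle learning-rate schedule (the usual way super-convergence is realised) are assumptions for this PyTorch sketch, not the thesis implementation:

```python
import torch
import torch.nn as nn

# Stand-in "pre-trained" patch classifier (in practice: a model trained on the source site).
features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 2)
model = nn.Sequential(features, head)

# Freeze the feature extractor; only the small head is adapted to the new site.
for p in features.parameters():
    p.requires_grad = False

# A handful of labelled patches from the new site (hypothetical data).
x_new = torch.rand(16, 3, 64, 64)
y_new = torch.randint(0, 2, (16,))

optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.1, total_steps=30)
loss_fn = nn.CrossEntropyLoss()

for step in range(30):  # short adaptation run with a one-cycle schedule
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    optimizer.step()
    scheduler.step()
```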

    Non-rigid registration on histopathological breast cancer images using deep learning

    Cancer is one of the leading causes of death in the world; in particular, breast cancer is the most frequent cancer in women. Early detection of this disease can significantly increase the survival rate. However, the diagnosis is difficult and time-consuming, and many artificial intelligence applications have therefore been deployed to speed up this procedure. In this MSc thesis, we propose an automatic framework that could help pathologists improve and speed up the first step of the diagnosis of cancer. It will facilitate the cross-slide analysis of different tissue samples extracted from a selected area where cancer could be present, allowing either pathologists to easily compare tissue structures and assess the severity of the disease, or automatic analysis algorithms to work with several stains at once. The proposed method aims to align pairs of high-resolution histological images, curving and stretching part of the tissue by applying a deformation field to one image of the pair.
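    The core operation of the framework above, curving and stretching the tissue by applying a deformation field to one image of the pair, can be shown in a few lines with PyTorch's grid_sample. The image, the smooth sinusoidal displacement field, and the bilinear resampling are placeholders; the thesis's network that predicts the field is not reproduced here:

```python
import torch
import torch.nn.functional as F

# Hypothetical moving image (one-channel patch from a stained slide).
moving = torch.rand(1, 1, 256, 256)

# Identity sampling grid in normalised [-1, 1] coordinates, shape (1, H, W, 2).
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=moving.shape, align_corners=False)

# A smooth, small displacement field standing in for a network's prediction.
displacement = 0.02 * torch.sin(4 * 3.14159 * grid)

# Warp the moving image with the deformed grid (bilinear resampling).
warped = F.grid_sample(moving, grid + displacement, align_corners=False)
print(warped.shape)  # torch.Size([1, 1, 256, 256])
```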

    The ACROBAT 2022 Challenge: Automatic Registration Of Breast Cancer Tissue

    The alignment of tissue between histopathological whole-slide images (WSI) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis. Therefore, the current state-of-the-art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, including 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can lead to highly accurate registration performances and identify covariates that impact performance across methods. These results establish the current state-of-the-art in WSI registration and guide researchers in selecting and developing methods.