Research outputs found

    A novel deep learning-based point-of-care diagnostic method for detecting Plasmodium falciparum with fluorescence digital microscopy

    Background: Malaria remains a major global health problem with a need for improved field-usable diagnostic tests. We have developed a portable, low-cost digital microscope scanner capable of both brightfield and fluorescence imaging. Here, we used the instrument to digitize blood smears and applied deep learning (DL) algorithms to detect Plasmodium falciparum parasites.
    Methods: Thin blood smears (n = 125) were collected from patients with microscopy-confirmed P. falciparum infections in rural Tanzania, prior to and after initiation of artemisinin-based combination therapy. The samples were stained using the 4′,6-diamidino-2-phenylindole (DAPI) fluorogen and digitized using the prototype microscope scanner. Two DL algorithms were trained to detect malaria parasites in the samples, and the results were compared with the visual assessment of both the digitized samples and the Giemsa-stained thick smears.
    Results: Detection of P. falciparum parasites in the digitized thin blood smears was possible both by visual assessment and by DL-based analysis, with a strong correlation between the results (r = 0.99, p < 0.01). A moderately strong correlation was observed between the DL-based thin-smear analysis and the visual thick-smear analysis (r = 0.74, p < 0.01). Low levels of parasites were detected by DL-based analysis on day three following treatment initiation, but a small number of fluorescent signals were also detected in microscopy-negative samples.
    Conclusion: Quantification of P. falciparum parasites in DAPI-stained thin smears is feasible using DL-supported, point-of-care digital microscopy, with a high correlation to visual assessment of samples. Fluorescent signals from artefacts in samples with low infection levels represented the main challenge for the digital analysis, highlighting the importance of minimizing sample contamination. The proposed method could support malaria diagnostics and monitoring of treatment response through automated quantification of parasitaemia and is likely to be applicable also to diagnostics of other Plasmodium species and other infectious diseases.
    Peer reviewed.
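    The agreement between DL-based and visual parasite quantification above is reported as a Pearson correlation coefficient. As a minimal sketch of how such a coefficient is computed, the snippet below correlates two series of per-smear parasite counts; the counts are invented for illustration and are not data from the study.

    ```python
    import numpy as np

    # Hypothetical per-smear parasite counts: DL-based vs. visual assessment.
    dl_counts = np.array([120.0, 85.0, 40.0, 10.0, 3.0, 0.0])
    visual_counts = np.array([115.0, 90.0, 38.0, 12.0, 2.0, 1.0])

    # Pearson correlation coefficient between the two assessments,
    # the same statistic underlying the reported r = 0.99.
    r = np.corrcoef(dl_counts, visual_counts)[0, 1]
    print(round(r, 3))
    ```

    With nearly collinear counts, as in this toy example, r approaches 1; discordant counts from fluorescent artefacts would pull it down.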

    Antibody Supervised Training of a Deep Learning Based Algorithm for Leukocyte Segmentation in Papillary Thyroid Carcinoma

    The quantity of leukocytes in papillary thyroid carcinoma (PTC) potentially has prognostic and treatment-predictive value. Here, we propose a novel method for training a convolutional neural network (CNN) algorithm for segmenting leukocytes in PTCs. Tissue samples from two retrospective PTC cohorts were obtained, and representative tissue slides from twelve patients were stained with hematoxylin and eosin (HE) and digitized. Then, the HE slides were destained, restained immunohistochemically (IHC) with antibodies against the pan-leukocyte CD45 antigen, and scanned again. The two stain pairs of all representative tissue slides were registered, and image tiles of regions of interest were exported. The image tiles were processed, and the 3,3'-diaminobenzidine (DAB)-stained areas representing anti-CD45 expression were turned into binary masks. These binary masks were applied as annotations on the HE image tiles and used in the training of a CNN algorithm. Ten whole-slide images (WSIs) were used for training with five-fold cross-validation, and the remaining two slides were used as an independent test set for the trained model. For visual evaluation, the algorithm was run on all twelve WSIs, and in total 238,144 tiles sized 500 × 500 pixels were analyzed. The trained CNN algorithm had an intersection over union of 0.82 for detection of leukocytes in the HE image tiles when comparing the prediction masks to the ground-truth anti-CD45 masks. We conclude that this method for generating antibody-supervised annotations using destain-restain IHC-guided annotations resulted in highly accurate segmentation of leukocytes in HE tissue images.
    Peer reviewed.
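    The segmentation accuracy above is reported as intersection over union (IoU) between the predicted masks and the anti-CD45 ground-truth masks. A minimal sketch of that metric on two toy binary masks (the masks are invented for illustration, not study data):

    ```python
    import numpy as np

    def iou(pred: np.ndarray, truth: np.ndarray) -> float:
        """Intersection over union of two binary masks."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        # Two empty masks agree perfectly by convention.
        return float(inter / union) if union else 1.0

    # Toy 4x4 masks: prediction and ground truth overlap in 1 of 3 union pixels.
    pred = np.zeros((4, 4), dtype=bool)
    pred[0, 0:2] = True
    truth = np.zeros((4, 4), dtype=bool)
    truth[0, 1:3] = True
    print(iou(pred, truth))  # 1 overlap / 3 union pixels
    ```

    In a whole-slide setting this would typically be accumulated over all exported tiles rather than computed per tile.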

    Deep learning identifies morphological features in breast cancer predictive of cancer ERBB2 status and trastuzumab treatment efficacy

    The treatment of patients with ERBB2 (HER2)-positive breast cancer with anti-ERBB2 therapy is based on the detection of ERBB2 gene amplification or protein overexpression. Machine learning (ML) algorithms can predict the amplification of ERBB2 based on tumor morphological features, but it is not known whether ML-derived features can predict survival and the efficacy of anti-ERBB2 treatment. In this study, we trained a deep learning model with digital images of hematoxylin-eosin (H&E)-stained, formalin-fixed primary breast tumor tissue sections, weakly supervised by ERBB2 gene amplification status. The gene amplification was determined by chromogenic in situ hybridization (CISH). The training data comprised digitized tissue microarray (TMA) samples from 1,047 patients. The correlation between the deep learning-predicted ERBB2 status, which we call the H&E-ERBB2 score, and distant disease-free survival (DDFS) was investigated on a fully independent test set, which included whole-slide tumor images from 712 patients with trastuzumab treatment status available. The area under the receiver operating characteristic curve (AUC) in predicting gene amplification in the test sets was 0.70 (95% CI, 0.63-0.77) on 354 TMA samples and 0.67 (95% CI, 0.62-0.71) on 712 whole-slide images. Among patients with ERBB2-positive cancer treated with trastuzumab, those with a higher-than-median morphology-based H&E-ERBB2 score derived from machine learning had more favorable DDFS than those with a lower score (hazard ratio [HR] 0.37; 95% CI, 0.15-0.93; P = 0.034). A high H&E-ERBB2 score was associated with unfavorable survival in patients with ERBB2-negative cancer as determined by CISH. ERBB2-associated morphology correlated with the efficacy of adjuvant anti-ERBB2 treatment and can contribute treatment-predictive information in breast cancer.
    Peer reviewed.
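    The discrimination of the H&E-ERBB2 score above is summarized by the AUC. One way to see what that number means is the rank formulation: the AUC equals the probability that a randomly chosen amplified case scores higher than a randomly chosen non-amplified case. A sketch on invented scores and labels (not study data):

    ```python
    import numpy as np

    def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
        """AUC via the Mann-Whitney U statistic: the fraction of
        positive/negative pairs ranked correctly (ties count half)."""
        pos = scores[labels == 1]
        neg = scores[labels == 0]
        wins = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return float((wins + 0.5 * ties) / (len(pos) * len(neg)))

    # Hypothetical per-sample H&E-ERBB2 scores and CISH amplification labels.
    scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3, 0.2])
    labels = np.array([1, 1, 0, 1, 0, 0])
    print(roc_auc(scores, labels))  # 8 of 9 pairs correctly ranked
    ```

    On real data a library routine (e.g., scikit-learn's `roc_auc_score`) would be used, but the pairwise form makes the probabilistic interpretation explicit.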

    Outcome and biomarker supervised deep learning for survival prediction in two multicenter breast cancer series

    Funding Information: We would like to thank the Digital Microscopy and Molecular Pathology unit at the Institute for Molecular Medicine Finland FIMM, University of Helsinki, supported by the Helsinki Institute of Life Science and Biocenter Finland, for providing
    Publisher Copyright: © 2022 Journal of Pathology Informatics | Published by Wolters Kluwer - Medknow.
    Background: Prediction of clinical outcome for individual cancer patients is an important step in disease diagnosis and subsequently guides treatment and patient counseling. In this work, we developed and evaluated a joint outcome- and biomarker-supervised (estrogen receptor expression, ERBB2 expression, and gene amplification) multitask deep learning model for prediction of outcome in breast cancer patients in two nationwide multicenter studies in Finland (the FinProg and FinHer studies). Our approach combines deep learning with expert knowledge to provide more accurate, robust, and integrated prediction of breast cancer outcomes.
    Materials and Methods: Using deep learning, we trained convolutional neural networks (CNNs) with digitized tissue microarray (TMA) samples of primary hematoxylin-eosin-stained breast cancer specimens from 693 patients in the FinProg series as input and breast cancer-specific survival as the endpoint. The trained algorithms were tested on 354 TMA patient samples in the same series. An independent set of whole-slide (WS) tumor samples from 674 patients in another multicenter study (FinHer) was used to validate and verify the generalization of the CNN-based outcome prediction by Cox survival regression and the concordance index (c-index). Visual cancer tissue characterization, i.e., number of mitoses, tubules, nuclear pleomorphism, tumor-infiltrating lymphocytes, and necrosis, was performed on TMA samples in the FinProg test set by a pathologist and combined with the deep learning-based outcome prediction in a multitask algorithm.
    Results: The multitask algorithm achieved a hazard ratio (HR) of 2.0 (95% confidence interval [CI] 1.30-3.00), P < 0.001, and a c-index of 0.59 on the test set of 354 FinProg patients, and an HR of 1.7 (95% CI 1.2-2.6), P = 0.003, and a c-index of 0.57 on the WS tumor samples from 674 patients in the independent FinHer series. The multitask CNN remained a statistically independent predictor of survival in both test sets when adjusted for histological grade, tumor size, and axillary lymph node status in multivariate Cox analyses. An improved accuracy (c-index 0.66) was achieved when deep learning was combined with the tissue characteristics assessed visually by a pathologist.
    Conclusions: A multitask deep learning algorithm supervised by both patient outcome and biomarker status learned features in basic tissue morphology predictive of survival in a nationwide, multicenter series of patients with breast cancer. The algorithms generalized to another independent multicenter patient series and to whole-slide breast cancer samples, and provided prognostic information complementary to that of a comprehensive series of established prognostic factors.
    Peer reviewed.
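    The outcome models above are evaluated with the concordance index (c-index), which generalizes the AUC to censored survival data: among pairs of patients where the ordering of survival is known, it is the fraction in which the higher-risk patient experienced the event first. A minimal sketch of Harrell's c-index on invented risk scores and follow-up data (not study data):

    ```python
    import numpy as np

    def concordance_index(risk, time, event):
        """Harrell's c-index: over comparable pairs (the earlier time is
        an observed event, not censoring), the fraction where the model
        assigned higher risk to the patient who died earlier."""
        concordant = 0.0
        comparable = 0
        n = len(risk)
        for i in range(n):
            for j in range(n):
                # Pair (i, j) is comparable only if i has an observed
                # event strictly before j's event or censoring time.
                if event[i] and time[i] < time[j]:
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5  # tied risks count half
        return concordant / comparable

    # Hypothetical CNN risk scores, follow-up times (years), event flags
    # (1 = breast cancer death observed, 0 = censored).
    risk = np.array([2.1, 1.5, 0.7, 0.3])
    time = np.array([1.0, 2.0, 4.0, 5.0])
    event = np.array([1, 1, 0, 1])
    print(concordance_index(risk, time, event))  # perfectly ranked -> 1.0
    ```

    A c-index of 0.5 corresponds to random ranking, which is why values such as 0.59 and 0.66 are read as modest but real prognostic signal; production analyses would use an optimized implementation such as the one in the lifelines package.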
