    Predicting breast tumor proliferation from whole-slide images : the TUPAC16 challenge

    Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis focused only on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs), and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on prediction of tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. To ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene-expression-based PAM50 proliferation score from the WSI. The best-performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa of κ = 0.567, 95% CI [0.464, 0.671], between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651], with the ground truth. This was the first comparison study to investigate tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and the weakly labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
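
    As a rough illustration of the two evaluation metrics named above, the sketch below computes a quadratic-weighted Cohen's kappa (task 1) and a Spearman rank correlation (task 2) with scikit-learn and SciPy. The score arrays are hypothetical placeholders, not challenge data.

        # Minimal sketch of the TUPAC16 evaluation metrics (illustrative data only).
        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.metrics import cohen_kappa_score

        # Task 1: ordinal mitotic scores (1-3) predicted by an algorithm vs. pathologist ground truth.
        y_true_mitotic = np.array([1, 2, 3, 2, 1, 3])
        y_pred_mitotic = np.array([1, 2, 2, 2, 1, 3])
        kappa = cohen_kappa_score(y_true_mitotic, y_pred_mitotic, weights="quadratic")

        # Task 2: continuous PAM50 proliferation scores, compared by rank correlation.
        y_true_pam50 = np.array([0.12, 0.85, 0.40, 0.66, 0.23])
        y_pred_pam50 = np.array([0.20, 0.79, 0.35, 0.70, 0.30])
        rho, p_value = spearmanr(y_true_pam50, y_pred_pam50)

        print(f"quadratic-weighted kappa = {kappa:.3f}, Spearman rho = {rho:.3f}")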

    Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer

    Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.
    Objective: To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer, and to compare it with pathologists' diagnoses in a diagnostic setting.
    Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain the likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).
    Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation.
    Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, assessed using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.
    Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).
    Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
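
    Both the algorithm outputs and the pathologists' five-level confidence ratings were compared to the reference standard with ROC analysis. The sketch below shows how such a slide-level AUC comparison could be computed; the labels and scores are hypothetical placeholders, not challenge results.

        # Minimal sketch of slide-level ROC/AUC comparison (illustrative data only).
        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Reference standard: 1 = slide contains metastasis, 0 = negative slide.
        y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])

        # Algorithm output: per-slide probability of containing metastasis.
        algo_scores = np.array([0.92, 0.10, 0.75, 0.64, 0.30, 0.05, 0.88, 0.22])

        # Pathologist output: 5-level confidence rating (definitely normal = 1 ...
        # definitely tumor = 5) treated as an ordinal score for ROC analysis.
        pathologist_scores = np.array([5, 1, 4, 3, 2, 1, 5, 2])

        print("algorithm AUC  :", roc_auc_score(y_true, algo_scores))
        print("pathologist AUC:", roc_auc_score(y_true, pathologist_scores))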

    Adaptive Region-Based Approaches for Cellular Segmentation of Bright-Field Microscopy Images

    Microscopy image processing is an emerging and quickly growing field of medical imaging research. Recent advancements in technology, including higher computational power, larger and cheaper storage, and faster and more efficient data acquisition devices such as whole-slide imaging scanners, have contributed to recent advances in microscopy image processing research. Most methods in this area either automatically process images to make it easier for pathologists to direct their focus to the important regions of the image, or aim to automate the entire task of the expert, including processing and classifying images or tissues to reach a disease diagnosis. This dissertation consists of four frameworks for processing microscopy images. All of them include segmentation methods, either as the entire proposed framework or as the initial stage preceding feature extraction and classification. Specifically, the first proposed framework is a general segmentation method that works on histology images from different tissues and segments relatively solid nuclei, while the next three frameworks work on cervical microscopy images, segmenting cervical nuclei/cells. Two of these frameworks focus on cervical tissue segmentation and classification using histology images, and the last is a comprehensive segmentation framework that segments overlapping cervical cells in cervical cytology Pap smear images. One commonality among these frameworks is that they all operate at the region level, using different region features to segment regions and later expand, split, or refine the segmented regions to produce the final segmentation output. Moreover, all proposed frameworks run considerably faster than other methods on the same datasets. Finally, providing ground truth for datasets used in the training phase of microscopy image processing algorithms is time-consuming, complicated, and costly. Therefore, I designed the frameworks in such a way that they set most (if not all) of their parameters adaptively based on the image being processed at the time. All of the included frameworks either do not depend on training datasets at all (the first three of the four discussed frameworks) or need only very small training datasets to learn or set a few parameters.
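
    As a toy illustration of the region-level, adaptively parameterized style of segmentation described above (not the dissertation's actual algorithms), the sketch below derives its threshold from each image itself and then keeps or discards candidate regions based on their own region features. The area and eccentricity bounds are hypothetical.

        # Minimal sketch of adaptive, region-level nucleus segmentation with scikit-image.
        import numpy as np
        from skimage import io, color, filters, measure, morphology

        def segment_nuclei(path, min_area=50, max_area=5000):
            img = io.imread(path)
            gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img
            thresh = filters.threshold_otsu(gray)      # threshold derived from this image (adaptive)
            mask = gray < thresh                       # bright-field: nuclei darker than background
            mask = morphology.remove_small_objects(mask, min_size=min_area)
            labels = measure.label(mask)
            keep = np.zeros_like(labels)
            # Region-level filtering: keep regions whose own features look nucleus-like.
            for region in measure.regionprops(labels):
                if min_area <= region.area <= max_area and region.eccentricity < 0.95:
                    keep[labels == region.label] = region.label
            return keep

        # labels = segment_nuclei("histology_tile.png")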

    Unbiased Estimation of Cell Number Using the Automatic Optical Fractionator

    A novel stereology approach, the automatic optical fractionator, is presented for obtaining unbiased and efficient estimates of the number of cells in tissue sections. Used in combination with existing segmentation algorithms and ordinary immunostaining methods, it produces automatic estimates of cell number from extended-depth-of-field images built from three-dimensional volumes of tissue (disector stacks). The automatic optical fractionator is more accurate, 100% objective, and 8-10 times faster than the manual optical fractionator. An example of the automatic fractionator is provided for counts of immunostained neurons in the neocortex of a genetically modified mouse model of neurodegeneration. Evidence is presented for the often overlooked prerequisite that accurate counting by the optical fractionator requires a thin focal plane generated by a lens with high optical resolution.
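
    The counts produced this way feed into the standard fractionator estimator, which scales the number of cells counted in the sampled disectors by the inverse sampling fractions. The sketch below shows that calculation with illustrative sampling fractions; the paper's exact parameters are not reproduced here.

        # Minimal sketch of the standard optical fractionator estimate (illustrative values only).
        def fractionator_estimate(total_cells_counted,
                                  section_sampling_fraction,    # ssf: sections sampled / total sections
                                  area_sampling_fraction,       # asf: counting-frame area / x-y step area
                                  thickness_sampling_fraction): # tsf: disector height / section thickness
            """Unbiased estimate of total cell number: N = sum(Q-) * 1/ssf * 1/asf * 1/tsf."""
            return (total_cells_counted
                    / section_sampling_fraction
                    / area_sampling_fraction
                    / thickness_sampling_fraction)

        # Example: 400 cells counted with ssf = 1/6, asf = 0.04, tsf = 0.5 -> 120,000 cells.
        print(fractionator_estimate(400, 1 / 6, 0.04, 0.5))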