
    Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer

    Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.
    Objective: To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer, and to compare it with pathologists' diagnoses in a diagnostic setting.
    Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases, verified by immunohistochemical staining, was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The corresponding glass slides of the same test set were also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands, who rated the likelihood of nodal metastases for each slide in a flexible 2-hour session simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).
    Exposures: Deep learning algorithms submitted as part of a challenge competition, or pathologist interpretation.
    Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, using receiver operating characteristic (ROC) curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.
    Results: The area under the ROC curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in the diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC comparable with that of the pathologist interpreting the slides without time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).
    Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
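The slide-level comparison above rests on ROC analysis: each algorithm assigns every whole-slide image a metastasis score, and the AUC equals the probability that a randomly chosen metastatic slide scores higher than a randomly chosen normal one. A minimal sketch of that pairwise interpretation, with invented scores and labels (none taken from the study):

```python
def auc(labels, scores):
    """Empirical AUC: fraction of (positive, negative) pairs in which the
    positive example scores higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-slide metastasis scores (label 1 = metastasis present).
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
scores = [0.05, 0.12, 0.30, 0.22, 0.40, 0.85, 0.35, 0.95, 0.74, 0.88]

print(f"slide-level AUC: {auc(labels, scores):.3f}")  # → 0.960
```

One positive slide (score 0.35) ranks below one normal slide (0.40), so 24 of the 25 positive-negative pairs are ordered correctly, giving AUC 0.960.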

    Analysis of Color Standardization Methods for the Automatic Quantification of IHQ Stain in Breast TMA

    Introduction/Background: IHC biomarkers in breast TMA samples are used daily in pathology departments. This generates large amounts of information, which requires careful analysis [1]. Automatic methods to evaluate positive staining have been investigated, since they may save time and reduce diagnostic errors caused by subjective evaluation.
    Aims: The aim of this work is to develop a density tool able to automatically quantify the positive brown IHC stain in breast TMA. One of the problems when dealing with stained samples is color variation and distortion [2], which is due to several factors such as the fixation process, the amount of stain, or the digitization process. One solution to the color variation problem is to standardize reagents and procedures in histological practice. However, stains fade over time, so complete standardization is not achievable with current technology. In this paper, different methods for stain normalization are analyzed and compared for density quantification.
    Methods: The methods implemented for stain normalization are based on color distributions, by means of the dominant color, scalable color, and color structure descriptors. These algorithms adjust the color values of an image on a pixel-by-pixel basis so as to match the color distribution of the source image to that of a target image. Two main processes were then performed to estimate TMA density: a) evaluation of the total cylinder area, and b) quantification of the IHC-stained area. For the first process, the algorithm distinguishes between normal, broken, or distorted cylinders. The second process evaluates the positive brown pixels inside the cylinder. The segmentation is based on Lab thresholding together with binary thresholding applied to the H, S, and B channels of the HSV and RGB color models. Finally, the tool segments all the positive areas and quantifies the brown density areas.
    Results: A dataset of 879 TMA images was used to evaluate the methods. TMAs were prepared with an automatic tissue arrayer (Arraymold tool) with 50 holes/TMA and a cylinder diameter of 2 mm. Slides were stained with different IHC stains: CD1A, CD4, CD8, CD21, CD57, CD68, CD83, CD123, CK19, FOXP3, LAMP3, and S100. The digital TMA images were acquired with an Aperio ScanScope XT scanner at 40x (0.25 µm/pixel), after which each cylinder image was individually extracted [3]. The use of color standardization makes the segmentation robust and free of parameter setting; moreover, the standardization process reduces noise and facilitates density quantification. The results were compared with manual density quantification by expert pathologists. The tests carried out showed up to 98% agreement when color standardization was applied, against 90% without it. The largest error comes from the FOXP3 samples.
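The two core steps of this pipeline — matching the color distribution of a source image to a target, then measuring the brown-stained fraction inside a cylinder mask — can be sketched as follows. This is a deliberately simplified stand-in, not the paper's method: per-channel mean/std matching in RGB replaces the descriptor-based standardization, and a crude R > G > B rule replaces the Lab/HSV/RGB thresholding; all function names and threshold values are illustrative assumptions.

```python
import numpy as np

def match_color_distribution(source, target):
    """Shift and scale each RGB channel of `source` so its mean and standard
    deviation match those of `target` (simplified distribution matching)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)

def brown_density(rgb, cylinder_mask):
    """Fraction of cylinder pixels that look DAB-brown, using an ad-hoc
    R > G > B heuristic (thresholds are invented for illustration)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    brown = (r > g + 20) & (g > b + 10)
    return (brown & cylinder_mask).sum() / max(cylinder_mask.sum(), 1)
```

On a 2×2 toy image with one brown pixel (150, 100, 50) and the whole cylinder in view, `brown_density` returns 0.25; normalizing every cylinder against a common target image before thresholding is what makes a single fixed rule like this usable across differently stained slides.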

    Desorption of Lipases Immobilized on Octyl-Agarose Beads and Coated with Ionic Polymers after Thermal Inactivation. Stronger Adsorption of Polymers/Unfolded Protein Composites

    Lipases from Candida antarctica (isoform B) and Rhizomucor miehei (CALB and RML) have been immobilized on octyl-agarose (OC) and further coated with polyethylenimine (PEI) and dextran sulfate (DS). Enzymes immobilized on OC supports alone could be easily released from the support using 2% SDS at pH 7, both intact and after thermal inactivation (in fact, after inactivation most enzyme molecules were already desorbed). Coating with PEI and DS greatly reduced enzyme release during thermal inactivation and improved enzyme stability. However, with OC-CALB/RML-PEI-DS, fully releasing the immobilized enzyme to reuse the support required more drastic conditions: a pH value of 3, a buffer concentration over 2 M, and temperatures above 45 °C. Even these conditions could not fully release the thermally inactivated enzyme molecules from the support; it was necessary to increase the buffer concentration to 4 M sodium phosphate and decrease the pH to 2.5. The formation of unfolded protein/polymer composites seems to be responsible for this strong interaction with the octyl and some anionic groups of the OC supports. The support could be reused for five cycles under these conditions, with similar loading capacity and similar stability of the immobilized enzyme.

    BONSEYES: Platform for Open Development of Systems of Artificial Intelligence

    The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on using artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It aims to bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will address a causality problem for organizations that lack access to data and models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate effectiveness and technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort, and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. The Bonseyes platform's capabilities are intended to align with the European FI-PPP activities and to take advantage of its flagship project, FIWARE. This paper describes the project's motivation, goals, and preliminary work.