
    Computer-Assisted Annotation of Digital H&E/SOX10 Dual Stains Generates High-Performing Convolutional Neural Network for Calculating Tumor Burden in H&E-Stained Cutaneous Melanoma

    Deep learning for the analysis of H&E stains requires a large annotated training set. Producing one can be a labor-intensive task requiring highly skilled pathologists. We aimed to optimize and evaluate computer-assisted annotation based on digital dual stains of the same tissue section. H&E stains of primary and metastatic melanoma (N = 77) were digitized, re-stained with SOX10, and re-scanned. Because the images were aligned, annotations from SOX10 image analysis were transferred directly to the H&E stains of the training set. Based on 1,221,367 annotated nuclei, a convolutional neural network for calculating tumor burden (CNN(TB)) was developed. For primary melanomas, the precision of annotation was 100% (95%CI, 99% to 100%) for tumor cells and 99% (95%CI, 98% to 100%) for normal cells. Due to low or missing tumor-cell SOX10 positivity, precision for normal cells was markedly reduced in lymph-node and organ metastases compared with primary melanomas (p < 0.001). Compared with stereological counts within skin lesions, the mean difference in tumor burden was 6% (95%CI, −1% to 13%, p = 0.10) for CNN(TB) and 16% (95%CI, 4% to 28%, p = 0.02) for pathologists. In conclusion, the technique produced a large, high-quality annotated H&E training set within a reasonable timeframe for primary melanomas and subcutaneous metastases. For these lesion types, the training set generated a high-performing CNN(TB), which was superior to the routine assessments of pathologists.
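    The abstract reports annotation precision and a tumor-burden estimate derived from per-nucleus classifications. A minimal sketch of these two quantities, assuming tumor burden is the fraction of tumor-cell nuclei among all classified nuclei and precision its standard definition TP / (TP + FP) (function names are illustrative, not from the study):

```python
def tumor_burden(n_tumor: int, n_normal: int) -> float:
    """Tumor burden as the fraction of tumor-cell nuclei among all
    classified nuclei (assumed definition, not stated in the abstract)."""
    total = n_tumor + n_normal
    if total == 0:
        raise ValueError("no nuclei classified")
    return n_tumor / total


def annotation_precision(true_pos: int, false_pos: int) -> float:
    """Standard precision: fraction of nuclei annotated as a class
    that truly belong to that class."""
    predicted = true_pos + false_pos
    if predicted == 0:
        raise ValueError("no nuclei annotated for this class")
    return true_pos / predicted
```

    For example, a section with 600 tumor and 400 normal nuclei gives a tumor burden of 0.6, and 99 correct out of 100 normal-cell annotations gives a precision of 0.99, matching the scale of the figures quoted above.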

    A new method to validate thoracic CT-CT deformable image registration using auto-segmented 3D anatomical landmarks

    Background. Deformable image registrations are prone to errors in aligning anatomical features reliably. Consequently, identification of registration inaccuracies is important. Thoracic three-dimensional (3D) computed tomography (CT)-CT image registration is particularly challenging due to the lack of contrast in lung tissue. This study aims to validate thoracic CT-CT image registration using auto-segmented anatomical landmarks. Material and methods. Five lymphoma patients were CT scanned three times within a period of 18 months, with the initial CT defined as the reference scan. For each patient, the two successive CT scans were registered to the reference CT using three different image registration algorithms (Demons, B-spline and Affine). The image registrations were evaluated using auto-segmented anatomical landmarks (bronchial branch points) and Dice Similarity Coefficients (DSC). Deviations of corresponding bronchial landmarks were used to quantify inaccuracies with respect to both misalignment and geometric location within the lungs. Results. The median bronchial branch point deviations were 1.6, 1.1 and 4.2 mm for the Demons, B-spline and Affine algorithms, respectively. The maximum deviations (> 15 mm) were found for both the Demons and B-spline registrations. In the upper part of the lungs, the median deviation of 1.7 mm was significantly different (p < 0.02) from the median deviation of 2.0 mm found in the middle and lower parts of the lungs. The DSC revealed similar registration discrepancies among the three tested algorithms, with DSC values of 0.96, 0.97 and 0.91 for the Demons, B-spline and Affine algorithms, respectively. Conclusion. Bronchial branch points were found useful for validating thoracic CT-CT image registration. They identified local registration errors > 15 mm in both the Demons and B-spline deformable algorithms.
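    The registrations above are scored with the Dice Similarity Coefficient. A minimal sketch of the standard DSC computation, 2|A∩B| / (|A| + |B|), for two binary lung masks represented as NumPy arrays (the function name and array representation are illustrative assumptions, not the study's implementation):

```python
import numpy as np


def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    A DSC of 1.0 means perfect overlap; the reported values of 0.96-0.97 for the deformable algorithms versus 0.91 for Affine reflect the coarser alignment of the rigid approach, while the landmark deviations expose local errors that a global overlap score can mask.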