    Plus Disease in Retinopathy of Prematurity: Diagnostic Trends in 2016 Versus 2007

    To identify any temporal trends in the diagnosis of plus disease in retinopathy of prematurity (ROP) by experts. Reliability analysis. ROP experts were recruited in 2007 and 2016 to classify 34 wide-field fundus images of ROP as plus, pre-plus, or normal, coded as "3," "2," and "1," respectively, in the database. The main outcome was the average calculated score for each image in each cohort. Secondary outcomes included correlation on the relative ordering of the images in 2016 vs 2007, interexpert agreement, and intraexpert agreement. The average score for each image was higher for 30 of 34 (88%) images in 2016 compared with 2007, influenced by fewer images classified as normal (P < .01), a similar number classified as pre-plus (P = .52), and more classified as plus (P < .01). The mean weighted kappa values in 2007 were 0.36 (range 0.21–0.60), compared with 0.22 (range 0–0.40) in 2016. There was good correlation between rankings of disease severity between the 2 cohorts (Spearman rank correlation ρ = 0.94), indicating near-perfect agreement on relative disease severity. Despite good agreement between cohorts on relative disease severity ranking, the higher average scores and shifted classifications for each image demonstrate that experts were diagnosing pre-plus and plus disease at earlier stages of disease severity in 2016 compared with 2007. This has implications for patient care, research, and teaching, and additional studies are needed to better understand this temporal trend in image-based plus disease diagnosis.
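The two agreement statistics named in this abstract (weighted kappa and Spearman rank correlation) can be illustrated with a minimal sketch. The gradings below are invented toy data on the same 1 = normal, 2 = pre-plus, 3 = plus scale; they are not the study's data, and the choice of linear weighting is an assumption, since the abstract does not state the weighting scheme.

```python
# Toy illustration of the agreement statistics reported in the abstract.
# The label vectors are invented; the study's per-image data is not given.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical gradings of 6 images (1 = normal, 2 = pre-plus, 3 = plus).
grades_2007 = np.array([1, 1, 2, 2, 3, 3])
grades_2016 = np.array([1, 2, 2, 3, 3, 3])  # same ordering, earlier calls

# Weighted kappa: chance-corrected agreement on the ordinal labels,
# with linearly increasing penalty for larger disagreements (assumed).
kappa = cohen_kappa_score(grades_2007, grades_2016, weights="linear")

# Spearman rank correlation: agreement on the *relative ordering* of
# images by severity, which can stay high even when labels shift.
rho, _ = spearmanr(grades_2007, grades_2016)
print(f"weighted kappa = {kappa:.2f}, Spearman rho = {rho:.2f}")
```

Note how the toy data mirrors the abstract's finding: the 2016 grader calls the same images more severe (lower categorical agreement), yet the rank ordering of severity is largely preserved (high ρ).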

    Plus Disease in Retinopathy of Prematurity

    To identify patterns of interexpert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). We developed 2 datasets of clinical images as part of the Imaging and Informatics in ROP study and determined a consensus reference standard diagnosis (RSD) for each image based on 3 independent image graders and the clinical examination results. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Eight participating experts with more than 10 years of clinical ROP experience and more than 5 peer-reviewed ROP publications analyzed images obtained during routine ROP screening in neonatal intensive care units. Expert classification of images of plus disease in ROP. Interexpert agreement (weighted κ statistic) and agreement and bias on ordinal classification between experts (analysis of variance [ANOVA]) and the RSD (percent agreement). There was variable interexpert agreement on diagnostic classifications between the 8 experts and the RSD (weighted κ, 0–0.75; mean, 0.30). The RSD agreement ranged from 80% to 94% for the dataset of 100 images and from 29% to 79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert, consistent with unique cut points for the diagnosis of plus disease and pre-plus disease. The 2-way ANOVA model suggested a highly significant effect of both image and user on the average score (dataset A: P < 0.05 and adjusted R2 = 0.82; dataset B: P < 0.05 and adjusted R2 = 0.6615). There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different cut points for the amount of vascular abnormality required for the presence of plus and pre-plus disease. This has important implications for research, teaching, and patient care for ROP and suggests that a continuous ROP plus disease severity score may more accurately reflect the behavior of expert ROP clinicians and may better standardize classification in the future.
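The "continuous severity score" and per-expert "cut point" ideas in this abstract can be sketched concretely. The grid of labels below is invented toy data (rows are images, columns are experts, on the 1 = normal, 2 = pre-plus, 3 = plus scale); the score is simply the mean ordinal label per image, and a cut point is approximated as the first severity-ranked image an expert labels "plus".

```python
# Toy sketch of a continuous severity score and per-expert cut points.
# The label grid is invented; it is not the study's data.
import numpy as np

# Rows = 4 images, columns = 4 experts (1 = normal, 2 = pre-plus, 3 = plus).
labels = np.array([
    [1, 1, 1, 2],
    [1, 2, 2, 2],
    [2, 2, 3, 3],
    [3, 3, 3, 3],
])

# Continuous severity score: mean ordinal label across experts per image.
severity = labels.mean(axis=1)

# Rank images from least to most severe by that score.
order = np.argsort(severity)

# Each expert's implicit cut point: index (in severity order) of the
# first image that expert labels "plus" (3).
cut_points = [int((labels[order, e] == 3).argmax())
              for e in range(labels.shape[1])]
print(severity, cut_points)
```

Differing cut points across the columns reproduce the abstract's systematic-bias pattern: experts agree on the severity ordering yet draw the plus-disease boundary at different points along it.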