8 research outputs found

    Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs.

    BACKGROUND: Nonophthalmologist physicians do not confidently perform direct ophthalmoscopy. The use of artificial intelligence to detect papilledema and other optic-disk abnormalities from fundus photographs has not been well studied.
    METHODS: We trained, validated, and externally tested a deep-learning system to classify optic disks as being normal or having papilledema or other abnormalities from 15,846 retrospectively collected ocular fundus photographs that had been obtained with pharmacologic pupillary dilation and various digital cameras in persons from multiple ethnic populations. Of these photographs, 14,341 from 19 sites in 11 countries were used for training and validation, and 1505 photographs from 5 other sites were used for external testing. Performance at classifying the optic-disk appearance was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard of clinical diagnoses by neuro-ophthalmologists.
    RESULTS: The training and validation data sets from 6779 patients included 14,341 photographs: 9156 of normal disks, 2148 of disks with papilledema, and 3037 of disks with other abnormalities. The percentage classified as being normal ranged across sites from 9.8 to 100%; the percentage classified as having papilledema ranged across sites from zero to 59.5%. In the validation set, the system discriminated disks with papilledema from normal disks and disks with nonpapilledema abnormalities with an AUC of 0.99 (95% confidence interval [CI], 0.98 to 0.99) and normal from abnormal disks with an AUC of 0.99 (95% CI, 0.99 to 0.99). In the external-testing data set of 1505 photographs, the system had an AUC for the detection of papilledema of 0.96 (95% CI, 0.95 to 0.97), a sensitivity of 96.4% (95% CI, 93.9 to 98.3), and a specificity of 84.7% (95% CI, 82.3 to 87.1).
    CONCLUSIONS: A deep-learning system using fundus photographs with pharmacologically dilated pupils differentiated among optic disks with papilledema, normal disks, and disks with nonpapilledema abnormalities. (Funded by the Singapore National Medical Research Council and the SingHealth Duke-NUS Ophthalmology and Visual Sciences Academic Clinical Program.)
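
The headline metrics in this abstract have standard definitions that a short sketch can make concrete. The code below is purely illustrative (toy data, not the study's model or code): sensitivity and specificity are confusion-matrix ratios at a fixed decision threshold, while AUC equals the probability that a randomly chosen papilledema image receives a higher score than a randomly chosen non-papilledema image (the Mann-Whitney formulation).

```python
# Illustrative sketch only: how sensitivity, specificity, and AUC are
# conventionally computed for a binary papilledema classifier.
# All data here are toy values, not from the study.

def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    tn = sum(y == 0 and p == 0 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the fraction of positive/negative
    pairs in which the positive is scored higher (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = papilledema, 0 = not papilledema
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
print(round(auc(labels, scores), 3), round(sens, 3), round(spec, 3))
# → 0.917 0.667 0.75
```

Note that sensitivity and specificity depend on the chosen threshold, whereas AUC summarizes performance over all thresholds, which is why abstracts typically report all three.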

    A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders

    No full text
    The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), either by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we have developed, trained, and tested a DLS, using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard in image quality was established by three experts who independently classified photographs as of “good”, “borderline”, or “poor” quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model, evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS could identify with excellent performance “good” quality photographs (AUC = 0.93 (95% CI, 0.91–0.95), accuracy = 91.4% (95% CI, 90.0–92.9%), sensitivity = 93.8% (95% CI, 92.5–95.2%), specificity = 75.9% (95% CI, 69.7–82.1%)) and “poor” quality photographs (AUC = 1.00 (95% CI, 0.99–1.00), accuracy = 99.1% (95% CI, 98.6–99.6%), sensitivity = 81.5% (95% CI, 70.6–93.8%), specificity = 99.7% (95% CI, 99.6–100.0%)). “Borderline” quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88–0.93), accuracy = 90.6% (95% CI, 89.1–92.2%), sensitivity = 65.4% (95% CI, 56.6–72.9%), specificity = 93.4% (95% CI, 92.1–94.8%)). The overall accuracy to distinguish among the three classes was 90.6% (95% CI, 89.1–92.1%), suggesting that this DLS could select optimal quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
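
The one-vs-rest evaluation described in this abstract can be sketched in a few lines: each quality class is scored against the other two combined, giving per-class sensitivity and specificity, while overall accuracy is computed across all three classes at once. This is an illustrative sketch with made-up labels, not the BONSAI code.

```python
# Illustrative sketch of a one-vs-rest evaluation for a three-class
# quality grader ("good", "borderline", "poor"). Toy data only.

CLASSES = ["good", "borderline", "poor"]

def one_vs_rest_metrics(labels, preds, cls):
    """Collapse the 3-class problem to cls-vs-rest, then compute
    sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)."""
    tp = sum(y == cls and p == cls for y, p in zip(labels, preds))
    fn = sum(y == cls and p != cls for y, p in zip(labels, preds))
    tn = sum(y != cls and p != cls for y, p in zip(labels, preds))
    fp = sum(y != cls and p == cls for y, p in zip(labels, preds))
    return tp / (tp + fn), tn / (tn + fp)

def overall_accuracy(labels, preds):
    """Fraction of photographs assigned their reference-standard class."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

# Toy gradings for 10 photographs
labels = ["good"] * 4 + ["borderline"] * 3 + ["poor"] * 3
preds = ["good", "good", "good", "borderline",
         "borderline", "borderline", "good",
         "poor", "poor", "poor"]
for cls in CLASSES:
    sens, spec = one_vs_rest_metrics(labels, preds, cls)
    print(cls, round(sens, 2), round(spec, 2))
print(round(overall_accuracy(labels, preds), 2))  # → 0.8
```

This also shows why per-class sensitivity (e.g. 65.4% for "borderline") can be much lower than the one-vs-rest accuracy for the same class: the "rest" classes usually dominate the denominator of accuracy.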

    Detection of Papilledema on Non-Mydriatic Ocular Fundus Photography In the Emergency Department: Application of the BONSAI Deep Learning System (DLS) To the FOTO-ED Study

    No full text
    The FOTO-ED studies showed that ED providers (EDPs) poorly recognized relevant ocular funduscopic findings in patients presenting to the ED with headaches, acute neurologic findings, severe hypertension, or visual loss, using direct ophthalmoscopy [0% correctly identified] or non-mydriatic fundus photography (NMFP), either without or with additional training in photographic interpretation [48% and 43% correctly identified, respectively]. The trained BONSAI-DLS distinguished "normal optic discs", "papilledema", and "other optic disc abnormalities" on mydriatic fundus photographs with high accuracy. We tested the BONSAI-DLS on images prospectively included in the FOTO-ED studies to determine whether the BONSAI-DLS could have improved the detection of relevant optic disc abnormalities had it been available to EDPs as a real-time diagnostic aid.

    Optic disc classification by deep learning versus expert neuro-ophthalmologists

    No full text
    Objective: To compare the diagnostic performance of an artificial intelligence deep learning system with that of expert neuro-ophthalmologists in classifying optic disc appearance.
    Methods: The deep learning system was previously trained and validated on 14,341 ocular fundus photographs from 19 international centers. The performance of the system was evaluated on 800 new fundus photographs (400 normal optic discs, 201 papilledema [disc edema from elevated intracranial pressure], 199 other optic disc abnormalities) and compared with that of 2 expert neuro-ophthalmologists who independently reviewed the same randomly presented images without clinical information. Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated.
    Results: The system correctly classified 678 of 800 (84.7%) photographs, compared with 675 of 800 (84.4%) for Expert 1 and 641 of 800 (80.1%) for Expert 2. The system yielded AUCs of 0.97 (95% confidence interval [CI] = 0.96-0.98), 0.96 (95% CI = 0.94-0.97), and 0.89 (95% CI = 0.87-0.92) for the detection of normal discs, papilledema, and other disc abnormalities, respectively. The accuracy, sensitivity, and specificity of the system's classification of optic discs were similar to or better than those of the 2 experts. Intergrader agreement at the eye level was 0.71 (95% CI = 0.67-0.76) between Expert 1 and Expert 2, 0.72 (95% CI = 0.68-0.76) between the system and Expert 1, and 0.65 (95% CI = 0.61-0.70) between the system and Expert 2.
    Interpretation: The performance of this deep learning system at classifying optic disc abnormalities was at least as good as that of 2 expert neuro-ophthalmologists. Future prospective studies are needed to validate this system as a diagnostic aid in relevant clinical settings.
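
The intergrader agreement figures reported here are chance-corrected agreement coefficients. Assuming they are Cohen's kappa (the usual statistic for two graders, though the abstract does not name it), the computation looks like this toy sketch, which is not the study's code or data:

```python
# Illustrative sketch of Cohen's kappa for two graders. Toy data only.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement p_o corrected by the agreement
    p_e expected by chance from each grader's label frequencies."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical gradings of six eyes by two graders
g1 = ["normal", "normal", "papilledema", "other", "normal", "papilledema"]
g2 = ["normal", "other", "papilledema", "other", "normal", "papilledema"]
print(round(cohens_kappa(g1, g2), 2))  # → 0.75
```

Chance correction matters here because the three disc classes are unbalanced (400/201/199), so raw percent agreement would overstate concordance.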

    Human vs. Machine: The Brain and Optic Nerve Study with Artificial Intelligence (BONSAI) (Slides)

    No full text
    We developed and validated an artificial intelligence deep learning system (AI-DLS) to automatically classify optic discs as "normal" or "abnormal", and specifically to detect "papilledema", but a direct comparison of the diagnostic accuracy of a DLS versus expert neuro-ophthalmologists using the same sample is warranted. Our objective was to compare the diagnostic performance of an AI-DLS versus expert neuro-ophthalmologists in classifying optic nerves as "normal", "papilledema" (optic nerve edema from proven intracranial hypertension), and "other optic nerve abnormalities" on ocular fundus photographs.