13 research outputs found

    Changes in Choroidal Thickness in Patients with Diabetic Retinopathy (糖尿病網膜症患者における脈絡膜厚の変化)

    Get PDF
    Hiroshima University (広島大学), Doctor of Philosophy in Medical Science (博士(医学)), doctoral thesis

    Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy

    Get PDF
    Purpose: We investigated using ultrawide-field fundus images with a deep convolutional neural network (DCNN), which is a machine learning technology, to detect treatment-naïve proliferative diabetic retinopathy (PDR). Methods: We conducted training with the DCNN using 378 photographic images (132 PDR and 246 non-PDR) and constructed a deep learning model. The area under the curve (AUC), sensitivity, and specificity were examined. Results: The constructed deep learning model demonstrated a high sensitivity of 94.7% and a high specificity of 97.2%, with an AUC of 0.969. Conclusion: Our findings suggested that PDR could be diagnosed using wide-angle camera images and deep learning.
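    The abstract reports the evaluation metrics but not the network architecture or training code. The sketch below is a minimal illustration of how such a binary PDR/non-PDR classifier could be fine-tuned and then scored with AUC, sensitivity, and specificity; the ResNet-50 backbone, 512-pixel input size, folder layout, and hyperparameters are assumptions rather than details taken from the study.

```python
# Minimal, hypothetical sketch of binary PDR/non-PDR classification from
# fundus photographs. The backbone, input size, hyperparameters, and the
# folder layout (train/{non_pdr,pdr}, test/{non_pdr,pdr}) are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import confusion_matrix, roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tfm = transforms.Compose([
    transforms.Resize((512, 512)),   # assumed input size for ultrawide-field images
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("fundus/train", transform=tfm)  # hypothetical paths
test_ds = datasets.ImageFolder("fundus/test", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=8, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=8)

# Transfer learning from an ImageNet-pretrained backbone (an assumption).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluate: AUC from predicted PDR probabilities; sensitivity and specificity
# from the confusion matrix at a 0.5 threshold. With the assumed folder names,
# ImageFolder assigns class index 1 to "pdr".
model.eval()
probs, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        p = torch.softmax(model(x.to(device)), dim=1)[:, 1]
        probs.extend(p.cpu().tolist())
        labels.extend(y.tolist())

auc = roc_auc_score(labels, probs)
preds = [int(p >= 0.5) for p in probs]
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
print(f"AUC={auc:.3f}  sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
```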

    Accuracy of Diabetic Retinopathy Staging with a Deep Convolutional Neural Network Using Ultra-Wide-Field Fundus Ophthalmoscopy and Optical Coherence Tomography Angiography

    Get PDF
    Purpose. The present study aimed to compare the accuracy of diabetic retinopathy (DR) staging with a deep convolutional neural network (DCNN) using two different types of fundus cameras and composite images. Method. The study included 491 ultra-wide-field fundus ophthalmoscopy and optical coherence tomography angiography (OCTA) images that passed an image-quality review and were graded by three retinal experts, according to the International Clinical Diabetic Retinopathy Severity Scale, as no apparent DR (NDR; 169 images), mild nonproliferative DR (NPDR; 76 images), moderate NPDR (54 images), severe NPDR (90 images), or proliferative DR (PDR; 102 images). The findings of test 1 (NDR versus DR) and test 2 (NDR versus PDR) were then assessed. For each test, DCNN classification was performed on Optos, OCTA, and combined Optos OCTA images. Result. The Optos, OCTA, and Optos OCTA results for the comparison between NDR and DR showed mean areas under the curve (AUC) of 0.79, 0.883, and 0.847; sensitivity rates of 80.9%, 83.9%, and 78.6%; and specificity rates of 55%, 71.6%, and 69.8%, respectively. Meanwhile, the Optos, OCTA, and Optos OCTA results for the comparison between NDR and PDR showed mean AUCs of 0.981, 0.928, and 0.964; sensitivity rates of 90.2%, 74.5%, and 80.4%; and specificity rates of 97%, 97%, and 96.4%, respectively. Conclusion. The combination of Optos and OCTA imaging with the DCNN could detect DR at desirable levels of accuracy and may be useful in clinical practice and retinal screening. Although combining multiple imaging techniques might overcome their individual weaknesses and provide comprehensive imaging, artificial intelligence has not always classified multimodal images accurately.
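    The abstract compares Optos, OCTA, and combined Optos OCTA inputs but does not describe how the two modalities were fed to the DCNN. Purely as an illustration, the sketch below shows one plausible late-fusion design that concatenates pooled CNN features from an Optos branch and an OCTA branch into a five-class DR-stage head; the ResNet-18 backbones, the fusion strategy, and the input size are assumptions, not the paper's method.

```python
# Hypothetical late-fusion sketch for paired Optos and OCTA images.
# The paper does not state its fusion strategy; concatenating pooled features
# from two CNN branches is an assumption used here only for illustration.
import torch
import torch.nn as nn
from torchvision import models

class OptosOctaFusion(nn.Module):
    def __init__(self, num_classes: int = 5):  # NDR, mild/moderate/severe NPDR, PDR
        super().__init__()
        self.optos_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.octa_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = self.optos_branch.fc.in_features
        self.optos_branch.fc = nn.Identity()  # keep the 512-d pooled features
        self.octa_branch.fc = nn.Identity()
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, optos_img: torch.Tensor, octa_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.optos_branch(optos_img), self.octa_branch(octa_img)], dim=1)
        return self.classifier(fused)

# Smoke test with dummy tensors (batch of 2 paired RGB images, 224x224).
model = OptosOctaFusion()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```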

    Automatic Diagnosis of Diabetic Retinopathy Stage Focusing Exclusively on Retinal Hemorrhage

    No full text
    Background and Objectives: The present study evaluated the detection of diabetic retinopathy (DR) from automated fundus camera images, focusing exclusively on retinal hemorrhage (RH) detected with a deep convolutional neural network, a machine-learning technology. Materials and Methods: This investigation was conducted as a prospective, observational study. The study included 89 fundus ophthalmoscopy images. Seventy images passed an image-quality review and were graded as showing no apparent DR (n = 51), mild nonproliferative DR (NPDR; n = 16), moderate NPDR (n = 1), severe NPDR (n = 1), or proliferative DR (n = 1) by three retinal experts according to the International Clinical Diabetic Retinopathy Severity scale. The RH numbers and areas were detected automatically, and the results of two tests (detection of mild-or-worse NPDR and detection of moderate-or-worse NPDR) were examined. Results: The detection of mild-or-worse DR showed a sensitivity of 0.812 (95% confidence interval: 0.680–0.945), a specificity of 0.888, and an area under the curve (AUC) of 0.884, whereas the detection of moderate-or-worse DR showed a sensitivity of 1.0, a specificity of 1.0, and an AUC of 1.0. Conclusions: Automated diagnosis using artificial intelligence focusing exclusively on RH could be used to identify DR requiring ophthalmologist intervention.
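    The study turns automatically detected retinal-hemorrhage counts and areas into two binary screening decisions (mild-or-worse and moderate-or-worse DR). The sketch below illustrates, with invented hemorrhage counts, how a count-based score can be converted into AUC, sensitivity, and specificity through an ROC curve and Youden's J; only the grade distribution (51/16/1/1/1 across 70 images) is taken from the abstract.

```python
# Hypothetical sketch: treat an automatically detected retinal-hemorrhage (RH)
# count as a score for two binary tasks and choose an operating point with
# Youden's J. The grade distribution matches the abstract; the RH counts are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Grades: 0 = NDR, 1 = mild NPDR, 2 = moderate NPDR, 3 = severe NPDR, 4 = PDR.
grades = np.repeat([0, 1, 2, 3, 4], [51, 16, 1, 1, 1])  # 70 graded images, as reported
rh_counts = rng.poisson(lam=1 + 6 * grades)             # invented link between grade and RH count

for task, min_grade in [("mild-or-worse DR", 1), ("moderate-or-worse DR", 2)]:
    y_true = (grades >= min_grade).astype(int)
    auc = roc_auc_score(y_true, rh_counts)
    fpr, tpr, thresholds = roc_curve(y_true, rh_counts)
    best = np.argmax(tpr - fpr)                          # Youden's J = sensitivity + specificity - 1
    print(f"{task}: AUC={auc:.3f}, RH-count cutoff={thresholds[best]:.1f}, "
          f"sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")
```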

    Detailed summary of driving scores according to general, daytime, nighttime and adverse conditions (e.g., rain or traffic jams).

    No full text
    The values are presented as the means (SE). The p-values were determined using Student's t-test. The subjects evaluated for their driving scores were limited to those who drove daily (n = 79; 30 patients in the multifocal group; 49 patients in the monofocal group).

    Postoperative scores of the NEI VFQ-25.

    No full text
    The values are presented as the means (SE) of n = 131 (46 patients in the multifocal group; 85 patients in the monofocal group), except where otherwise noted. The p-values were determined using Student's t-test.
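    Both tables above report group means (SE) compared with Student's t-test. The following minimal sketch shows that comparison with invented score values; only the driving-score group sizes (30 multifocal and 49 monofocal drivers) come from the captions.

```python
# Minimal sketch of the reported comparison: group means (SE) and a Student's
# t-test (equal variances). The score values are invented; only the driving-score
# group sizes (30 multifocal, 49 monofocal) come from the caption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
multifocal = rng.normal(loc=80, scale=10, size=30)  # invented driving scores
monofocal = rng.normal(loc=82, scale=10, size=49)   # invented driving scores

def mean_se(x: np.ndarray) -> str:
    return f"{x.mean():.1f} ({stats.sem(x):.1f})"

t_stat, p_value = stats.ttest_ind(multifocal, monofocal, equal_var=True)  # Student's t-test
print(f"multifocal: {mean_se(multifocal)}   monofocal: {mean_se(monofocal)}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```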