Deep Learning for Predicting Refractive Error From Retinal Fundus Images
PURPOSE. We evaluate how deep learning can be applied to extract novel information such as
refractive error from retinal fundus imaging.
METHODS. Retinal fundus images used in this study were 45- and 30-degree field of view images
from the UK Biobank and Age-Related Eye Disease Study (AREDS) clinical trials, respectively.
Refractive error was measured by autorefraction in UK Biobank and subjective refraction in
AREDS. We trained a deep learning algorithm to predict refractive error from a total of
226,870 images and validated it on 24,007 UK Biobank and 15,750 AREDS images. Our model
used the "attention" method to identify features that are correlated with refractive error.
RESULTS. The resulting algorithm had a mean absolute error (MAE) of 0.56 diopters (95%
confidence interval [CI]: 0.55–0.56) for estimating spherical equivalent on the UK Biobank
data set and 0.91 diopters (95% CI: 0.89–0.93) for the AREDS data set. The baseline expected
MAE (obtained by simply predicting the mean of this population) was 1.81 diopters (95% CI:
1.79–1.84) for UK Biobank and 1.63 (95% CI: 1.60–1.67) for AREDS. Attention maps
suggested that the foveal region was one of the most important areas used by the algorithm to
make this prediction, though other regions also contributed.
CONCLUSIONS. To our knowledge, the ability to estimate refractive error with high accuracy from retinal fundus photographs had not been previously demonstrated; this result shows that deep learning can be applied to make novel predictions from medical images.
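The baseline comparison used above (the MAE obtained by always predicting the population mean) can be sketched as follows. The labels and predictions are illustrative placeholders, not the UK Biobank or AREDS data:

```python
import numpy as np

# Hypothetical spherical-equivalent labels (diopters) and model predictions;
# illustrative values only, not the study's data.
y_true = np.array([-2.5, -0.75, 0.25, 1.0, -4.0, 0.5])
y_pred = np.array([-2.1, -0.50, 0.00, 0.8, -3.6, 0.3])

# Model MAE: mean absolute difference between prediction and ground truth.
model_mae = np.mean(np.abs(y_pred - y_true))

# Baseline MAE: always predict the population mean refractive error.
baseline = np.full_like(y_true, y_true.mean())
baseline_mae = np.mean(np.abs(baseline - y_true))

print(f"model MAE = {model_mae:.2f} D, baseline MAE = {baseline_mae:.2f} D")
```

A model only adds value to the extent that its MAE sits below this mean-predictor baseline, which is why the abstract reports both numbers.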
Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning
Diabetic eye disease is one of the fastest growing causes of preventable
blindness. With the advent of anti-VEGF (vascular endothelial growth factor)
therapies, it has become increasingly important to detect center-involved
diabetic macular edema (ci-DME). However, center-involved diabetic macular
edema is diagnosed using optical coherence tomography (OCT), which is not
generally available at screening sites because of cost and workflow
constraints. Instead, screening programs rely on the detection of hard exudates
in color fundus photographs as a proxy for DME, often resulting in high false
positive or false negative calls. To improve the accuracy of DME screening, we
trained a deep learning model to use color fundus photographs to predict
ci-DME. Our model had an ROC-AUC of 0.89 (95% CI: 0.87-0.91), which corresponds
to a sensitivity of 85% at a specificity of 80%. In comparison, three retinal
specialists had similar sensitivities (82-85%), but only half the specificity
(45-50%, p<0.001 for each comparison with model). The positive predictive value
(PPV) of the model was 61% (95% CI: 56-66%), approximately double the 36-38% achieved by
the retinal specialists. In addition to predicting ci-DME, our model was able
to detect the presence of intraretinal fluid with an AUC of 0.81 (95% CI:
0.81-0.86) and subretinal fluid with an AUC of 0.88 (95% CI: 0.85-0.91). The
ability of deep learning algorithms to make clinically relevant predictions
that generally require sophisticated 3D-imaging equipment from simple 2D images
has broad relevance to many other applications in medical imaging.
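The operating-point metrics quoted above (sensitivity, specificity, PPV at a chosen threshold) relate to each other as in this minimal sketch, with made-up labels and scores rather than the study's data:

```python
import numpy as np

# Hypothetical binary ci-DME labels (1 = disease) and model scores;
# illustrative values only, not the study's data.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.6, 0.2, 0.1, 0.8])

threshold = 0.5  # arbitrary operating point for illustration
y_pred = (scores >= threshold).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
tn = np.sum((y_pred == 0) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

sensitivity = tp / (tp + fn)  # fraction of diseased eyes correctly flagged
specificity = tn / (tn + fp)  # fraction of healthy eyes correctly cleared
ppv = tp / (tp + fp)          # fraction of positive calls that are true

print(sensitivity, specificity, ppv)
```

Sweeping the threshold traces the ROC curve whose area is the reported AUC; the specialists' lower specificity at similar sensitivity is what halves their PPV relative to the model.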
An integrated knowledge-based system for early detection of eye refractive error using data mining
Refractive error is an optical defect of the human visual system and is now very common across all populations and age groups. Uncorrected and undetected refractive error contributes to visual impairment and blindness and places a considerable burden on people worldwide. Prolonged use of technological devices such as smartphones also imposes a new burden on the human eye; the intensity and brightness of these digital devices contribute to the high prevalence of eye refractive errors. Early diagnosis may help avoid complications and blindness. Data mining algorithms can be applied in ophthalmology to detect eye disease at an early stage, so mining ophthalmology data efficiently is a critical issue. This work describes the development of an integrated knowledge-based system that helps detect eye refractive error early and provides appropriate advice to patients. The study follows the hybrid knowledge discovery process model of data mining developed for academic research. About 9,000 ophthalmology records from selected eye health centers were used to build the model. The sample data were preprocessed for missing values, outliers, and noise, and the model was built using decision tree (J48 and REPTree) and rule induction (JRip and PART) algorithms. The PART algorithm registered the best predictive performance, with accuracies of 60% and 96.45% for subjective and objective model evaluation, respectively, compared with J48, REPTree, and JRip. The knowledge discovered with this algorithm was then used to build the knowledge-based system, with the Java programming language used to integrate the data mining results into it. The performance of the proposed system was evaluated with prepared test cases; overall, the knowledge-based system achieved 89.2% accuracy.
Finally, the study concludes that knowledge discovered using data mining techniques could serve as the basis for a functional eye refractive error detection system.
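The rule-induction algorithms named above (JRip, PART) ran in Weka; as a rough, self-contained analogue, a OneR-style single-rule learner can be sketched in a few lines. The features, records, and labels below are synthetic illustrations, not the study's 9,000-record dataset:

```python
# Tiny single-rule learner in the spirit of rule induction -- a OneR-style
# sketch, not the study's Weka (J48/REPTree/JRip/PART) pipeline.

def fit_one_rule(rows, labels):
    """Find the (feature, threshold) split that best separates the labels."""
    best = None
    n_features = len(rows[0])
    for f in range(n_features):
        for threshold in sorted({r[f] for r in rows}):
            preds = [1 if r[f] >= threshold else 0 for r in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            acc = max(acc, 1 - acc)  # allow the inverted rule as well
            if best is None or acc > best[0]:
                best = (acc, f, threshold)
    return best  # (training accuracy, feature index, threshold)

# Hypothetical records: (screen_hours_per_day, uncorrected_acuity_score)
rows = [(2, 0.9), (3, 0.8), (9, 0.4), (10, 0.3), (1, 1.0), (11, 0.2)]
labels = [0, 0, 1, 1, 0, 1]  # 1 = refractive error present (toy labels)

acc, feature, threshold = fit_one_rule(rows, labels)
print(f"rule: feature {feature} >= {threshold} (training accuracy {acc:.2f})")
```

Real rule learners such as PART grow and prune sets of such rules from partial decision trees; this sketch only shows the core idea of turning mined data into a human-readable decision rule for a knowledge-based system.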
Optimising Subjective Anterior Eye Grading Precision
Purpose:
To establish the optimum grading increment which ensured parity between practitioners while maximising clinical precision.
Methods:
Second year optometry students (n = 127, 19.5 ± 1.4 years, 55% female) and qualified eye care practitioners (n = 61, 40.2 ± 14.8 years, 52% female) had 30 s to grade each of bulbar, limbal and palpebral hyperaemia of the upper lid of 4 patients imaged live with a digital slit lamp under 16× magnification, diffuse illumination, with the image projected on a screen. The patients were presented in a randomised sequence 3 times in succession, during which the graders used the Efron printed grading scale once to the nearest 0.1 increment, once to the nearest 0.5 increment and once to the nearest integer grade, in a randomised order. Graders were masked to their previous responses.
Results:
For most grading conditions, fewer than 20% of clinicians graded within 0.1 of the mean grade. In contrast, more than 50% of the student graders and 40% of the experienced graders graded within 0.5 of the mean for all conditions measured. Student grading precision was better with both the 0.1 and 0.5 grading increments than with grading to the nearest unit, except for limbal hyperaemia, where students performed more accurately with the 0.5 increment. For experienced practitioners, limbal grading precision was not affected by grading step; for bulbar hyperaemia, the 0.1 and 0.5 increments were both better than the 1.0 increment; and for palpebral hyperaemia, the 0.1 increment was better than the 0.5 increment, and both were better than the 1.0 increment.
Conclusion:
Although narrower interval scales maximise the ability to detect smaller clinical changes, the grading increment should not exceed one standard deviation of the discrepancy between measurements. Therefore, 0.5 grading increments are recommended for subjective anterior eye physiology grading (limbal, bulbar and palpebral redness).
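The conclusion's rule of thumb (the grading increment should not be finer than one standard deviation of the inter-measurement discrepancy) can be checked numerically. The discrepancy values below are illustrative placeholders, not the study's measurements:

```python
import statistics

# Hypothetical discrepancies between repeated grades of the same eye
# (in grading-scale units); illustrative values only, not the study's data.
discrepancies = [0.3, -0.4, 0.5, -0.2, 0.6, -0.5, 0.4, -0.3]

# One standard deviation of the discrepancy estimates the grading "noise".
sd = statistics.stdev(discrepancies)

# An increment finer than the noise adds no real clinical precision.
for increment in (0.1, 0.5, 1.0):
    verdict = "supported" if increment >= sd else "finer than noise"
    print(f"{increment:.1f}-unit increment: {verdict}")
```

With a discrepancy SD a little under 0.5 units, a 0.1 increment would sit below the measurement noise while 0.5 would not, which is consistent with the recommendation above.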
Artificial intelligence and deep learning in ophthalmology
Artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest in recent years. DL has been widely adopted in image recognition, speech recognition and natural language processing, but is only beginning to impact healthcare. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography and visual fields, achieving robust classification performance in the detection of diabetic retinopathy and retinopathy of prematurity, the glaucoma-like disc, macular oedema and age-related macular degeneration. DL in ocular imaging may be used in conjunction with telemedicine as a possible solution to screen, diagnose and monitor major eye diseases for patients in primary care and community settings. Nonetheless, there are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of the algorithm results, medicolegal issues, and physician and patient acceptance of the AI 'black-box' algorithms. DL could potentially revolutionise how ophthalmology is practised in the future. This review provides a summary of the state-of-the-art DL systems described for ophthalmic applications, potential challenges in clinical deployment and the path forward.
Deep learning in ophthalmology: The technical and clinical considerations
The advent of computer graphics processing units, improvements in mathematical models and the availability of big data have allowed artificial intelligence (AI) using machine learning (ML) and deep learning (DL) techniques to achieve robust performance for broad applications in social media, the internet of things, the automotive industry and healthcare. DL systems in particular provide improved capability in image, speech and motion recognition as well as in natural language processing. In medicine, significant progress of AI and DL systems has been demonstrated in image-centric specialties such as radiology, dermatology, pathology and ophthalmology. New studies, including pre-registered prospective clinical trials, have shown DL systems are accurate and effective in detecting diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), retinopathy of prematurity and refractive error, and in identifying cardiovascular risk factors and diseases, from digital fundus photographs. There is also increasing attention on the use of AI and DL systems in identifying disease features, progression and treatment response for retinal diseases such as neovascular AMD and diabetic macular edema using optical coherence tomography (OCT). Additionally, the application of ML to visual fields may be useful in detecting glaucoma progression. There are limited studies that incorporate clinical data, including electronic health records, in AI and DL algorithms, and no prospective studies demonstrate that AI and DL algorithms can predict the development of clinical eye disease. This article describes the global eye disease burden, unmet needs and common conditions of public health importance for which AI and DL systems may be applicable. Technical and clinical aspects of building a DL system to address those needs, and the potential challenges for clinical adoption, are discussed.
AI, ML and DL will likely play a crucial role in clinical ophthalmology practice, with implications for screening, diagnosis and follow-up of the major causes of vision impairment in the setting of globally ageing populations.