Evaluating an automated machine learning model that predicts visual acuity outcomes in patients with neovascular age-related macular degeneration
PURPOSE: Neovascular age-related macular degeneration (nAMD) is a major global cause of blindness. Whilst anti-vascular endothelial growth factor (anti-VEGF) treatment is effective, response varies considerably between individuals. Thus, patients face substantial uncertainty regarding their future ability to perform daily tasks. In this study, we evaluate the performance of an automated machine learning (AutoML) model which predicts visual acuity (VA) outcomes in patients receiving treatment for nAMD, in comparison to a manually coded model built using the same dataset. Furthermore, we evaluate model performance across ethnic groups and analyse how the models reach their predictions. METHODS: Binary classification models were trained to predict whether patients' VA would be 'Above' or 'Below' a score of 70 one year after initiating treatment, measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) chart. The AutoML model was built using the Google Cloud Platform, whilst the bespoke model was trained using an XGBoost framework. Models were compared and analysed using the What-if Tool (WIT), a novel model-agnostic interpretability tool. RESULTS: Our study included 1631 eyes from patients attending Moorfields Eye Hospital. The AutoML model (area under the curve [AUC], 0.849) achieved a highly similar performance to the XGBoost model (AUC, 0.847). Using the WIT, we found that the models over-predicted negative outcomes in Asian patients and performed worse in those with an ethnic category of Other. Baseline VA, age and ethnicity were the most important determinants of model predictions. Partial dependence plot analysis revealed a sigmoidal relationship between baseline VA and the probability of an outcome of 'Above'. CONCLUSION: We have described and validated an AutoML-WIT pipeline which enables clinicians with minimal coding skills to match the performance of a state-of-the-art algorithm and obtain explainable predictions
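The prediction task described above — a gradient-boosted binary classifier of one-year VA outcome, evaluated by AUC — can be sketched in miniature. This is an illustrative sketch on synthetic data: scikit-learn's GradientBoostingClassifier stands in for the study's XGBoost framework, and the features, cutoffs, and data-generating process are assumptions, not the study's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: baseline VA (ETDRS letters) and age
baseline_va = rng.uniform(25, 85, n)
age = rng.uniform(55, 95, n)
# Synthetic outcome: probability of VA 'Above' 70 at one year rises
# sigmoidally with baseline VA, echoing the partial dependence finding
p = 1 / (1 + np.exp(-(baseline_va - 60) / 8))
y = (rng.uniform(size=n) < p).astype(int)
X = np.column_stack([baseline_va, age])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

An AutoML platform automates exactly the steps shown here (model choice, training, evaluation), which is why the two approaches can reach near-identical AUC on the same data.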
AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline
Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 in IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics
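The ensemble-and-confidence step of the image quality grading module can be sketched as follows. This is a simplified illustration: the rejection threshold and probabilities are assumptions for demonstration, not AutoMorph's actual values.

```python
import numpy as np

def ensemble_quality_grade(member_probs, reject_threshold=0.75):
    """Average per-model 'gradable' probabilities and flag low-confidence calls.

    member_probs: array of shape (n_models, n_images) holding each ensemble
    member's P(gradable). Images graded 'gradable' with ensemble confidence
    below the threshold are flagged for review — mirroring the confidence
    analysis used to rectify falsely gradable cases. Threshold is illustrative.
    """
    mean_prob = np.asarray(member_probs).mean(axis=0)
    gradable = mean_prob >= 0.5
    confidence = np.maximum(mean_prob, 1 - mean_prob)
    flagged = gradable & (confidence < reject_threshold)
    return mean_prob, gradable, flagged

# Three ensemble members grading three images
probs = [[0.9, 0.55, 0.2], [0.8, 0.6, 0.3], [0.95, 0.5, 0.1]]
mean_prob, gradable, flagged = ensemble_quality_grade(probs)
```

Here the second image is called gradable but with low ensemble confidence, so it is flagged rather than silently passed downstream.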
Clinician-Driven AI: Code-Free Self-Training on Public Data for Diabetic Retinopathy Referral
Importance: Democratizing artificial intelligence (AI) enables model development by clinicians despite a lack of coding expertise, powerful computing resources, and large, well-labeled data sets.
Objective: To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models.
Design, Setting, and Participants: This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021.
Exposures: Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images.
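The teacher-student self-training procedure described in the Exposures section can be sketched as follows. This is a minimal illustration on synthetic data: logistic regression stands in for the automated ML classifier, and the confidence threshold for accepting pseudolabels is an assumption, not the study's setting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

# Small labeled set plus a larger unlabeled pool (synthetic stand-ins
# for the labeled and unlabeled public data sets)
X_lab, y_lab = make_data(100)
X_unlab, _ = make_data(2000)

# 1. Train the teacher model on labeled images only
teacher = LogisticRegression().fit(X_lab, y_lab)

# 2. Pseudolabel the unlabeled pool, keeping only confident predictions
proba = teacher.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.8          # illustrative threshold
X_pseudo = X_unlab[confident]
y_pseudo = proba[confident].argmax(axis=1)

# 3. Train the student on labeled + pseudolabeled images together
X_all = np.vstack([X_lab, X_pseudo])
y_all = np.concatenate([y_lab, y_pseudo])
student = LogisticRegression().fit(X_all, y_all)

X_test, y_test = make_data(500)
student_acc = student.score(X_test, y_test)
```

The student sees far more training examples than the teacher did, which is the mechanism by which self-training can raise performance without additional expert labeling.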
Main Outcomes and Measures: The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score. The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis.
Results: For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively.
Conclusions and Relevance: These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models
Determinants of non-attendance at face-to-face and telemedicine ophthalmic consultations
BACKGROUND/AIMS: Evaluation of telemedicine care models has highlighted their potential for exacerbating healthcare inequalities. This study seeks to identify and characterise factors associated with non-attendance across face-to-face and telemedicine outpatient appointments. METHODS: A retrospective cohort study at a tertiary-level ophthalmic institution in the UK, between 1 January 2019 and 31 October 2021. Logistic regression modelled non-attendance against sociodemographic, clinical and operational exposure variables for all new patient registrations across five delivery modes: asynchronous, synchronous telephone, synchronous audiovisual and face to face prior to the pandemic and face to face during the pandemic. RESULTS: A total of 85 924 patients (median age 55 years, 54.4% female) were newly registered. Non-attendance differed significantly by delivery mode (9.0% face to face prepandemic, 10.5% face to face during the pandemic, 11.7% asynchronous and 7.8% synchronous during the pandemic). Male sex, greater levels of deprivation, a previously cancelled appointment and not self-reporting ethnicity were strongly associated with non-attendance across all delivery modes. Individuals identifying as black ethnicity had worse attendance in synchronous audiovisual clinics (adjusted OR 4.24, 95% CI 1.59 to 11.28) but not asynchronous. Those not self-reporting their ethnicity were from more deprived backgrounds, had worse broadband access and had significantly higher non-attendance across all modes (all p<0.001). CONCLUSION: Persistent non-attendance among underserved populations attending telemedicine appointments highlights the challenge digital transformation faces for reducing healthcare inequalities. Implementation of new programmes should be accompanied by investigation into the differential health outcomes of vulnerable populations
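The modelling approach — logistic regression of non-attendance against exposure variables, reported as adjusted odds ratios — can be sketched on synthetic data. The exposures, coefficients, and sample here are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Hypothetical binary exposures: male sex, high deprivation, prior cancellation
X = rng.integers(0, 2, size=(n, 3)).astype(float)
# Synthetic non-attendance: log-odds rise with each exposure
logit = -2.0 + 0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Weak regularization so coefficients approximate the maximum-likelihood fit
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
adjusted_or = np.exp(model.coef_[0])   # one adjusted odds ratio per exposure
```

Exponentiating each coefficient yields the adjusted OR for that exposure holding the others fixed, which is how figures like the OR of 4.24 above are obtained.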
AlzEye: longitudinal record-level linkage of ophthalmic imaging and hospital admissions of 353 157 patients in London, UK
PURPOSE: Retinal signatures of systemic disease ('oculomics') are increasingly being revealed through a combination of high-resolution ophthalmic imaging and sophisticated modelling strategies. Progress is currently limited less by technical issues than by the lack of large labelled datasets, a sine qua non for deep learning. Such data are derived from prospective epidemiological studies, in which retinal imaging is typically unimodal, cross-sectional, of modest number and relates to cohorts that are not enriched with subpopulations of interest, such as those with systemic disease. We thus linked longitudinal multimodal retinal imaging from routinely collected National Health Service (NHS) data with systemic disease data from hospital admissions using a privacy-by-design third-party linkage approach.
PARTICIPANTS: Between 1 January 2008 and 1 April 2018, 353 157 participants aged 40 years or older who attended Moorfields Eye Hospital NHS Foundation Trust, a tertiary ophthalmic institution incorporating a principal central site, four district hubs and five satellite clinics in and around London, UK, serving a catchment population of approximately six million people.
FINDINGS TO DATE: Among the 353 157 individuals, 186 651 had a total of 1 337 711 Hospital Episode Statistics admitted patient care episodes. Systemic diagnoses recorded at these episodes include 12 022 patients with myocardial infarction, 11 735 with all-cause stroke and 13 363 with all-cause dementia. A total of 6 261 931 retinal images of seven different modalities and across three manufacturers were acquired from 154 830 patients. The majority of retinal images were retinal photographs (n=1 874 175) followed by optical coherence tomography (n=1 567 358).
FUTURE PLANS: AlzEye combines the world's largest single institution retinal imaging database with nationally collected systemic data to create an exceptional large-scale, enriched cohort that reflects the diversity of the population served. First analyses will address cardiovascular diseases and dementia, with a view to identifying hidden retinal signatures that may lead to earlier detection and risk management of these life-threatening conditions
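A privacy-by-design third-party linkage of the kind described can be sketched as salted one-way hashing of identifiers, so that records are joined on pseudonyms rather than raw NHS numbers. The identifiers, salt handling, and record contents below are purely illustrative, not AlzEye's actual scheme.

```python
import hashlib

def pseudonymise(nhs_number: str, salt: str) -> str:
    """Salted one-way hash: the linking party never sees raw identifiers."""
    return hashlib.sha256((salt + nhs_number).encode()).hexdigest()

SALT = "shared-secret-salt"   # held by the trusted third party only

# Each data controller pseudonymises its own records before sharing
eye_records = {pseudonymise(n, SALT): img
               for n, img in [("111", "OCT"), ("222", "fundus")]}
hes_records = {pseudonymise(n, SALT): dx
               for n, dx in [("222", "stroke"), ("333", "MI")]}

# The third party links on pseudonyms, never on NHS numbers
linked = {k: (eye_records[k], hes_records[k])
          for k in eye_records.keys() & hes_records.keys()}
```

Only records present in both sources survive the join, so the linked dataset contains imaging paired with admissions while neither controller learns the other's unmatched identifiers.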
A foundation model for generalizable disease detection from retinal images
Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
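The self-supervised objective behind this kind of pretraining — reconstructing masked image patches — can be illustrated in miniature. The "model" below is a trivial stand-in (predicting each masked patch as the mean of the visible ones); a real encoder-decoder network learns that mapping, and the patch grid and mask ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a retinal image split into flattened patches
patches = rng.normal(size=(196, 64))        # 14x14 grid, 64-dim patches
mask = rng.uniform(size=196) < 0.75         # hide roughly 75% of patches

# Trivial "reconstruction": fill masked positions with the visible mean.
# A trained network would instead predict patch content from context.
visible_mean = patches[~mask].mean(axis=0)
reconstruction = np.where(mask[:, None], visible_mean, patches)

# Self-supervised loss: mean squared error on masked patches only
loss = ((reconstruction[mask] - patches[mask]) ** 2).mean()
```

Because the loss is computed only on hidden patches, the model is forced to learn representations that predict unseen retinal content from context — no expert labels are required, which is what makes downstream adaptation label-efficient.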
Periodontitis and outer retinal thickness:A cross-sectional analysis of the UK Biobank cohort
Purpose: Periodontitis, a ubiquitous severe gum disease affecting the teeth and surrounding alveolar bone, can heighten systemic inflammation. We investigated the association between very severe periodontitis and early biomarkers of age-related macular degeneration in individuals with no eye disease. Design: Cross-sectional analysis of the prospective community-based cohort United Kingdom (UK) Biobank. Participants: Sixty-seven thousand three hundred eleven UK residents aged 40-70 years recruited between 2006 and 2010 underwent retinal imaging. Methods: Macular-centered optical coherence tomography images acquired at the baseline visit were segmented for retinal sublayer thicknesses. Very severe periodontitis was ascertained through a touchscreen questionnaire. Linear mixed effects regression modeled the association between very severe periodontitis and retinal sublayer thicknesses, adjusting for age, sex, ethnicity, socioeconomic status, alcohol consumption, smoking status, diabetes mellitus, hypertension, refractive error, and previous cataract surgery. Main Outcome Measures: Photoreceptor layer (PRL) and retinal pigment epithelium-Bruch's membrane (RPE-BM) thicknesses. Results: Among 36,897 participants included in the analysis, 1,571 (4.3%) reported very severe periodontitis. Affected individuals were older, lived in areas of greater socioeconomic deprivation and were more likely to be hypertensive, diabetic and current smokers (all p<0.001). On average, those with very severe periodontitis were myopic (-0.29 ± 2.40 diopters) while those unaffected were hyperopic (0.05 ± 2.27 diopters, p<0.001). Following adjusted analysis, very severe periodontitis was associated with thinner PRL (-0.55 μm, 95% CI: -0.97, -0.12, p=0.022) but there was no difference in RPE-BM thickness (0.00 μm, 95% CI: -0.12, 0.13, p=0.97). The association between PRL thickness and very severe periodontitis was modified by age (p<0.001). Stratifying individuals by age, thinner PRL was seen among those aged 60-69 years with disease (-1.19 μm, 95% CI: -1.85, -0.53, p<0.001) but not among those under 60 years. Conclusions: Among those with no known eye disease, very severe periodontitis is statistically associated with a thinner PRL, consistent with incipient age-related macular degeneration. Optimizing oral hygiene may hold additional relevance for people at risk of degenerative retinal disease
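The adjusted-association analysis can be illustrated with a simplified regression sketch. This uses plain least squares on synthetic data with a single covariate; the study itself used linear mixed-effects models with many covariates, and every number below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
exposure = (rng.uniform(size=n) < 0.05).astype(float)   # ~5% exposed
age = rng.uniform(40, 70, n)
# Synthetic PRL thickness (um): thinner with age and with exposure;
# the -0.55 effect size is borrowed from the abstract purely for illustration
prl = 60 - 0.05 * age - 0.55 * exposure + rng.normal(0, 1.5, n)

# Adjusted association: regress thickness on exposure + covariate + intercept
X = np.column_stack([exposure, age, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, prl, rcond=None)
adjusted_effect = beta[0]   # recovers roughly the simulated -0.55 um
```

Including age in the design matrix is what makes the exposure coefficient an age-adjusted effect; a mixed-effects model additionally adds a per-participant random intercept to handle correlated measurements from fellow eyes.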
Association Between Retinal Features From Multimodal Imaging and Schizophrenia
Importance: The potential association of schizophrenia with distinct retinal changes is of clinical interest but has been challenging to investigate because of a lack of sufficiently large and detailed cohorts.
Objective: To investigate the association between retinal biomarkers from multimodal imaging (oculomics) and schizophrenia in a large real-world population.
Design, Setting, and Participants: This cross-sectional analysis used data from a retrospective cohort of 154 830 patients 40 years and older from the AlzEye study, which linked ophthalmic data with hospital admission data across England. Patients attended Moorfields Eye Hospital, a secondary care ophthalmic hospital with a principal central site, 4 district hubs, and 5 satellite clinics in and around London, United Kingdom, and had retinal imaging during the study period (January 2008 to April 2018). Data were analyzed from January 2022 to July 2022.
Main Outcomes and Measures: Retinovascular and optic nerve indices were computed from color fundus photography. Macular retinal nerve fiber layer (RNFL) and ganglion cell–inner plexiform layer (mGC-IPL) thicknesses were extracted from optical coherence tomography. Linear mixed-effects models were used to examine the association between schizophrenia and retinal biomarkers.
Results: A total of 485 individuals (747 eyes) with schizophrenia (mean [SD] age, 64.9 years [12.2]; 258 [53.2%] female) and 100 931 individuals (165 400 eyes) without schizophrenia (mean age, 65.9 years [13.7]; 53 253 [52.8%] female) were included after images underwent quality control and potentially confounding conditions were excluded. Individuals with schizophrenia were more likely to have hypertension (407 [83.9%] vs 49 971 [48.0%]) and diabetes (364 [75.1%] vs 28 762 [27.6%]). The schizophrenia group had thinner mGC-IPL (−4.05 μm, 95% CI, −5.40 to −2.69; P = 5.4 × 10−9), which persisted when investigating only patients without diabetes (−3.99 μm; 95% CI, −6.67 to −1.30; P = .004) or just those 55 years and younger (−2.90 μm; 95% CI, −5.55 to −0.24; P = .03). On adjusted analysis, retinal fractal dimension among vascular variables was reduced in individuals with schizophrenia (−0.14 units; 95% CI, −0.22 to −0.05; P = .001), although this was not present when excluding patients with diabetes.
Conclusions and Relevance: In this study, patients with schizophrenia had measurable differences in neural and vascular integrity of the retina. Differences in retinal vasculature were mostly secondary to the higher prevalence of diabetes and hypertension in patients with schizophrenia. The role of retinal features as adjunct outcomes in patients with schizophrenia warrants further investigation.