
    Visual Acuity

    Objective: To examine the data quality and usability of visual acuity (VA) data extracted from an electronic health record (EHR) system during ophthalmology encounters and to provide recommendations for consideration of relevant VA end points in retrospective analyses. Design: Retrospective EHR data analysis. Participants: All patients with eyecare office encounters at any 1 of the 9 locations of a large academic medical center between August 1, 2013, and December 31, 2015. Methods: Data from 13 of the 21 VA fields (accounting for 93% of VA data) in EHR encounters were extracted, categorized, recoded, and assessed for conformance and plausibility using an internal data dictionary, a 38-item listing of VA line measurements and observations comprising 28 line measurements (e.g., 20/30, 20/400) and 10 observations (e.g., no light perception). Entries were classified as usable or unusable. Usable data were further categorized based on conformance to the internal data dictionary: (1) exact match; (2) conditional conformance, letter count (e.g., 20/30+2-3); (3) convertible conformance (e.g., 5/200 to 20/800); (4) plausible but cannot be conformed (e.g., 5/400). Data were deemed unusable when they were not plausible. Main Outcome Measures: Proportions of usable and unusable VA entries at the overall and subspecialty levels. Results: All VA data from 513 036 encounters representing 166 212 patients were included. Of the 1 573 643 VA entries, 1 438 661 (91.4%) contained usable data. There were 1 196 720 (76.0%) exact match (category 1), 185 692 (11.8%) conditional conformance (category 2), 40 270 (2.6%) convertible conformance (category 3), and 15 979 (1.0%) plausible but not conformed entries (category 4). Visual acuity entries during visits with providers from retina (17.5%), glaucoma (14.0%), neuro-ophthalmology (8.9%), and low vision (8.8%) had the highest rates of unusable data.
Documented VA entries with providers from comprehensive eyecare (86.7%), oculoplastics (81.5%), and pediatrics/strabismus (78.6%) yielded the highest proportions of exact matches with the data dictionary. Conclusions: Electronic health record VA data quality and usability vary across documented VA measures, observations, and eyecare subspecialties. We proposed a checklist of considerations and recommendations for planning, extracting, analyzing, and reporting retrospective study outcomes using EHR VA data. These are important first steps toward standardizing analyses and enabling comparative research.
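The four-way conformance scheme in this abstract can be sketched as a simple classifier. This is an illustrative reconstruction, not the study's actual extraction pipeline: the data dictionary contents, the regular expressions, and the conversion rule (scaling any test-distance fraction to 20-foot notation) are assumptions based only on the examples given (20/30+2-3, 5/200 to 20/800, 5/400).

```python
import re

def classify_va_entry(entry, dictionary):
    """Classify a raw VA string against a data dictionary.

    Returns a category per the abstract's scheme:
      1 = exact match, 2 = conditional conformance (letter counts),
      3 = convertible conformance, 4 = plausible but not conformable,
      0 = unusable (not plausible).
    The dictionary and matching rules here are illustrative.
    """
    entry = entry.strip()
    if entry in dictionary:
        return 1
    # Conditional conformance: line measurement plus letter counts, e.g., 20/30+2-3
    m = re.fullmatch(r"(20/\d+)([+-]\d+)+", entry)
    if m and m.group(1) in dictionary:
        return 2
    # Convertible conformance: other test distances scaled to 20-foot notation,
    # e.g., 5/200 -> 20/800 (multiply numerator and denominator by 20/numerator)
    m = re.fullmatch(r"(\d+)/(\d+)", entry)
    if m:
        num, den = int(m.group(1)), int(m.group(2))
        if num != 0 and (den * 20) % num == 0:
            if f"20/{den * 20 // num}" in dictionary:
                return 3
        return 4  # plausible fraction, but not conformable to the dictionary
    return 0  # not plausible -> unusable
```

Usage mirrors the abstract's examples: with a dictionary containing 20/30 and 20/800, the entry `20/30+2-3` falls in category 2, `5/200` converts to 20/800 (category 3), and `5/400` is plausible but unconformable (category 4).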

    Visual Field Prediction

    Purpose: Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design: Retrospective observational cohort study. Participants: All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods’ respective needs, the CNN data set included 54 373 samples from 7472 patients, and the RNN data set included 24 430 samples from 1809 patients. Methods: The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models’ performances on predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) and the last-observed VF (for the RNN) for its prediction. Main Outcome Measures: PMAE in predictions. Results: The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, which were close to the original studies’ reported values. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model. 
Pointwise mean absolute error values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions: Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration be explicitly taken into account when developing and evaluating future deep learning models.
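The evaluation metric and the no-change baseline described above can be sketched in a few lines. This is a minimal illustration of the general definitions, not the study's code; the field layout (number of 24-2 test locations) is left unspecified.

```python
def pmae(predicted, actual):
    """Pointwise mean absolute error (in dB) between predicted and actual
    visual fields. Each argument is a sequence of fields; each field is a
    sequence of per-location sensitivity values."""
    total, count = 0.0, 0
    for pred_vf, true_vf in zip(predicted, actual):
        for p, t in zip(pred_vf, true_vf):
            total += abs(p - t)
            count += 1
    return total / count

def no_change_pmae(reference_vfs, actual_vfs):
    """The no-change baseline from the abstract: predict each follow-up VF
    as an unchanged copy of a reference VF (the baseline VF in the CNN
    comparison, the last-observed VF in the RNN comparison)."""
    return pmae(reference_vfs, actual_vfs)
```

A model that merely reproduces stable fields will score well on overall PMAE yet fail exactly where `no_change_pmae` fails, which is why stratifying test data by progression severity matters.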

    Assessing Resident Cataract Surgical Outcomes Using Electronic Health Record Data

    Objective: To demonstrate that electronic health record (EHR) data can be used in an automated approach to evaluate cataract surgery outcomes. Design: Retrospective analysis. Subjects: Resident and faculty surgeons. Methods: Electronic health record data were collected from cataract surgeries performed at the Johns Hopkins Wilmer Eye Institute, and cases were categorized by whether a resident or attending was the primary surgeon. Preoperative and postoperative visual acuity (VA) and unplanned returns to the operating room were extracted from the EHR. Main Outcome Measures: Postoperative VA and reoperation rate within 90 days. Results: This study analyzed 14 537 cataract surgery cases over 32 months. Data were extracted from the EHR using an automated approach to assess surgical outcomes for resident and attending surgeons. Of 337 resident surgeries with both preoperative and postoperative VA data, 248 cases (74%) had better postoperative VA, and 170 cases (51%) had more than 2 lines of improvement. There was no statistical difference in the proportion of cases with better postoperative VA or more than 2 lines of improvement between resident and attending cases. Attending surgeons had a statistically greater proportion of cases with postoperative VA better than 20/40, but this finding has to be considered in the context that, on average, resident cases started with poorer baseline VA. A multivariable regression model of VA outcomes vs. resident/attending status that controlled for preoperative VA, patient age, American Society of Anesthesiologists (ASA) score, and estimated income found that resident status, preoperative VA, patient age, ASA score, and estimated income were all significant predictors of VA. The rate of unplanned return to the operating room within 90 days of cataract surgery was not statistically different between resident (1.8%) and attending (1.2%) surgeons.
Conclusions: This study demonstrates that EHR data can be used to evaluate and monitor surgical outcomes in an ongoing way. Analysis of EHR-extracted cataract outcome data showed that preoperative VA, ASA classification, and attending/resident status were important in predicting postoperative VA outcomes. These findings suggest that the use of EHR data could enable continuous assessment of surgical outcomes and inform interventions to improve resident training. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
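The comparison of reoperation rates (1.8% vs. 1.2%, not statistically different) is a standard two-proportion test. The sketch below shows one common choice, a pooled two-sided z-test; the abstract does not name the test used, and the underlying denominators for each group are not reported, so the example counts in the test are hypothetical.

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions, using the
    pooled standard error. Returns (z statistic, p value).

    x1/n1 and x2/n2 are event counts over group sizes, e.g., unplanned
    returns to the operating room among resident vs. attending cases.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

With rates of this magnitude, modest group sizes yield p values well above 0.05, consistent with the abstract's "not statistically different" finding.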

    Advancing Toward a Common Data Model in Ophthalmology

    Purpose: To evaluate the degree of concept coverage of the general eye examination in one widely used electronic health record (EHR) system using the Observational Health Data Sciences and Informatics Observational Medical Outcomes Partnership (OMOP) common data model (CDM). Design: Study of data elements. Participants: Not applicable. Methods: Data elements (field names and predefined entry values) from the general eye examination in the Epic foundation system were mapped to OMOP concepts and analyzed. Each mapping was given a Health Level 7 equivalence designation: equal when the OMOP concept had the same meaning as the source EHR concept, wider when it was missing information, narrower when it was overly specific, and unmatched when there was no match. Initial mappings were reviewed by 2 graders. Intergrader agreement on the equivalence designation was calculated using Cohen's kappa. Agreement on the mapped OMOP concept was calculated as a percentage of total mappable concepts. Discrepancies were discussed and a final consensus created. Quantitative analysis was performed on wider and unmatched concepts. Main Outcome Measures: Gaps in OMOP concept coverage of EHR elements and intergrader agreement on mapped OMOP concepts. Results: A total of 698 data elements (210 fields, 488 values) from the EHR were analyzed. The intergrader kappa on the equivalence designation was 0.88 (standard error 0.03, P < 0.001). There was 96% agreement on the mapped OMOP concept. In the final consensus mapping, 25% (1% of fields, 31% of values) of the EHR-to-OMOP concept mappings were considered equal, 50% (27% of fields, 60% of values) wider, 4% (8% of fields, 2% of values) narrower, and 21% (52% of fields, 8% of values) unmatched. Of the wider mapped elements, 46% were missing the laterality specification, 24% had other missing attributes, and 30% had both issues. Wider and unmatched EHR elements were found in all areas of the general eye examination.
Conclusions: Most data elements in the general eye examination could not be represented precisely using the OMOP CDM. Our work suggests multiple ways to improve the incorporation of important ophthalmology concepts in OMOP, including adding laterality to existing concepts. There exists a strong need to improve the coverage of ophthalmic concepts in source vocabularies so that the OMOP CDM can better accommodate vision research. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
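The intergrader agreement statistic used above (Cohen's kappa, 0.88 in the study) adjusts raw agreement for agreement expected by chance. A minimal sketch of the standard computation, using the study's four equivalence labels with made-up example data:

```python
from collections import Counter

def cohens_kappa(grader_a, grader_b):
    """Cohen's kappa for two graders assigning one label per item.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance-agreement rate implied by each grader's
    marginal label frequencies.
    """
    assert len(grader_a) == len(grader_b) and grader_a
    n = len(grader_a)
    p_observed = sum(a == b for a, b in zip(grader_a, grader_b)) / n
    freq_a, freq_b = Counter(grader_a), Counter(grader_b)
    labels = set(freq_a) | set(freq_b)
    p_expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0, and worse-than-chance agreement is negative; a value of 0.88 indicates strong agreement on the equal/wider/narrower/unmatched designations.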