
    Study of Optimal Perimetric Testing In Children (OPTIC): Normative visual field values in children

    Purpose: We sought to define normative visual field (VF) values for children using common clinical test protocols for kinetic and static perimetry. Design: Prospective, observational study. Subjects: We recruited 154 children aged 5 to 15 years without any ophthalmic condition that would affect the VF (controls) from pediatric clinics at Moorfields Eye Hospital. Methods: Children performed perimetric assessments in a randomized order using Goldmann and Octopus kinetic perimetry, and Humphrey static perimetry (Swedish Interactive Thresholding Algorithm [SITA] 24-2 FAST), in a single sitting, using standardized clinical protocols, with assessment by a single examiner. Unreliable results (assessed qualitatively) were excluded from the normative data analysis. Linear, piecewise, and quantile mixed-effects regression models were used. We developed a method to display age-specific normative isopters graphically on a VF plot to aid interpretation. Main Outcome Measures: Summary measures and graphical plots describing normative VF data for 3 common perimetric tests. Results: Visual field area increased with age on testing with Goldmann isopters III4e, I4e, and I2e (linear regression; P < 0.001) and with Octopus isopters III4e and I4e (linear regression; P < 0.005). Visual field development occurs predominantly in the infero-temporal field. Humphrey mean deviation (MD) increased by 0.3 dB per year (95% CI, 0.21-0.40) up to 12 years of age, when adult MD values were reached and thereafter maintained. Conclusions: Visual field size and sensitivity increase with age in patterns that are specific to the perimetric approach used. These developmental changes should be accounted for when interpreting perimetric test results in children, particularly when monitoring change over time.
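
    As a worked illustration of the piecewise pattern reported for Humphrey mean deviation (an increase of roughly 0.3 dB per year up to age 12, then a plateau at adult values), the sketch below encodes that trajectory as a simple piecewise-linear function. The function, parameter names, and example ages are illustrative assumptions; the study itself used linear, piecewise, and quantile mixed-effects regression on the normative data.

```python
import numpy as np

def expected_md_offset(age_years, slope_db_per_year=0.3, plateau_age=12.0):
    """Expected Humphrey mean deviation (MD) relative to the adult level.

    Encodes the piecewise pattern described in the abstract: MD rises by
    ~0.3 dB per year of age until ~12 years, then plateaus at adult values.
    This is an illustrative sketch, not the study's mixed-effects model.
    """
    age = np.asarray(age_years, dtype=float)
    # Years still to go before the plateau age (zero once adult values are reached).
    years_below_plateau = np.clip(plateau_age - age, 0.0, None)
    # Younger children sit below the adult MD level by slope * remaining years.
    return -slope_db_per_year * years_below_plateau

# A 7-year-old is expected to sit roughly 1.5 dB below adult normative MD.
print(expected_md_offset([7, 10, 12, 15]))  # approx. [-1.5, -0.6, 0.0, 0.0]
```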

    Active surveillance of visual impairment due to adverse drug reactions: findings from a national study in the United Kingdom

    As visual impairment (VI) due to adverse drug reactions (ADRs) is rare in adults and children, there is an incomplete evidence base to inform guidance for screening and for counseling patients on the potential risks of medications. We report the suspected drugs and the eye conditions identified in a national study of the incidence of diagnosed visual impairment due to suspected ADRs. Case ascertainment was via the British Ophthalmological Surveillance Unit (BOSU), between March 2010 and February 2012, with follow-up after 6 months.

    Visual Function, Social Position, and Health and Life Chances: The UK Biobank Study

    Importance: The adverse impact of visual impairment and blindness and correlations with socioeconomic position are known. Understanding of the effect of the substantially more common near-normal vision (mild impairment) and associations with social position as well as health and life chances is limited. Objective: To investigate the association of visual health (across the full acuity spectrum) with social determinants of general health and the association between visual health and health and social outcomes. Design, Setting, and Participants: A cross-sectional epidemiologic study was conducted using UK Biobank data from 6 regional centers in England and Wales. A total of 112 314 volunteers (aged 40-73 years) were assessed between June 2009 and July 2010. Data analysis was performed from May 20, 2013, to November 19, 2014. Main Outcomes and Measures: Habitual (correction if prescribed) distance visual acuity was used to assign participants to 1 of 8 categories from bilateral normal visual acuity (logMAR, 0.2 or better; Snellen equivalent, 6/9.5 or better) to visual impairment or blindness (logMAR, 0.5 or worse; Snellen equivalent, 6/19 or worse) using World Health Organization and International Statistical Classification of Diseases and Related Health Problems, Tenth Revision taxonomy. Relationships between vision, key social determinants, and health and social outcomes (including the main factors that define an individual's life: the social, economic, educational, and employment opportunities and outcomes experienced by individuals during their life course) were examined using multivariable regression. Results: Of the 112 314 participants, 61 169 were female (54.5%); mean (SD) age was 56.8 (8.1) years. A total of 759 (0.7%) of the participants had visual impairment or blindness, and an additional 25 678 (22.9%) had reduced vision in 1 or both eyes. Key markers of social position were independently associated with vision in a gradient across acuity categories; in a gradient of increasing severity, all-cause impaired visual function was associated with adverse social outcomes and impaired general and mental health. These factors, including having no educational qualifications (risk ratio [RR], 1.86 [95% CI, 1.69-2.04]), having a higher deprivation score (RR, 1.08 [95% CI, 1.07-1.09]), and being in a minority ethnic group (eg, Asian) (RR, 2.05 [95% CI, 1.83-2.30]), were independently associated with being in the midrange vision category (at legal threshold for driving). This level of vision was associated with an increased risk of being unemployed (RR, 1.55 [95% CI, 1.31-1.84]), having a lower-status job (RR, 1.24 [95% CI, 1.09-1.41]), living alone (RR, 1.24 [95% CI, 1.10-1.39]), and having mental health problems (RR, 1.12 [95% CI, 1.04-1.20]). Conclusions and Relevance: Impaired vision in adults is common, and even near-normal vision, potentially unrecognized without assessment, has a tangible influence on quality of life. Because inequalities in visual health by social position mirror other health domains, inclusion of vision in generic initiatives addressing health inequalities could address the existing significant burden of underrecognized and/or latent visual disability. Longitudinal investigations are needed to elucidate pathophysiologic pathways and target modifiable mechanisms.
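
    The acuity categories above are anchored to logMAR thresholds with Snellen equivalents; the sketch below shows the underlying conversion (Snellen denominator = test distance x 10^logMAR) and a coarse version of the categorisation. The function names and the single-eye simplification are assumptions for illustration only; the study assigned 1 of 8 categories from the acuity of both eyes.

```python
def snellen_denominator(logmar, test_distance_m=6.0):
    """Snellen denominator for a given logMAR value (6 m chart by default).

    logMAR is log10 of the minimum angle of resolution (MAR), and the Snellen
    fraction is distance / (distance * MAR), so the denominator is
    test_distance * 10**logMAR.
    """
    return test_distance_m * 10 ** logmar

def acuity_category(logmar):
    """Coarse categorisation using only the two thresholds quoted in the abstract.

    The study used 8 categories based on both eyes; here a single logMAR value
    is used and the middle bands are collapsed, purely for illustration.
    """
    if logmar <= 0.2:        # 6/9.5 or better
        return "normal visual acuity"
    if logmar >= 0.5:        # 6/19 or worse
        return "visual impairment or blindness"
    return "reduced vision (mid-range)"

print(round(snellen_denominator(0.2), 1))  # 9.5  -> Snellen 6/9.5
print(round(snellen_denominator(0.5), 1))  # 19.0 -> Snellen 6/19
print(acuity_category(0.3))                # reduced vision (mid-range)
```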

    Laser refractive surgery in the UK Biobank study: Frequency, distribution by sociodemographic factors, and general health, happiness, and social participation outcomes

    PURPOSE: To evaluate the frequency and distribution of laser refractive surgery in the United Kingdom by sociodemographic factors and outcomes of social participation and well-being. SETTING: Six regional recruitment centers in England and Wales. DESIGN: Cross-sectional epidemiological study. METHODS: Data were collected on sociodemographic factors and medical history; self-report on eyes/vision included reason for wearing optical correction, eye diseases, and treatment received (including refractive laser surgery). Mean spherical equivalent was used to categorize individuals as myopic (≤ −1.0 diopter) or hypermetropic (≥ +1.0 diopter). RESULTS: Between 2009 and 2010, 117 281 subjects recruited by UK Biobank undertook an ophthalmic assessment, including autorefraction. Of those with refractive error within a range eligible for laser refractive surgery (n = 60 352), 1892 (3.1%) reported having bilateral refractive surgery and 549 (0.9%) unilateral surgery. Frequency of bilateral surgery decreased with increasing age and was higher in women. Frequency did not vary with educational attainment or accommodation status but increased with income among working-age adults. Social participation, for example, regular visits to a pub or social club, was more common among those who underwent surgery. Other eye conditions were reported by 28% of those reporting refractive surgery compared with 11% of those eligible for treatment but not reporting surgery. CONCLUSION: This study provides information not available routinely on the frequency and distribution of laser refractive surgery in an adult UK population. A high frequency of ocular conditions conventionally considered contraindications to laser refractive surgery raises the possibility that extant guidance on patient selection may not be followed.
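
    As a quick arithmetic check on the surgery frequencies quoted above, using only the counts reported in the abstract:

```python
# Counts taken directly from the abstract.
eligible = 60_352    # refractive error within a range eligible for laser surgery
bilateral = 1_892    # reported bilateral laser refractive surgery
unilateral = 549     # reported unilateral laser refractive surgery

print(f"bilateral:  {bilateral / eligible:.1%}")   # 3.1%
print(f"unilateral: {unilateral / eligible:.1%}")  # 0.9%
```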

    Accuracy and Utility of Self-report of Refractive Error

    Importance Large-scale generic epidemiological studies offer detailed information on potential risk factors for refractive error across the life course, often lacking in ophthalmology-specific studies. However, ophthalmic examination to determine refractive error phenotype is challenging and costly; thus, in that context, refractive status is commonly assigned using questionnaires. In a population survey there is often only scope to include a few condition-specific self-reported questions, so it is critical that the questions used are effective in both ‘ruling in’ those who have the trait of interest and ‘ruling out’ those without it. Objective We determined the accuracy of identification of refractive status using self-reported age and/or reason for first wearing optical correction. Design UK Biobank study: a cross-sectional epidemiological study. Setting Six regional centres in England and Wales. Participants 117,278 participants, aged 40–69 years in 2009/10. Main Outcomes and Measures Subjects had autorefraction measurement of refractive status. Spherical equivalent (SphEqu) of the more ‘extreme’ eye was used to categorise myopia (SphEqu ≤ −1 diopter) and hypermetropia (SphEqu ≥ +1 diopter). Sensitivity and specificity of reason for optical correction were assessed, using autorefraction as the gold standard. ROC curves assessed the accuracy of self-reported age of first wearing optical correction and the incremental improvement with additional information on the reason. Results Of those reporting using glasses/contact lenses, 92,121/95,240 (97%) gave age at first use and 93,156 (98%) the reason. For myopia, sensitivity of reason for optical correction was 89.1% [88.7, 89.4], specificity 83.7% [83.4, 84.0], and positive and negative predictive values were 72.7% [72.2, 73.1] and 94% [93.8, 94.2], respectively. The area under the curve (AUC) was 0.829 [0.826, 0.831], improving to 0.928 [0.926, 0.930] with combined information. By contrast, self-report of reason for optical correction for hypermetropia had low sensitivity (38.1% [37.6, 38.6]) and the AUC with combined information was 0.71 [0.709, 0.716]. Conclusions and Relevance In combination, self-report of reason for and age at first use of optical correction are accurate in identifying myopia. These findings indicate that an agreed set of questions could be implemented effectively in large-scale generic population-based studies, to increase opportunities for integrated research on refractive error and to develop novel prevention or treatment strategies.
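
    The accuracy metrics reported above follow directly from a 2x2 table of self-reported reason for optical correction against autorefraction-defined refractive status. The sketch below is a minimal illustration of those calculations; the cell counts are placeholders (the abstract does not report them), not study data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table in which the
    self-reported reason for optical correction is the index test and
    autorefraction-defined refractive status is the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among all with the trait
        "specificity": tn / (tn + fp),  # negatives detected among all without it
        "ppv": tp / (tp + fp),          # P(trait present | positive self-report)
        "npv": tn / (tn + fn),          # P(trait absent | negative self-report)
    }

# Placeholder counts only; the abstract does not report the 2x2 cells.
print(diagnostic_accuracy(tp=890, fp=335, fn=110, tn=1665))
# sensitivity ~0.89, specificity ~0.83, ppv ~0.73, npv ~0.94 for these placeholders
```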

    Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PURPOSE: To report the methodology and findings of a large-scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. METHODS: UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40-69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. RESULTS: Fifty-four percent of participants aged 40-69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people and those of higher socio-economic status, higher educational attainment, or White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40-44 years increasing to 46% at 65-69 years), was higher in women, and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared with White). CONCLUSIONS: Refractive error is a significant public health issue for the UK, and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study.
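
    Refractive status here is assigned from the mean spherical equivalent refraction; the sketch below illustrates that categorisation, assuming the usual spherical equivalent formula (sphere plus half the cylinder) and the ±1 diopter thresholds used across these UK Biobank analyses. The −6 diopter cut-off for high myopia is an assumed common convention, not stated in the abstract.

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent in diopters: sphere plus half the cylinder."""
    return sphere_d + cylinder_d / 2.0

def refractive_category(mean_sph_equiv_d):
    """Categorise refractive error from the mean spherical equivalent (diopters).

    The -1 D (myopia) and +1 D (hypermetropia) thresholds follow the abstract;
    the -6 D high-myopia cut-off is an assumed common convention.
    """
    if mean_sph_equiv_d <= -6.0:
        return "high myopia"
    if mean_sph_equiv_d <= -1.0:
        return "myopia"
    if mean_sph_equiv_d >= 1.0:
        return "hypermetropia"
    return "no significant refractive error"

print(refractive_category(spherical_equivalent(-2.50, -0.75)))  # myopia
```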

    Methods of ascertainment of children and young people living with diabetes mellitus: a mapping exercise of National Health Service diabetic eye screening programmes


    Study of Optimal Perimetric Testing In Children (OPTIC): Development and feasibility of the kinetic perimetry reliability measure (KPRM)

    INTRODUCTION: Interpretation of perimetric findings, particularly in children, relies on accurate assessment of test reliability, yet no objective measures of reliability exist for kinetic perimetry. We developed the kinetic perimetry reliability measure (KPRM), a quantitative measure of perimetric test reproducibility/reliability, and report here its feasibility and association with subjective assessment of reliability. METHODS: Children aged 5-15 years, without an ophthalmic condition that affects the visual field, were recruited from Moorfields Eye Hospital and underwent Goldmann perimetry as part of a wider research programme on perimetry in children. Subjects were tested with two isopters and the blind spot was plotted, followed by a KPRM. Test reliability was also scored qualitatively using our examiner-based assessment of reliability (EBAR) scoring system, which standardises the conventional clinical approach to assessing test quality. The relationship between KPRM and EBAR was examined to explore the use of KPRM in assessing the reliability of kinetic fields. RESULTS: A total of 103 children (median age 8.9 years; IQR: 7.1 to 11.8 years) underwent Goldmann perimetry with KPRM and EBAR scoring. A KPRM was achieved by all children. KPRM values increased with reducing test quality (Kruskal-Wallis, p=0.005), indicating greater test-retest variability, and reduced with age (linear regression, p=0.015). One of 103 children (0.97%) demonstrated discordance between EBAR and KPRM. CONCLUSION: KPRM and EBAR are distinct but complementary approaches. Though scores show excellent agreement, KPRM is able to quantify within-test variability, providing data not captured by subjective assessment. Thus, we suggest combining KPRM with EBAR to aid interpretation of kinetic perimetry test reliability in children.
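
    The reported analyses compare KPRM across EBAR quality grades (Kruskal-Wallis) and model KPRM against age (linear regression); a minimal sketch of that analysis structure follows. The data frame and column names are assumptions, and the KPRM calculation itself is not reproduced here.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def analyse_kprm(df: pd.DataFrame):
    """Mirror the reported analyses on an assumed data frame with columns
    'kprm' (the reliability measure), 'ebar' (quality grade) and 'age' (years).

    Returns the Kruskal-Wallis p-value across EBAR grades and the fitted
    slope of KPRM on age from an ordinary least squares regression.
    """
    # Does KPRM differ across EBAR reliability grades?
    groups = [g["kprm"].to_numpy() for _, g in df.groupby("ebar")]
    _, kw_p = stats.kruskal(*groups)

    # Does KPRM fall with increasing age?
    age_fit = smf.ols("kprm ~ age", data=df).fit()
    return kw_p, age_fit.params["age"]
```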