Initial experience in self-monitoring of intraocular pressure.
Background/aims: Assessment of diurnal variation in intraocular pressure (IOP) is routine in glaucoma management. Providing patients with the opportunity to perform self-tonometry might empower them and free up hospital resources. We previously demonstrated that 74% of patients can use the Icare® HOME tonometer. This study further explores patient self-monitoring with the Icare® HOME.
Methods: Patients were trained by standard protocol to use the Icare® HOME rebound tonometer. Patient self-tonometry was compared to Goldmann applanation tonometry (GAT) over one clinical day. Following this, each patient was instructed to undertake further data collection that evening and over the subsequent two days.
Results: Eighteen patients (35 eyes) participated. Good agreement was demonstrated between GAT and Icare® HOME for IOPs up to 15 mm Hg. Above this level the Icare® HOME tended to over-read, largely explained by 2 patients with corneal thickness >600 µm. The mean peak IOP was 16.7 mm Hg during ‘clinic hours’ phasing and 18.5 mm Hg over three days (p = 0.24). An average range of 5.0, 7.0 and 9.8 mm Hg was shown during single-day clinic, single-day home and three-day home phasing respectively (p < 0.001). The range of IOP was lower in eyes with prior trabeculectomy (6.1 mm Hg vs 12.2 mm Hg). All patients took one early-morning reading at home, with an average of 4.8 readings during and 3.1 readings after office hours.
Conclusions: This small study shows that self-tonometry is feasible. Home phasing demonstrated higher peak and trough IOPs, providing additional clinical information, and is a viable alternative to clinic-based phasing. The cost-effectiveness of this approach has yet to be addressed.
Blue-light filtering spectacle lenses for visual performance, sleep, and macular health in adults
This is a protocol for a Cochrane Review (Intervention). The objectives are as follows:
To assess whether blue-light filtering spectacle lenses impart effects on visual function, provide protection to the macula, or both. We will also examine potential effects on the sleep-wake cycle.
Evaluation of a new rebound tonometer for self-measurement of intraocular pressure
Background/aims
To compare the accuracy of self-obtained, partner-obtained and trainer-obtained measurements using the handheld Icare Home rebound tonometer with Goldmann applanation tonometry (GAT), and to evaluate the acceptability to subjects of Icare Home measurement.
Methods
76 subjects were trained to use Icare Home for self-measurement using a standardised protocol. A prespecified checklist was used to assess the ability of a subject to perform self-tonometry. Accuracy of Icare Home self-measurement was compared with GAT using one eye per subject, randomly selected. Bland-Altman difference analysis was used to compare Icare Home and GAT intraocular pressure (IOP) estimates. Acceptability of self-tonometry was evaluated using a questionnaire.
Results
56 subjects (74%, 95% CI 64 to 84) were able to correctly perform self-tonometry. Mean bias (95% limits of agreement) was 0.3 mm Hg (−4.6 to 5.2), 1.1 mm Hg (−3.2 to 5.3) and 1.2 mm Hg (−3.9 to 6.3) for self-assessment, partner-assessment and trainer-assessment, respectively, suggesting slight underestimation of IOP by Icare Home tonometry. Differences between GAT and Icare Home IOP were greater for central corneal thicknesses below 500 µm and above 600 µm than for those within this range. Acceptability questionnaire responses showed high agreement that the self-tonometry device was easy to use (84%), that readings were quick to obtain (88%) and that measurement was comfortable (95%).
Conclusions
Icare Home tonometry can be used for self-measurement by a majority of trained subjects. IOP measurements obtained with Icare Home tonometry by self-assessment and third-party assessment showed slight underestimation compared with GAT.
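The Bland-Altman difference analysis used in this study has a simple computational core: the mean of the paired differences (bias) and the 95% limits of agreement at bias ± 1.96 × SD. A minimal sketch in Python, using invented paired IOP readings rather than study data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return mean bias and 95% limits of agreement for paired readings.

    Bias is computed as a - b, so a positive bias means method_a reads
    higher than method_b on average.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired readings (mm Hg) -- not values from the study.
gat = [14, 16, 18, 21, 15, 17, 20, 13]
icare = [13, 15, 19, 20, 14, 16, 18, 13]

bias, (lo, hi) = bland_altman(gat, icare)
```

With bias defined as GAT minus Icare Home, a positive mean bias corresponds to underestimation by the home tonometer, matching the direction reported above.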
Role of advanced technology in the detection of sight-threatening eye disease in a UK community setting.
Background/aims: To determine the performance of combinations of structural and functional screening tests in detecting sight-threatening eye disease in a cohort of elderly subjects recruited from primary care.
Methods: 505 subjects aged ≥60 years underwent frequency doubling technology (FDT) perimetry, iVue optical coherence tomography (iWellness and peripapillary retinal nerve fibre layer (RNFL) scans) and intraocular pressure measurement with the Ocular Response Analyzer, all performed by an ophthalmic technician. The reference standard was a full ophthalmic examination by an experienced clinician who was masked to the index test results. Subjects were classified by presence or absence of sight-threatening eye disease (clinically significant cataract, primary open-angle glaucoma, intermediate or advanced age-related macular degeneration and significant diabetic retinopathy). Univariate and multivariate logistic regression analyses were used to determine the association between abnormal screening test results and the presence of sight-threatening eye disease.
Results: 171 subjects (33.8%) had one or more sight-threatening eye diseases. The multivariate analysis found significant associations with any of the target conditions for visual acuity of <6/12, an abnormal FDT result and peripapillary RNFL thickness outside the 99% normal limit. The sensitivity of this optimised screening panel was 61.3% (95% CI 53.5 to 68.7), with a specificity of 78.8% (95% CI 74.0 to 83.1), a positive predictive value of 59.5% (95% CI 53.7 to 65.2) and an overall diagnostic accuracy of 72.9% (95% CI 68.8 to 76.8).
Conclusions: A subset of screening tests may provide an accurate and efficient means of population screening for significant eye disease in the elderly. This study provides useful preliminary data to inform the development of larger, multicentre screening studies to validate this screening panel.
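The accuracy figures reported for the screening panel all derive from a 2×2 confusion table. A minimal sketch of the calculations, with cell counts chosen by me to be roughly consistent with the abstract's 505 subjects, 171 diseased and reported percentages (the exact counts are my assumption, not published values):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)  # proportion of diseased correctly flagged
    specificity = tn / (tn + fp)  # proportion of healthy correctly passed
    ppv = tp / (tp + fp)          # positive predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical counts: 105 + 66 = 171 diseased, 71 + 263 = 334 healthy,
# giving roughly the abstract's 61% sensitivity and 79% specificity.
sens, spec, ppv, acc = diagnostic_metrics(tp=105, fp=71, fn=66, tn=263)
```

Note how the positive predictive value (~60%) sits well below the sensitivity here: with a 33.8% prevalence and a 21% false positive rate, a substantial share of positive screens are false alarms.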
Noncontact Screening Methods for the Detection of Narrow Anterior Chamber Angles
PURPOSE: To compare the diagnostic accuracy of biomicroscope techniques (the van Herick and Smith's tests, evaluating limbal and central anterior chamber depth, respectively) and advanced imaging (Visante OCT and Pentacam) for the detection of gonioscopically narrow anterior chamber angles (ACAs).
METHODS: A total of 78 subjects with narrow or open ACAs underwent four index tests, performed on both eyes by examiners masked to the other test results. Diagnostic performance was compared with gonioscopy, using the International Society for Geographical and Epidemiological Ophthalmology (ISGEO) definition of primary angle closure and a classification based on clinical opinion of occludability. Data were analyzed using both the eye and the individual as the unit of analysis. Sensitivity, specificity, and partial area under the receiver operating characteristic curve (AUROC) were generated.
RESULTS: Using the eye as the unit of analysis, the van Herick grading cutoff of 25% or less and the ISGEO gonioscopic classification achieved 80% (confidence interval [CI] 65 to 89) sensitivity and 92% specificity (CI 80 to 97) for narrow angle detection, with specificity reaching 97% (CI 87 to 100) for a cutoff of less than or equal to 15%. Notably, with a gonioscopic classification based on clinical opinion of occludability, van Herick (≤25%) together with Smith's test (≤2.50 mm) detected 100% of narrow angle subjects. Of the three Pentacam parameters, anterior chamber volume achieved the highest test sensitivity of 85% (CI 70 to 94) using the ISGEO definition. Visante OCT ACA had the greatest partial AUROC at 90% specificity, also yielding sensitivity and specificity greater than 85% using the Youden-derived cutoff of less than or equal to 20.7° and the ISGEO definition.
CONCLUSIONS: The van Herick test and Visante OCT ACA exhibited the best discrimination between narrow and open angles, both alone and in combination. The van Herick test affords advantages over Visante OCT, showing potential for identifying individuals who may benefit from further gonioscopic assessment in a case-finding or screening setting.
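The Youden-derived cutoff mentioned for the Visante OCT angle parameter is the threshold maximising Youden's J = sensitivity + specificity − 1. A minimal sketch of the selection procedure over invented angle measurements (the data and the resulting cutoff are illustrative, not the study's):

```python
def youden_cutoff(values, labels):
    """Pick the threshold maximising Youden's J = sens + spec - 1.

    A case is called positive (narrow angle) when its value is at or
    below the candidate threshold, matching a 'smaller angle = worse'
    parameter such as an anterior chamber angle in degrees.
    """
    positives = [v for v, y in zip(values, labels) if y]
    negatives = [v for v, y in zip(values, labels) if not y]
    best_j, best_cut = -1.0, None
    for thr in sorted(set(values)):
        sens = sum(v <= thr for v in positives) / len(positives)
        spec = sum(v > thr for v in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, thr
    return best_cut, best_j

# Illustrative anterior chamber angles (degrees): narrow cases (label 1)
# cluster low, open cases (label 0) cluster high -- invented data.
angles = [12, 15, 18, 20, 22, 28, 30, 33, 35, 40]
labels = [1,  1,  1,  1,  0,  0,  0,  0,  0,  0]

cut, j = youden_cutoff(angles, labels)
```

On real, overlapping distributions J falls below 1 and the chosen cutoff trades sensitivity against specificity rather than separating the groups cleanly as in this toy example.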
Care pathways for glaucoma detection and monitoring in the UK
Glaucoma presents considerable challenges in providing clinically and cost-effective care pathways. While UK population screening is not seen as justifiable, arrangements for case finding have historically been considered relatively ineffective. Detection challenges include an undetected disease burden, whether from populations failing to access services or from difficulties in delivering effective case-finding strategies, and a high false positive rate from referrals via traditional case-finding pathways. The enhanced General Ophthalmic Service (GOS) in Scotland and locally commissioned glaucoma referral filtering services (GRFS) elsewhere have undoubtedly reduced false positive referrals, and there is emerging evidence of the effectiveness of these pathways. At the same time, it is recognised that implementing GRFS does not intrinsically reduce the burden of undetected glaucoma and late presentation, and obvious challenges remain. In terms of diagnosis and monitoring, considerable growth in capacity remains essential, and co-management by non-medical health care professionals (HCPs) and virtual clinics continue to be important solutions in offering the requisite capacity. National guidelines, commissioning recommendations, and the Common Clinical Competency Framework have clarified the requirements for such services, including recommendations on training and accreditation of HCPs. At the same time, the nature of consultant-delivered care and expectations of the glaucoma specialist's role have evolved alongside these developments. Despite progress in recent decades, given projected capacity requirements, further innovation in care pathways appears mandated. While the timeline for implementing potential artificial intelligence innovations in streamlining care pathways is far from established, the scale of the glaucoma burden creates an expectation that such technologies will need to be at the vanguard of future service developments.
Is cardiac surgery warranted in children with Down syndrome? A case-controlled review
Objectives. To compare the burden placed on the health care system by the repair of congenital heart disease in children with Down syndrome and children without Down syndrome at Red Cross War Memorial Children’s Hospital, Cape Town, South Africa.
Design. A retrospective case-control review.
Setting. Red Cross War Memorial Children’s Hospital, Cape Town, South Africa.
Subjects. A sample group of 50 children with Down syndrome who received cardiac surgery between January 1998 and June 2003 was compared with a control group of 50 nonsyndromic children who received cardiac surgery during the same period.
Outcome measures. Sex and diagnoses (cardiac and noncardiac), number of days spent in hospital and in ICU, complication rates, re-operation rates, early mortality rates, and planned further cardiac surgery. Costs of these outcomes were not quantified in exact monetary terms.
Results. There was no significant difference between the two groups in the burden placed on the health care system. Similar complication, re-operation and early mortality rates were recorded for both groups. The Down syndrome group appeared to benefit more from cardiac surgery than the non-Down syndrome group.
Conclusion. Denying cardiac surgery to children with Down syndrome does not improve the efficiency of resource allocation. It is therefore not reasonable to suggest that the problem of scarce resources can be ameliorated by discriminating against children with Down syndrome.
Trends in diabetic retinopathy screening attendance and associations with vision impairment attributable to diabetes in a large nationwide cohort
AIMS: To investigate diabetic retinopathy screening attendance and trends in certified vision impairment caused by diabetic eye disease.
METHODS: This was a retrospective study of attendance in three urban diabetic eye screening programmes in England. A survival analysis was performed to investigate time from diagnosis to first screen by age and sex. Logistic regression analysis of factors influencing screening attendance during a 15-month reporting period was conducted, as well as analysis of new vision impairment certifications (Certificates of Vision Impairment) in England and Wales from 2009 to 2019.
RESULTS: Of those newly registered in the Routine Digital Screening pathway (n = 97 048), 80% attended screening within the first 12 months and 88% by 36 months. Time from registration to first eye screening was longer for people aged 18-34 years, and 20% were unscreened after 3 years. Delay in first screen was associated with an increased risk of referable retinopathy. Although 95% of participants (n = 291 296) attended during the 15-month reporting period, uptake varied considerably. Younger age, social deprivation, ethnicity and duration of diabetes were independent predictors of non-attendance and referable retinopathy. Although the last 10 years have seen an overall reduction in vision impairment certification attributable to diabetic eye disease, the incidence of vision impairment in those aged <35 years was unchanged.
CONCLUSIONS: Whilst the majority of participants are screened in a timely manner, there is considerable variation in uptake. Young adults have sub-optimal attendance, and levels of vision impairment in this population have not changed over the last 10 years. There is an urgent need to explore barriers to, and enablers of, attendance in this group to inform policy initiatives and tailored interventions.
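The survival analysis of time from registration to first screen is a time-to-event problem with censoring (people not yet screened at last follow-up). A common estimator for such data is Kaplan-Meier, S(t) = Π(1 − dᵢ/nᵢ) over event times; the abstract does not specify the estimator used, so this is a generic sketch with invented follow-up times in months (screened = 1, censored = 0):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each event time t, multiply the
    running survival by (1 - d/n), where d = events at t and n = number
    still at risk. Returns (time, survival) pairs at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t)             # all leaving at t
        if d > 0:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Invented cohort: months from registration to first screen;
# 1 = screened, 0 = still unscreened (censored) at last follow-up.
times = [2, 3, 3, 6, 9, 12, 15, 36]
events = [1, 1, 1, 1, 1, 1, 0, 1]

curve = kaplan_meier(times, events)
```

Here "survival" is the probability of remaining unscreened at time t; censored subjects reduce the at-risk count without dropping the curve, which is what distinguishes this from a naive cumulative proportion.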
Diagnostic accuracy of technologies for glaucoma case-finding in a community setting
DESIGN: Cross-sectional, observational, community-based study.
PARTICIPANTS: A total of 505 subjects aged ≥60 years recruited from a community setting using no predefined exclusion criteria.
METHODS: Subjects underwent 4 index tests conducted by a technician unaware of subjects' ocular status. Frequency doubling technology (FDT) perimetry and the Moorfields motion displacement test (MMDT) were used in suprathreshold mode. iVue OCT measured ganglion cell complex and retinal nerve fiber layer (RNFL) thickness. The reference standard was a full ophthalmic examination by an experienced clinician who was masked to the index test results. Subjects were classified as primary open-angle glaucoma (POAG; open drainage angle, glaucomatous optic neuropathy, and glaucomatous field defect), glaucoma suspect, ocular hypertension, or non-POAG/nonocular hypertension.
MAIN OUTCOME MEASURES: Test performance was evaluated using the individual as the unit of analysis. Diagnostic accuracy was assessed using predefined cutoffs for abnormality, generating sensitivity, specificity, and likelihood ratios. Continuous data were used to derive estimates of sensitivity at 90% specificity and partial area under the receiver operating characteristic curve (AUROC) plots from 90% to 100% specificity.
RESULTS: From the reference standard examination, 26 subjects (5.1%) had POAG and 32 subjects (6.4%) were glaucoma suspects. Sensitivity (95% confidence interval) at 90% specificity for detection of glaucoma suspect/POAG combined was 41% (28-55) for FDT, 35% (21-48) for MMDT, and 57% (44-70) for the best-performing OCT parameter (inferior quadrant RNFL thickness); for POAG, sensitivity was 62% (39-84) for FDT, 58% (37-78) for MMDT, and 83% (68-98) for inferior quadrant RNFL thickness. Partial AUROC was significantly greater for inferior RNFL thickness than for the visual-function tests (P < 0.001). Post-test probability of glaucoma suspect/POAG combined and of definite POAG increased substantially when the best-performing criteria were combined for FDT or MMDT, iVue OCT, and the Ocular Response Analyzer (ORA).
CONCLUSIONS: Individual tests achieved acceptable accuracy for POAG detection. The low specificity of the visual-function tests precludes their use in isolation, but case detection improves when RNFL thickness analysis is combined with visual-function tests.
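The post-test probability calculation behind combining tests follows Bayes' rule on the odds scale: post-test odds = pre-test odds × LR, with the likelihood ratios of (assumed conditionally independent) tests multiplied together. A minimal sketch using the abstract's 5.1% POAG prevalence as the pre-test probability; the likelihood ratio values are invented for illustration, not the study's estimates:

```python
def post_test_probability(pre_test_prob, likelihood_ratios):
    """Combine a pre-test probability with one or more positive-test
    likelihood ratios, assuming conditional independence of the tests."""
    odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr                              # Bayes on the odds scale
    return odds / (1 + odds)                    # odds -> probability

# Pre-test probability from the abstract (5.1% POAG prevalence);
# the two positive-test likelihood ratios below are hypothetical.
p = post_test_probability(0.051, [6.0, 4.0])
```

This shows why combining tests is powerful at low prevalence: two moderately informative positives can lift a ~5% prior to a majority probability, though the independence assumption typically overstates the gain when tests measure related structures.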