13,054 research outputs found

    FIRCLA, one-loop correction to e+ e- to nu anti-nu H and basis of Feynman integrals in higher dimensions

    An approach for an effective computer evaluation of one-loop multi-leg diagrams is proposed. Its main feature is the combined use of several systems: DIANA, FORM, and MAPLE. As an application we consider the one-loop correction to Higgs production in e+ e- to nu anti-nu H, which is important for future e+ e- colliders. To improve the stability of numerical evaluations, a non-standard basis of integrals is introduced by transforming integrals to higher dimensions. Comment: 6 pages, 1 figure; reference to G. Belanger et al. added.

    Glaucoma Home Monitoring Using a Tablet-Based Visual Field Test (Eyecatcher): An Assessment of Accuracy and Adherence Over 6 Months

    PURPOSE: To assess accuracy and adherence of visual field (VF) home monitoring in a pilot sample of patients with glaucoma. DESIGN: Prospective longitudinal feasibility and reliability study. METHODS: Twenty adults (median age 71 years) with an established diagnosis of glaucoma were issued a tablet perimeter (Eyecatcher) and were asked to perform 1 VF home assessment per eye, per month, for 6 months (12 tests total). Before and after home monitoring, 2 VF assessments were performed in clinic using standard automated perimetry (4 tests total, per eye). RESULTS: All 20 participants could perform monthly home monitoring, though 1 participant stopped after 4 months (adherence: 98% of tests). There was good concordance between VFs measured at home and in the clinic (r = 0.94, P < .001). In 21 of 236 tests (9%), mean deviation deviated by more than ±3 dB from the median. Many of these anomalous tests could be identified by applying machine learning techniques to recordings from the tablets' front-facing camera (area under the receiver operating characteristic curve = 0.78). Adding home-monitoring data to 2 standard automated perimetry tests made 6 months apart reduced measurement error (between-test measurement variability) in 97% of eyes, with mean absolute error more than halving in 90% of eyes. Median test duration was 4.5 minutes (quartiles: 3.9-5.2 minutes). Substantial variations in ambient illumination had no observable effect on VF measurements (r = 0.07, P = .320). CONCLUSIONS: Home monitoring of VFs is viable for some patients and may provide clinically useful data.
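
    The ±3 dB anomaly criterion described above can be illustrated with a short, hedged sketch (Python; the data layout, function name, and numbers are assumptions for demonstration, not the study's code). It flags any home test whose mean deviation departs from the per-eye median by more than 3 dB.

    from statistics import median

    ANOMALY_THRESHOLD_DB = 3.0  # |MD - per-eye median MD| above this flags a test

    def flag_anomalous_tests(md_series_db):
        # Return indices of tests whose mean deviation (MD, in dB) deviates
        # by more than the threshold from the median MD of that eye's series.
        eye_median = median(md_series_db)
        return [i for i, md in enumerate(md_series_db)
                if abs(md - eye_median) > ANOMALY_THRESHOLD_DB]

    # Hypothetical monthly MD values (dB) for one eye; the fourth test is flagged.
    monthly_md = [-4.2, -4.5, -3.9, -9.1, -4.4, -4.3]
    print(flag_anomalous_tests(monthly_md))  # -> [3]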

    Acceptability of a home-based visual field test (Eyecatcher) for glaucoma home monitoring: a qualitative study of patients' views and experiences

    Objectives: To explore the acceptability of home visual field (VF) testing using Eyecatcher among people with glaucoma participating in a 6-month home monitoring pilot study. Design: Qualitative study using face-to-face semistructured interviews. Transcripts were analysed using thematic analysis. Setting: Participants were recruited in the UK through an advertisement in the International Glaucoma Association (now Glaucoma UK) newsletter. Participants: Twenty adults (10 women; median age: 71 years) with a diagnosis of glaucoma were recruited (including open angle and normal tension glaucoma; mean deviation = 2.5 to -29.9 dB). Results: All participants could successfully perform VF testing at home. Interview data were coded into four overarching themes regarding experiences of undertaking VF home monitoring and attitudes towards its wider implementation in healthcare: (1) comparisons between Eyecatcher and the Humphrey Field Analyser (HFA); (2) capability using Eyecatcher; (3) practicalities for effective wider scale implementation; (4) motivations for home monitoring. Conclusions: Participants identified a broad range of benefits to VF home monitoring and discussed areas for service improvement. Eyecatcher was compared positively with conventional VF testing using the HFA. Home monitoring may be acceptable to at least a subset of people with glaucoma.

    The Human Touch: Using a Webcam to Autonomously Monitor Compliance During Visual Field Assessments

    Purpose: To explore the feasibility of using various easy-to-obtain biomarkers to monitor non-compliance (measurement error) during visual field assessments. Methods: Forty-two healthy adults (42 eyes) and seven glaucoma patients (14 eyes) underwent two same-day visual field assessments. An ordinary webcam was used to compute seven potential biomarkers of task compliance, based primarily on eye gaze, head pose, and facial expression. We quantified the association between each biomarker and measurement error, as defined by (1) test-retest differences in overall test scores (mean sensitivity), and (2) failures to respond to visible stimuli on individual trials (stimuli 3 dB or more brighter than threshold). Results: In healthy eyes, three of the seven biomarkers were significantly associated with overall (test-retest) measurement error (P = 0.003-0.007), and at least two others exhibited possible trends (P = 0.052-0.060). The weighted linear sum of all seven biomarkers was associated with overall measurement error, in both healthy eyes (r = 0.51, P < 0.001) and patients (r = 0.65, P < 0.001). Five biomarkers were each associated with failures to respond to visible stimuli on individual trials (all P < 0.001). Conclusions: Inexpensive, autonomous measures of task compliance are associated with measurement error in visual field assessments, in terms of both the overall reliability of a test and failures to respond on particular trials ("lapses"). This could be helpful for identifying low-quality assessments and for improving assessment techniques (e.g., by discounting suspect responses or by automatically triggering comfort breaks or encouragement). Translational Relevance: This study explores a potential way of improving the reliability of visual field assessments, a crucial but notoriously unreliable clinical measure.
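
    The weighted linear sum of biomarkers mentioned above can be sketched as follows (Python with NumPy; the biomarker matrix, weights, and data are hypothetical, and this is not the study's implementation). It fits least-squares weights to seven biomarker columns and reports the Pearson correlation between the combined compliance score and measurement error.

    import numpy as np

    def fit_compliance_score(biomarkers, measurement_error):
        # Least-squares weights for a linear combination of biomarker columns
        # (plus an intercept) that tracks measurement error; returns the
        # weights, the fitted compliance scores, and their Pearson r.
        X = np.column_stack([biomarkers, np.ones(len(measurement_error))])
        weights, *_ = np.linalg.lstsq(X, measurement_error, rcond=None)
        scores = X @ weights
        r = np.corrcoef(scores, measurement_error)[0, 1]
        return weights, scores, r

    # Hypothetical data: 10 eyes x 7 webcam biomarkers (gaze, head pose, ...).
    rng = np.random.default_rng(0)
    biomarkers = rng.normal(size=(10, 7))
    error = biomarkers @ rng.normal(size=7) + rng.normal(scale=0.5, size=10)
    _, _, r = fit_compliance_score(biomarkers, error)
    print(f"Pearson r between compliance score and measurement error: {r:.2f}")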

    From ‘other’ to involved: User involvement in research: An emerging paradigm

    This article explores the issue of ‘othering’ service users and the role that involving them, particularly in social policy and social work research, may play in reducing this. It takes, as its starting point, the concept of ‘social exclusion’, which has developed in Europe, and the marginal role that those who have been included in this construct have played in its development, and the damaging effects this may have. The article explores service user involvement in research and is itself written from a service user perspective. It pays particular attention to the ideological, practical, theoretical, ethical and methodological issues that such user involvement may raise for research. It examines problems that both research and user involvement may give rise to and also considers developments internationally to involve service users/subjects of research, highlighting some of the possible implications and gains of engaging service user knowledge in research and the need for this to be evaluated.

    Defining the cognitive phenotype of autism

    Although much progress has been made in determining the cognitive profile of strengths and weaknesses that characterise individuals with autism spectrum disorders (ASDs), there remain a number of outstanding questions. These include how universal strengths and deficits are; whether cognitive subgroups exist; and how cognition is associated with core autistic behaviours, as well as associated psychopathology. Several methodological factors have contributed to these limitations in our knowledge, including: small sample sizes, a focus on single domains of cognition, and an absence of comprehensive behavioural phenotypic information. To attempt to overcome some of these limitations, we assessed a wide range of cognitive domains in a large sample (N = 100) of 14- to 16-year-old adolescents with ASDs who had been rigorously behaviourally characterised. In this review, we will use examples of some initial findings in the domains of perceptual processing, emotion processing and memory, both to outline different approaches we have taken to data analysis and to highlight the considerable challenges to better defining the cognitive phenotype(s) of ASDs. Enhanced knowledge of the cognitive phenotype may contribute to our understanding of the complex links between genes, brain and behaviour, as well as inform approaches to remediation.

    Design of a speed meter interferometer proof-of-principle experiment

    The second generation of large scale interferometric gravitational wave detectors will be limited by quantum noise over a wide frequency range in their detection band. Further sensitivity improvements for future upgrades or new detectors beyond the second generation motivate the development of measurement schemes to mitigate the impact of quantum noise in these instruments. Two strands of development are being pursued to reach this goal, focusing both on modifications of the well-established Michelson detector configuration and on the development of different detector topologies. In this paper, we present the design of the world's first Sagnac speed meter interferometer, which is currently being constructed at the University of Glasgow. With this proof-of-principle experiment we aim to demonstrate the theoretically predicted lower quantum noise in a Sagnac interferometer compared to an equivalent Michelson interferometer, to qualify Sagnac speed meters for further research towards an implementation in a future generation large scale gravitational wave detector, such as the planned Einstein Telescope observatory. Comment: Revised version: 16 pages, 6 figures.

    Process evaluation for complex interventions in primary care: understanding trials using the normalization process model

    Background: The Normalization Process Model is a conceptual tool intended to assist in understanding the factors that affect implementation processes in clinical trials and other evaluations of complex interventions. It focuses on the ways that the implementation of complex interventions is shaped by problems of workability and integration. Method: In this paper the model is applied to two different complex trials: (i) the delivery of problem solving therapies for psychosocial distress, and (ii) the delivery of nurse-led clinics for heart failure treatment in primary care. Results: Application of the model shows how process evaluations need to focus on more than the immediate contexts in which trial outcomes are generated. Problems relating to intervention workability and integration also need to be understood. The model may be used effectively to explain the implementation process in trials of complex interventions. Conclusion: The model invites evaluators to attend equally to considering how a complex intervention interacts with existing patterns of service organization, professional practice, and professional-patient interaction. The justification for this may be found in the abundance of reports of clinical effectiveness for interventions that have little hope of being implemented in real healthcare settings.