
    RESPOND – A patient-centred programme to prevent secondary falls in older people presenting to the emergency department with a fall: Protocol for a mixed methods programme evaluation.

    Background: Programme evaluations conducted alongside randomised controlled trials (RCTs) have the potential to enhance understanding of trial outcomes. This paper describes a multi-level programme evaluation to be conducted alongside an RCT of a falls prevention programme (RESPOND).
    Objectives: 1) To conduct a process evaluation in order to identify the degree of implementation fidelity and the associated barriers and facilitators. 2) To evaluate the primary intended impact of the programme: participation in fall prevention strategies and the factors influencing participation. 3) To identify the factors influencing RESPOND RCT outcomes: falls, fall injuries and emergency department (ED) re-presentations.
    Methods/Design: Five hundred and twenty-eight community-dwelling adults aged 60–90 years presenting to two EDs with a fall will be recruited and randomly assigned to the intervention or standard care group. All RESPOND participants and RESPOND clinicians will be included in the evaluation. A mixed methods design will be used and a programme logic model will frame the evaluation. Data will be sourced from interviews, focus groups, questionnaires, clinician case notes, recruitment records, participant-completed calendars, hospital administrative datasets, and audio-recordings of intervention contacts. Quantitative data will be analysed via descriptive and inferential statistics, and qualitative data will be interpreted using thematic analysis.
    Discussion: The RESPOND programme evaluation will provide information about contextual and influencing factors related to the RCT outcomes. The results will assist researchers, clinicians, and policy makers in making decisions about future falls prevention interventions. Insights gained are likely to be transferable to preventive health programmes for a range of chronic conditions.

    Every student counts: promoting numeracy and enhancing employability

    This three-year project investigated factors that influence the development of undergraduates’ numeracy skills, with a view to identifying ways to improve them and thereby enhance student employability. Its aims and objectives were to ascertain: the generic numeracy skills in which employers expect their graduate recruits to be competent, and the extent to which employers are using numeracy tests as part of graduate recruitment processes; the numeracy skills developed within a diversity of academic disciplines; the prevalence of factors that influence undergraduates’ development of their numeracy skills; how the development of numeracy skills might be better supported within undergraduate curricula; and the extra-curricular support necessary to enhance undergraduates’ numeracy skills.

    Mitigating Turnover with Code Review Recommendation: Balancing Expertise, Workload, and Knowledge Distribution

    Developer turnover is inevitable on software projects and leads to knowledge loss, a reduction in productivity, and an increase in defects. Mitigation strategies to deal with turnover tend to disrupt and increase workloads for developers. In this work, we suggest that, through code review recommendation, we can distribute knowledge and mitigate turnover with minimal impact on the development process. We evaluate review recommenders in the context of ensuring expertise during review (Expertise), reducing the review workload of the core team (CoreWorkload), and reducing the Files at Risk to turnover (FaR). We find that prior work that assigns reviewers based on file ownership concentrates knowledge in a small group of core developers, increasing the risk of knowledge loss from turnover by up to 65%. We propose learning- and retention-aware review recommenders that, when combined, are effective at reducing the risk of turnover (-29%), but they unacceptably reduce the overall expertise during reviews (-26%). We develop the Sophia recommender, which suggests experts when none of the files under review are hoarded by developers but distributes knowledge when files are at risk. In this way, we are able to simultaneously increase expertise during review (ΔExpertise of 6%) with a negligible impact on workload (ΔCoreWorkload of 0.09%), and to reduce the files at risk (ΔFaR of -28%). Sophia is integrated into GitHub pull requests, allowing developers to select an appropriate expert or “learner” based on the context of the review. We release the Sophia bot as well as the code and data for replication purposes.
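
    The core decision rule described in this abstract (suggest an expert reviewer when none of the changed files are knowledge-hoarded, otherwise suggest a "learner" so knowledge of at-risk files is spread) can be illustrated with a minimal Python sketch. The contact-count knowledge model, the min_knowers threshold, and all function and variable names below are assumptions made for illustration only; they are not the published Sophia implementation or the released bot code.

    from collections import defaultdict

    def file_knowledge(review_history):
        """Count past review/authoring contacts each developer has had with each file."""
        knowledge = defaultdict(lambda: defaultdict(int))  # file -> developer -> contact count
        for dev, path in review_history:
            knowledge[path][dev] += 1
        return knowledge

    def is_hoarded(dev_contacts, min_knowers=2):
        """Treat a file as hoarded (at risk) if fewer than `min_knowers` developers know it."""
        return len(dev_contacts) < min_knowers

    def recommend_reviewer(changed_files, knowledge, author):
        """Return (candidate, role): an expert for safe reviews, a learner for at-risk files."""
        at_risk = [f for f in changed_files if is_hoarded(knowledge.get(f, {}))]
        # Aggregate how much each developer (other than the author) knows about this change.
        expertise = defaultdict(int)
        for f in changed_files:
            for dev, contacts in knowledge.get(f, {}).items():
                if dev != author:
                    expertise[dev] += contacts
        if not at_risk:
            # No hoarded files: keep review quality high by picking the most expert reviewer.
            return max(expertise, key=expertise.get, default=None), "expert"
        # Some files are at risk: pick a developer with little prior contact to spread knowledge.
        candidates = {d for contacts in knowledge.values() for d in contacts if d != author}
        learner = min(candidates, key=lambda d: expertise.get(d, 0), default=None)
        return learner, "learner"

    # Example usage with toy data.
    history = [("alice", "core/auth.py"), ("alice", "core/auth.py"),
               ("bob", "api/routes.py"), ("carol", "api/routes.py")]
    know = file_knowledge(history)
    print(recommend_reviewer(["core/auth.py"], know, author="alice"))  # auth.py is hoarded -> a learner
    print(recommend_reviewer(["api/routes.py"], know, author="bob"))   # routes.py has two knowers -> the expert

    The two-branch design mirrors the trade-off the abstract measures: the "expert" branch protects review Expertise, while the "learner" branch trades a little expertise on at-risk files for a lower FaR.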