
    Effectiveness of assistive technology in improving the safety of people with dementia: a systematic review and meta-analysis.

    Objectives: Assistive technology (AT) may enable people with dementia to live safely at home for longer, preventing care home admission. This systematic review assesses the effectiveness of AT in improving the safety of people with dementia living in the domestic setting, by searching for randomised controlled trials, non-randomised controlled trials and controlled before-after studies that compared safety AT with treatment as usual. Measures of safety include care home admission; risky behaviours, accidents and falls at home; and number of deaths. The review updates the safety aspect of Fleming and Sum's 2014 systematic review.

    Method: Seven bibliographic databases, the Social Care Institute for Excellence website and the Alzheimer's Society website were searched for published and unpublished literature from 2011 to 2016. Search terms related to AT, dementia and older people. Common outcomes were meta-analysed.

    Results: Three randomised controlled trials were identified, including 245 people with dementia. No significant difference was found between intervention and control groups in care home admission (risk ratio 0.85, 95% CI [0.37, 1.97]; Z = 0.37; p = 0.71). The probability of a fall occurring was 50% lower in the intervention group (risk ratio 0.50, 95% CI [0.32, 0.78]; Z = 3.03; p = 0.002). One included study found that a home safety package containing AT significantly reduced risky behaviour and accidents (F(45) = 4.504, p < 0.001). Limitations include the small number of studies found and the inclusion of English-language studies only.

    Conclusion: AT's effectiveness in decreasing care home admission is inconclusive. However, the AT items and packages tested improved safety by reducing falls risk, accidents and other risky behaviour.
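The Z statistics quoted in the results follow from the risk ratios and their confidence intervals via the usual Wald test on the log scale. A minimal sketch of that arithmetic (not part of the review itself; it assumes the reported 95% CIs are symmetric on the log scale, the standard construction for risk ratios):

```python
import math

def z_from_rr(rr, ci_low, ci_high, conf_z=1.96):
    """Recover the Wald Z statistic and two-sided p-value for a risk
    ratio from its point estimate and 95% confidence interval."""
    log_rr = math.log(rr)
    # A 95% CI spans +/- 1.96 standard errors on the log scale,
    # so the SE is the log-scale CI width divided by 2 * 1.96.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * conf_z)
    z = abs(log_rr) / se
    p = math.erfc(z / math.sqrt(2))  # two-sided p under a normal null
    return z, p

# Care home admission: RR 0.85, 95% CI [0.37, 1.97] -> Z near 0.37
z_adm, p_adm = z_from_rr(0.85, 0.37, 1.97)

# Falls: RR 0.50, 95% CI [0.32, 0.78] -> Z near 3.03, p near 0.002
z_fall, p_fall = z_from_rr(0.50, 0.32, 0.78)
```

Running this reproduces the review's figures to within rounding of the published confidence limits, which is a useful sanity check when reading meta-analytic results.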

    Evaluating early intervention programmes: six common pitfalls, and how to avoid them.

    This handy guide addresses six of the most common issues we see in our assessments of programme evaluations, explaining how each problem undermines confidence in a study's findings and how it can be avoided or rectified, with case studies and a list of useful resources in each case. Whether you are involved in commissioning, planning or delivering evaluations of early intervention, these are the issues to understand and watch out for.

    Why does avoiding these common pitfalls in evaluation matter? High-quality evidence on 'what works' plays an essential part in improving the design and delivery of public services, and ultimately outcomes for the people who use those services. Early intervention is no different: early intervention programmes should be commissioned, managed and delivered to produce the best possible results for children and young people at risk of developing long-term problems. EIF has conducted over 100 in-depth assessments of the evidence for the effectiveness of programmes designed to improve outcomes for children.

    These programme assessments consider not only the findings of the evidence – whether the evidence suggests that a programme is effective or not – but also the quality of that evidence. Studies investigating the impact of programmes vary in the extent to which they are robust and have been well planned and properly carried out. Less robust and less well-conducted studies are prone to produce biased results, meaning that they may overstate the effectiveness of a programme. In the worst case, less robust studies may mislead us into concluding a programme is effective when it is not effective at all. Therefore, to understand what the evidence tells us about a programme's effectiveness, it is also essential to consider the quality of the process by which that evidence has been generated.