
    A surveillance system to assess the need for updating systematic reviews.

    Background: Systematic reviews (SRs) can become outdated as new evidence emerges over time. Organizations that produce SRs need a surveillance method to determine when reviews are likely to require updating. This report describes the development and initial results of a surveillance system to assess SRs produced by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program. Methods: Twenty-four SRs were assessed using existing methods that incorporate limited literature searches, expert opinion, and quantitative methods to detect signals triggering the need for updating. The system was designed to begin surveillance six months after the release of the original review, and thenceforth every six months for any review not classified as a high priority for updating. The outcome of each round of surveillance was a classification of the SR as low, medium, or high priority for updating. Results: Twenty-four SRs underwent surveillance at least once, and ten underwent surveillance a second time during the 18 months of the program. Two SRs were classified as high, five as medium, and 17 as low priority for updating. The time lapse between the searches conducted for the original reports and the updated searches (search time lapse, STL) ranged from 11 to 62 months: the STLs for the high-priority reports were 29 and 54 months; those for medium-priority reports ranged from 19 to 62 months; and those for low-priority reports ranged from 11 to 33 months. Neither the STL nor the number of new relevant articles was perfectly associated with a signal for updating. Challenges of implementing the surveillance system included determining what constituted the actual conclusions of an SR that required assessment, and sometimes poor response rates from experts. Conclusion: In this system of regular surveillance of 24 systematic reviews on a variety of clinical interventions produced by a leading organization, about 70% of reviews were determined to have a low priority for updating. The evidence suggests that a yearly surveillance interval is more appropriate than the six months used in this project.
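
    The surveillance cycle described above lends itself to a simple scheduling loop. The sketch below is a hypothetical Python illustration only: the class, the function names, and the toy classification rule are assumptions made for clarity, not the EPC program's actual criteria or tooling.

        from dataclasses import dataclass

        SURVEILLANCE_INTERVAL_MONTHS = 6  # the report's evidence suggests 12 may suffice

        @dataclass
        class SystematicReview:
            title: str
            months_since_release: int
            priority: str = "unassessed"  # becomes "low", "medium", or "high"

        def assess(new_relevant_articles: int, expert_signal: bool) -> str:
            """Toy stand-in for the signal-detection step (limited literature
            searches, expert opinion, and quantitative checks)."""
            if expert_signal and new_relevant_articles > 0:
                return "high"
            if expert_signal or new_relevant_articles > 0:
                return "medium"
            return "low"

        def surveillance_round(reviews, signals):
            """One round: reviews classified high priority leave the cycle and
            are queued for a full update; all others are re-checked in six months."""
            still_monitored = []
            for review in reviews:
                if review.months_since_release < SURVEILLANCE_INTERVAL_MONTHS:
                    still_monitored.append(review)  # first check not yet due
                    continue
                articles, expert = signals[review.title]
                review.priority = assess(articles, expert)
                if review.priority == "high":
                    print(f"queue for full update: {review.title}")
                else:
                    still_monitored.append(review)
            return still_monitored

        # Invented example: one review with updating signals, one without.
        reviews = [SystematicReview("SR A", 6), SystematicReview("SR B", 12)]
        signals = {"SR A": (4, True), "SR B": (0, False)}
        remaining = surveillance_round(reviews, signals)  # SR A queued; SR B kept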

    Assessment of a method to detect signals for updating systematic reviews.

    Background: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up to date. Methods for detecting signals that a systematic review needs updating have face validity, but no proposed method has had an assessment of predictive validity performed. Methods: The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009. Eleven of these were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report: four CERs were judged high priority for updating, four medium priority, and three low priority. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority. Results: The 9 CERs included 149 individual conclusions, 84% of which had matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both of the updated CERs originally judged low priority for updating had no substantive changes to their conclusions in the actual updated report. The agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74. Conclusions: These results provide some support for the validity of a surveillance system for detecting signals that a systematic review needs updating.
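
    The summary agreement reported above, Kappa = 0.74, is Cohen's kappa, which discounts the raw agreement between two sets of ratings by the agreement expected from chance alone: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the chance agreement implied by each rater's marginal category frequencies. A minimal Python sketch follows; the ratings are invented for illustration and are not the study's data.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa: chance-corrected agreement between two raters
            on the same items, kappa = (p_o - p_e) / (1 - p_e)."""
            n = len(rater_a)
            assert n == len(rater_b) and n > 0
            # observed proportion of exact agreement
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            # chance agreement from the marginal category frequencies
            p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
                      for c in set(freq_a) | set(freq_b))
            return (p_o - p_e) / (1 - p_e)

        # Invented example: predicted vs. actual updating priority for 9 reports.
        predicted = ["high", "high", "high", "high", "medium",
                     "medium", "medium", "low", "low"]
        actual    = ["high", "high", "high", "medium", "medium",
                     "medium", "low", "low", "low"]
        print(round(cohens_kappa(predicted, actual), 2))  # 0.67 on this toy data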

    Best practice in undertaking and reporting health technology assessments : Working Group 4 report

    [Executive Summary] The aim of Working Group 4 has been to develop and disseminate best practice in undertaking and reporting assessments, and to identify needs for methodologic development. Health technology assessment (HTA) is a multidisciplinary activity that systematically examines the technical performance, safety, clinical efficacy and effectiveness, cost, cost-effectiveness, organizational implications, social consequences, and legal and ethical considerations of the application of a health technology (18). HTA activity has increased continuously in recent years. Numerous HTA agencies and other institutions (termed in this report “HTA doers”) across Europe are producing a substantial and growing amount of HTA information. The objectives of HTA vary considerably between HTA agencies and other actors, from approaches strictly oriented toward political decision making (advice on market licensure, coverage in benefits catalogues, or investment planning) to information directed at providers or the public. Although there seems to be broad agreement on the general elements that belong to the HTA process, and although HTA doers in Europe use similar principles (41), this is often difficult to see because of differences in language and terminology. In addition, the reporting of findings from the assessments differs considerably. This reduces comparability and makes it difficult for those undertaking HTAs to integrate previous findings from other HTA doers into a subsequent evaluation of the same technology. Transparent and clear reporting is an important step toward disseminating the findings of an HTA; thus, standards that ensure high-quality reporting may contribute to a wider dissemination of results. The EUR-ASSESS methodologic subgroup had already proposed a framework for conducting and reporting HTA (18), which served as the basis for the current working group. New developments in the last 5 years necessitate revisiting that framework and providing a solid structure for future updates. Giving due attention to these methodologic developments, this report describes the current “best practice” in both undertaking and reporting HTA and identifies needs for methodologic development. It concludes with specific recommendations and tools for implementing them, e.g., by providing the structure for English-language scientific summary reports and a checklist to assess the methodologic and reporting quality of HTA reports.

    Developing and evaluating complex interventions: the new Medical Research Council guidance

    Evaluating complex interventions is complicated. The Medical Research Council's evaluation framework (2000) brought welcome clarity to the task. Now the council has updated its guidance. Complex interventions are widely used in the health service, in public health practice, and in areas of social policy that have important health consequences, such as education, transport, and housing. They present various problems for evaluators, in addition to the practical and methodological difficulties that any successful evaluation must overcome. In 2000, the Medical Research Council (MRC) published a framework [1] to help researchers and research funders to recognise and adopt appropriate methods. The framework has been highly influential, and the accompanying BMJ paper is widely cited [2]. However, much valuable experience has since accumulated of both conventional and more innovative methods. This has now been incorporated in comprehensively revised and updated guidance recently released by the MRC (www.mrc.ac.uk/complexinterventionsguidance). In this article we summarise the issues that prompted the revision and the key messages of the new guidance.

    Ensuring Linguistic Access in Health Care Settings: An Overview of Current Legal Rights and Responsibilities

    Focuses on the language access responsibilities of healthcare and coverage providers pursuant to federal and state laws and policies.

    Community- and hospital-based nurses' implementation of evidence-based practice: are there any differences?

    The aim of this paper is to discuss the impact of nurses’ beliefs, knowledge, and skills on the implementation of evidence-based practice (EBP) in hospital and community settings. EBP refers to the implementation of the most up-to-date robust research into clinical practice. Barriers have been well documented and traditionally include negative beliefs of nurses as well as a lack of time, knowledge, and skills. However, with degree-entry nursing and a focus on community health care provision, what has changed? A comprehensive search of contemporary literature (2010-2015) was completed. The findings of this review show that the traditionally acknowledged barriers of a lack of time, knowledge, and skills remain. Nurses’ beliefs about EBP, however, were more positive, although positive beliefs did not affect intentions to implement EBP or knowledge and skills of EBP. Nurses in hospital and community settings reported similar barriers and facilitators.