
    Improving the use of research evidence in guideline development: 8. Synthesis and presentation of evidence

    BACKGROUND: The World Health Organization (WHO), like many other organisations around the world, has recognised the need to use more rigorous processes to ensure that health care recommendations are informed by the best available research evidence. This is the eighth of a series of 16 reviews that have been prepared as background for advice from the WHO Advisory Committee on Health Research to WHO on how to achieve this. OBJECTIVES: We reviewed the literature on the synthesis and presentation of research evidence, focusing on four key questions. METHODS: We searched PubMed and three databases of methodological studies for existing systematic reviews and relevant methodological research. We did not conduct systematic reviews ourselves. Our conclusions are based on the available evidence, consideration of what WHO and other organisations are doing, and logical arguments. KEY QUESTIONS AND ANSWERS: We found two reviews of instruments for critically appraising systematic reviews, several studies of the importance of using extensive searches for reviews and of determining when it is important to update reviews, and consensus statements about the reporting of reviews; these informed our answers to the following questions.
    How should existing systematic reviews be critically appraised?
    • Because preparing systematic reviews can take over a year and requires capacity and resources, existing reviews should be used when possible and updated, if needed.
    • Standard criteria, such as A MeaSurement Tool to Assess systematic Reviews (AMSTAR), should be used to critically appraise existing systematic reviews, together with an assessment of the relevance of the review to the questions being asked.
    When and how should WHO undertake or commission new reviews?
    • Consideration should be given to undertaking or commissioning a new review whenever a relevant, up-to-date review of good quality is not available.
    • When time or resources are limited, it may be necessary to undertake rapid assessments. The methods used for these assessments should be reported, including important limitations and uncertainties and explicit consideration of the need for, and urgency of, a full systematic review.
    • Because WHO has limited capacity for undertaking systematic reviews, reviews will often need to be commissioned when a new review is needed. Consideration should be given to establishing collaborating centres to undertake or support this work, similar to what some national organisations have done.
    How should the findings of systematic reviews be summarised and presented to committees responsible for making recommendations?
    • Concise summaries (evidence tables) of the best available evidence for each important outcome, including benefits, harms and costs, should be presented to the groups responsible for making recommendations. These should include an assessment of the quality of the evidence and a summary of the findings for each outcome.
    • The full systematic reviews on which the summaries are based should also be available to both those making recommendations and users of the recommendations.
    What additional information is needed to inform recommendations, and how should this information be synthesised with information about effects and presented to committees?
    • Additional information that is needed to inform recommendations includes factors that might modify the expected effects, need (prevalence, baseline risk or status), values (the relative importance of key outcomes), costs and the availability of resources.
    • Any assumptions that are made about values or other factors that may vary from setting to setting should be made explicit.
    • For global guidelines that are intended to inform decisions in different settings, consideration should be given to using a template to assist the synthesis of information specific to a setting with the global evidence of the effects of the relevant interventions.
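    Purely as an illustration of the kind of per-outcome evidence table the review recommends, here is a minimal Python sketch; the outcome names, effect estimates and GRADE-style quality ratings are hypothetical, not drawn from any WHO guideline.

```python
from dataclasses import dataclass

@dataclass
class OutcomeSummary:
    """One row of an evidence (summary-of-findings) table."""
    outcome: str      # an important benefit, harm or cost
    n_studies: int    # number of studies contributing evidence
    effect: str       # summary estimate with 95% CI
    quality: str      # quality-of-evidence rating, e.g. high/moderate/low/very low

# Hypothetical rows for a hypothetical intervention -- illustration only.
table = [
    OutcomeSummary("Mortality", 4, "RR 0.82 (0.70 to 0.96)", "moderate"),
    OutcomeSummary("Serious adverse events", 3, "RR 1.10 (0.85 to 1.42)", "low"),
]

for row in table:
    print(f"{row.outcome}: {row.effect} [{row.n_studies} studies; quality: {row.quality}]")
```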

    Netiquette: ethics, education and behavior on the Internet. A systematic literature review

    In this article, an analysis of the existing literature on netiquette is carried out, covering the studies indexed in the Web of Science and Scopus databases and coding each one for country, date, objectives, methodological design, main variables, sample details, and measurement methods. This systematic review of the literature was developed entirely according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The initial search yielded 53 results, of which 18 met the inclusion criteria and were analyzed in detail. These results show that this is a poorly defined line of research, both in theory and in practice. There is a need to update the theoretical framework and to analyze the empirical proposals, whose samples consist mainly of students or similar populations. Knowing, understanding, and analyzing netiquette is a necessity in a society in which information and communication technologies (ICT) have changed the way of socializing and communicating: a new reality marked by cyber-bullying, digital scams, fake news, and haters on social networks.

    A Knowledge Graph-Based Method for Automating Systematic Literature Reviews

    Systematic Literature Reviews aim at investigating current approaches in order to identify a research gap or determine a future direction. They represent a significant part of a research activity, from which new concepts stem. However, with the massive availability of publications at a rapidly growing rate, especially digitally, it becomes challenging to efficiently screen and assess relevant publications. Another challenge is the continuous assessment of related work over a long period of time and the consequent need for continuous updates, which can be a time-consuming task. Knowledge graphs model entities in a connected manner and enable new insights using different reasoning and analysis methods. The objective of this work is to present an approach to partially automate the conduct of a Systematic Literature Review, as well as to classify and visualize the results as a knowledge graph. The designed software prototype was used to conduct a review on context-awareness in automation systems, with considerably accurate results compared to a manual review.
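    The abstract gives no implementation details, but as a rough sketch of the general idea — publications and domain concepts as typed nodes, relations as edges, screening as a graph query — here is a hypothetical fragment using networkx; all titles and concepts are invented and this is not the prototype described in the paper.

```python
import networkx as nx

G = nx.DiGraph()

# Publications and domain concepts as typed nodes (all names hypothetical).
G.add_node("Paper A", kind="publication", year=2021)
G.add_node("Paper B", kind="publication", year=2022)
G.add_node("context-awareness", kind="concept")
G.add_node("automation systems", kind="concept")

# Edges record which concepts each publication addresses.
G.add_edge("Paper A", "context-awareness", relation="addresses")
G.add_edge("Paper A", "automation systems", relation="addresses")
G.add_edge("Paper B", "automation systems", relation="addresses")

# Screening step: find publications connected to a concept of interest.
relevant = [n for n in G.predecessors("context-awareness")
            if G.nodes[n]["kind"] == "publication"]
print(relevant)  # ['Paper A']
```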

    Stepped wedge randomised controlled trials: systematic review of studies published between 2010 and 2014.

    BACKGROUND: In a stepped wedge, cluster randomised trial, clusters receive the intervention at different time points, and the order in which they receive it is randomised. Previous systematic reviews of stepped wedge trials documented a steady rise in their use between 1987 and 2010, attributed to the design's perceived logistical and analytical advantages. However, the interventions included in these systematic reviews were often poorly reported, and the analysis and/or methodology used was not adequately described. Since 2010, a number of additional stepped wedge trials have been published. This article aims to update previous systematic reviews and to consider what interventions were tested and the rationale given for using a stepped wedge design. METHODS: We searched PubMed, PsycINFO, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Web of Science, the Cochrane Library and the Current Controlled Trials Register for articles published between January 2010 and May 2014. We considered stepped wedge randomised controlled trials in all fields of research. We independently extracted data from retrieved articles and reviewed them. Interventions were then coded using the functions specified by the Behaviour Change Wheel, and for behaviour change techniques using a validated taxonomy. RESULTS: Our review identified 37 stepped wedge trials, reported in 10 articles presenting trial results, one conference abstract, 21 protocol or study design articles and five trial registrations. These were mostly conducted in developed countries (n = 30) and within healthcare organisations (n = 28). A total of 33 of the interventions were educationally based, the most commonly used behaviour change techniques being 'instruction on how to perform a behaviour' (n = 32) and 'persuasive source' (n = 25). Authors gave a wide range of reasons for using the stepped wedge trial design, including ethical, logistical, financial and methodological considerations. The adequacy of reporting varied across studies: many did not provide sufficient detail regarding the methodology or the calculation of the required sample size. CONCLUSIONS: The popularity of stepped wedge trials has increased since 2010, predominantly in high-income countries. However, there is a need for further guidance on their reporting and analysis.
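    To make the design concrete: every cluster starts in the control condition and crosses over to the intervention at a randomised step, so that by the final period all clusters are exposed. A minimal sketch of such a randomisation schedule (the cluster count and one-cluster-per-step layout are arbitrary choices, not taken from any trial in the review):

```python
import random

n_clusters, n_steps = 6, 6  # one cluster crosses over per step (arbitrary layout)

# Randomise the order in which clusters receive the intervention.
order = list(range(n_clusters))
random.shuffle(order)
crossover_step = {cluster: step + 1 for step, cluster in enumerate(order)}

# Schedule matrix: rows = clusters, columns = periods (0 = control, 1 = intervention).
n_periods = n_steps + 1  # a baseline period plus one period per step
schedule = [[1 if period >= crossover_step[c] else 0 for period in range(n_periods)]
            for c in range(n_clusters)]

for c, row in enumerate(schedule):
    print(f"cluster {c}: {row}")
```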

    A surveillance system to assess the need for updating systematic reviews.

    BACKGROUND: Systematic reviews (SRs) can become outdated as new evidence emerges over time. Organizations that produce SRs need a surveillance method to determine when reviews are likely to require updating. This report describes the development and initial results of a surveillance system to assess SRs produced by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program. METHODS: Twenty-four SRs were assessed using existing methods that incorporate limited literature searches, expert opinion, and quantitative methods to detect signals triggering the need for updating. The system was designed to begin surveillance six months after the release of the original review, and thenceforth every six months for any review not classified as a high priority for updating. The outcome of each round of surveillance was a classification of the SR as a low, medium or high priority for updating. RESULTS: Twenty-four SRs underwent surveillance at least once, and ten underwent surveillance a second time during the 18 months of the program. Two SRs were classified as high, five as medium, and 17 as low priority for updating. The time lapse between the searches conducted for the original reports and the updated searches (search time lapse, STL) ranged from 11 months to 62 months: the STLs for the high-priority reports were 29 months and 54 months; those for medium-priority reports ranged from 19 to 62 months; and those for low-priority reports ranged from 11 to 33 months. Neither the STL nor the number of new relevant articles was perfectly associated with a signal for updating. Challenges of implementing the surveillance system included determining what constituted the actual conclusions of an SR that required assessing, and sometimes poor response rates from experts. CONCLUSION: In this system of regular surveillance of 24 systematic reviews on a variety of clinical interventions produced by a leading organization, about 70% of reviews were determined to have a low priority for updating. Evidence suggests that the appropriate period for surveillance is yearly, rather than the six months used in this project.
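    The scheduling rule itself is simple enough to sketch: surveillance recurs every six months, and a review leaves surveillance once classified as high priority. The signal assessment is stubbed out below, since the report's actual criteria combine limited searches, expert opinion and quantitative methods; the review names and priorities are invented.

```python
def assess_priority(review):
    """Placeholder for the real signal assessment (limited literature
    searches, expert opinion, quantitative checks); returns low/medium/high."""
    return review.get("priority", "low")

def surveil(reviews, months=18, interval=6):
    """Re-assess each review every `interval` months; high-priority
    reviews leave surveillance and are flagged for updating."""
    flagged = []
    for t in range(interval, months + 1, interval):
        for review in list(reviews):
            if assess_priority(review) == "high":
                flagged.append((review["name"], t))
                reviews.remove(review)
    return flagged

# Hypothetical reviews -- names and priorities are invented.
reviews = [{"name": "SR-1", "priority": "high"}, {"name": "SR-2", "priority": "low"}]
print(surveil(reviews))  # [('SR-1', 6)]
```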

    Assessment of a method to detect signals for updating systematic reviews.

    BACKGROUND: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals that a systematic review needs updating have face validity, but no proposed method has been assessed for predictive validity. METHODS: The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009. Eleven of these were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report: four CERs were judged to be a high priority for updating, four a medium priority, and three a low priority. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority. RESULTS: The 9 CERs included 149 individual conclusions, 84% of which had matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged to be a low priority for updating had no substantive changes to their conclusions in the actual updated report. The agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74. CONCLUSIONS: These results provide some support for the validity of a surveillance system for detecting signals that a systematic review needs updating.
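    The reported agreement statistic is Cohen's kappa, which corrects the raw agreement between two ratings (here, predicted versus actual update priority) for the agreement expected by chance. A worked sketch with invented labels — the actual per-CER classifications are not given in the abstract:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical predicted vs. actual update priorities for nine reviews;
# the real per-CER classifications are not reported in the abstract.
predicted = ["high", "high", "high", "medium", "medium", "medium", "medium", "low", "low"]
actual    = ["high", "high", "medium", "medium", "medium", "medium", "low", "low", "low"]

# Kappa corrects the observed agreement (7/9 here) for chance agreement.
print(round(cohen_kappa_score(predicted, actual), 2))  # 0.66 for these invented labels
```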

    Best practice in undertaking and reporting health technology assessments: Working Group 4 report

    [Executive Summary] The aim of Working Group 4 has been to develop and disseminate best practice in undertaking and reporting assessments, and to identify needs for methodologic development. Health technology assessment (HTA) is a multidisciplinary activity that systematically examines the technical performance, safety, clinical efficacy and effectiveness, cost, cost-effectiveness, organizational implications, social consequences, and legal and ethical considerations of the application of a health technology (18). HTA activity has been continuously increasing over the last few years. Numerous HTA agencies and other institutions (termed in this report “HTA doers”) across Europe are producing an important and growing amount of HTA information. The objectives of HTA vary considerably between HTA agencies and other actors, from a strictly political, decision-making-oriented approach regarding advice on market licensure, coverage in the benefits catalogue, or investment planning, to information directed to providers or to the public. Although there seems to be broad agreement on the general elements that belong to the HTA process, and although HTA doers in Europe use similar principles (41), this is often difficult to see because of differences in language and terminology. In addition, the reporting of findings from the assessments differs considerably. This reduces comparability and makes it difficult for those undertaking HTA assessments to integrate previous findings from other HTA doers into a subsequent evaluation of the same technology. Transparent and clear reporting is an important step toward disseminating the findings of an HTA; thus, standards that ensure high-quality reporting may contribute to a wider dissemination of results. The EUR-ASSESS methodologic subgroup had already proposed a framework for conducting and reporting HTA (18), which served as the basis for the current working group. New developments in the last 5 years necessitate revisiting that framework and providing a solid structure for future updates. Giving due attention to these methodologic developments, this report describes current “best practice” in both undertaking and reporting HTA and identifies needs for methodologic development. It concludes with specific recommendations and tools for implementing them, e.g., by providing a structure for English-language scientific summary reports and a checklist to assess the methodologic and reporting quality of HTA reports.