SUPPORT Tools for evidence-informed health Policymaking (STP) 14: Organising and using policy dialogues to support evidence-informed policymaking
This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers
Challenges in Developing Evidence-Based Recommendations Using the GRADE Approach: The Case of Mental, Neurological, and Substance Use Disorders
Corrado Barbui and colleagues describe their use and adaptation of the GRADE approach in developing the guidelines for the WHO mental health Gap Action Programme (mhGAP)
Chapter 7: Grading a Body of Evidence on Diagnostic Tests
Journal of General Internal Medicine, 27(Suppl. 1), S47–S55. doi:10.1007/s11606-012-2021-9
Interpreting the results of patient reported outcome measures in clinical trials: The clinician's perspective
This article deals with the problem of interpreting health-related quality of life (HRQL) outcomes in clinical trials. First, we will briefly describe how dichotomization and item response theory can facilitate interpretation. Based on examples from the medical literature for the interpretation of HRQL scores, we will show that dichotomies may help clinicians understand information provided by HRQL instruments in randomized controlled trials (RCTs). They can choose thresholds to calculate proportions of patients benefiting based on absolute scores or change scores. For example, clinicians interpreting clinical trial results could consider the difference in the proportion of patients who achieve a score of 50 on a scale from 1 to 100 before and after an intervention. For the change score approach, they could consider the proportion of patients who have changed by a score of 5 or more. Finally, they can calculate the proportion of patients benefiting and transform these numbers into a number needed to treat or natural frequencies. Second, we will describe in more detail an approach to the interpretation of HRQL scores based on the minimal important difference (MID) and proportions. The MID is the smallest difference in score in the outcome of interest that informed patients or informed proxies perceive as important, either beneficial or harmful, and that would lead the patient or clinician to consider a change in management. Any change in management will depend on the downsides, including cost and inconvenience, associated with the intervention. Investigators can help with the interpretation of HRQL scores by determining the MID of an HRQL instrument and providing mean differences in relation to the MID. For instance, for an MID of 0.5 on a seven-point scale, investigators could provide the mean change on the instrument as well as the proportion of patients with scores greater than the MID. Thus, there are several steps investigators can take to facilitate this process and help bring HRQL information closer to the bedside.
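The dichotomization arithmetic described above is simple enough to sketch directly. The following Python snippet uses invented change scores and the abstract's example MID of 0.5 on a seven-point scale to show how proportions of patients benefiting translate into a number needed to treat; all data and group sizes are hypothetical.

```python
# Sketch: proportion of patients improving by at least the MID, and the
# resulting number needed to treat (NNT). All scores are hypothetical.

MID = 0.5  # minimal important difference on a 7-point HRQL scale (assumed)

# Change scores (post - pre) for treatment and control groups (made-up data)
treatment_changes = [0.9, 0.3, 1.2, 0.6, -0.1, 0.8, 0.5, 1.0]
control_changes = [0.2, -0.3, 0.6, 0.1, 0.4, -0.2, 0.7, 0.0]

def proportion_benefiting(changes, mid):
    """Fraction of patients whose change score meets or exceeds the MID."""
    return sum(c >= mid for c in changes) / len(changes)

p_treat = proportion_benefiting(treatment_changes, MID)
p_ctrl = proportion_benefiting(control_changes, MID)

# NNT = 1 / absolute difference in proportions benefiting
nnt = 1 / (p_treat - p_ctrl)
print(f"benefiting: treatment {p_treat:.2f}, control {p_ctrl:.2f}, NNT {nnt:.1f}")
```

With these invented numbers, 75% of treated versus 25% of control patients cross the MID, giving an NNT of 2: treat two patients for one additional patient to achieve an important improvement.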
From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions
Background: GRADE was developed to address shortcomings of tools to rate the quality of a body of evidence. While much has been published about GRADE, there are few empirical and systematic evaluations. Objective: To assess GRADE for systematic reviews (SRs) in terms of inter-rater agreement and identify areas of uncertainty. Design: Cross-sectional, descriptive study. Methods: We applied GRADE to three SRs (n = 48, 66, and 75 studies, respectively) with 29 comparisons and 12 outcomes overall. Two reviewers graded evidence independently for outcomes deemed clinically important a priori. Inter-rater reliability was assessed using kappas for four main domains (risk of bias, consistency, directness, and precision) and overall quality of evidence. Results: For the first review, reliability was: k = 0.41 for risk of bias; 0.84 for consistency; 0.18 for precision; and 0.44 for overall quality. Kappa could not be calculated for directness as one rater assessed all items as direct; assessors agreed in 41% of cases. For the second review, reliability was: 0.37 for consistency and 0.19 for precision. Kappa could not be assessed for the other items; assessors agreed in 33% of cases for risk of bias, 100% for directness, and 58% for overall quality. For the third review, reliability was: 0.06 for risk of bias; 0.79 for consistency; 0.21 for precision; and 0.18 for overall quality. Assessors agreed in 100% of cases for directness. Precision created the most uncertainty due to difficulties in identifying "optimal" information size and "clinical decision threshold".
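For readers unfamiliar with the agreement statistic reported here, the sketch below computes Cohen's kappa from first principles on made-up GRADE quality ratings, not the study's data. It also makes visible why kappa degenerates when one rater uses a single category throughout, as happened for directness: observed and chance agreement then coincide, so the study falls back on percent agreement.

```python
# Sketch: Cohen's kappa for two raters' GRADE quality judgements.
# The ratings below are illustrative, not taken from the study.

from collections import Counter

rater_a = ["high", "moderate", "low", "low", "moderate", "very low", "low", "moderate"]
rater_b = ["high", "low", "low", "low", "moderate", "low", "low", "high"]

def cohens_kappa(a, b):
    n = len(a)
    # Observed agreement: fraction of items rated identically
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

If one rater assigns the same category to every item, observed agreement equals chance agreement by construction, so kappa is zero (or undefined when both raters are constant) no matter how often the raters actually agree; percent agreement is the informative quantity in that case.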
Guidance for Evidence-Informed Policies about Health Systems: Linking Guidance Development to Policy Development
In the second paper in a three-part series on health systems guidance, John Lavis and colleagues explore the challenge of linking guidance development and policy development at global and national levels
Combining scores from different patient reported outcome measures in meta-analyses: when is it justified?
BACKGROUND: Combining outcomes and the use of standardized effect measures such as the effect size and the standardized response mean across instruments allows more comprehensive meta-analyses and should avoid selection bias. However, such analysis ideally requires that the instruments correlate strongly and that the underlying assumption of similar responsiveness is fulfilled. The aim of the study was to assess the correlation between two widely used health-related quality of life instruments for patients with chronic obstructive pulmonary disease and to compare the instruments' responsiveness at the study level. METHODS: We systematically identified all longitudinal studies that used both the Chronic Respiratory Questionnaire (CRQ) and the St. George's Respiratory Questionnaire (SGRQ) through electronic searches of MEDLINE, EMBASE, CENTRAL and PubMed. We assessed the correlation between CRQ (scale 1 to 7) and SGRQ (scale 1 to 100) change scores and compared the responsiveness of the two instruments by comparing standardized response means (change scores divided by their standard deviation). RESULTS: We identified 15 studies with 23 patient groups. CRQ change scores ranged from -0.19 to 1.87 (median 0.35, IQR 0.14 to 0.68) and SGRQ change scores from -16.00 to 3.00 (median -3.00, IQR -4.73 to 0.25). The correlation between CRQ and SGRQ change scores was 0.88. Standardized response means of the CRQ (median 0.51, IQR 0.19 to 0.98) were significantly higher (p < 0.001) than those of the SGRQ (median 0.26, IQR -0.03 to 0.40). CONCLUSION: Investigators should be cautious about pooling the results from different instruments in meta-analysis even if they appear to measure similar constructs. Despite high correlation in change scores, the responsiveness of instruments may differ substantially and could lead to important between-study heterogeneity and biased meta-analyses.
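A minimal Python sketch of the two quantities this study compares, using invented study-level change scores rather than the 23 patient groups analysed above. Note that the SGRQ is scored so that lower values mean improvement, so its sign is flipped before comparison.

```python
# Sketch: standardized response mean (SRM) and correlation of change scores
# between two instruments. All change scores below are illustrative.

import statistics as st

crq_changes = [0.35, 1.10, -0.19, 0.68, 0.14, 0.90]   # CRQ (1-7), higher = better
sgrq_changes = [-3.0, -12.5, 3.0, -6.8, -1.2, -9.4]   # SGRQ (1-100), lower = better

def srm(changes):
    """Standardized response mean: mean change / SD of change scores."""
    return st.mean(changes) / st.stdev(changes)

# Flip the SGRQ sign so improvement points the same way for both instruments
sgrq_flipped = [-x for x in sgrq_changes]

print(f"CRQ SRM:   {srm(crq_changes):.2f}")
print(f"SGRQ SRM:  {srm(sgrq_flipped):.2f}")
print(f"Pearson r: {st.correlation(crq_changes, sgrq_flipped):.2f}")  # Python >= 3.10
```

Because the SRM divides by the standard deviation of the change scores, two instruments can correlate strongly yet yield different SRMs, which is exactly the divergence the abstract warns can bias a pooled meta-analysis.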
Application of GRADE: Making evidence-based recommendations about diagnostic tests in clinical practice guidelines
Background: Accurate diagnosis is a fundamental aspect of appropriate healthcare. However, clinicians need guidance when implementing diagnostic tests, given the number of tests available and resource constraints in healthcare. Health practitioners often feel compelled to implement recommendations in guidelines, including recommendations about the use of diagnostic tests. However, how guideline panels understand diagnostic tests, and the methodology for developing recommendations about them, are far from completely explored. Therefore, we evaluated the factors that guideline developers and users need to consider for the development of implementable recommendations about diagnostic tests. Methods: Using a critical analysis of the process, we present the results of a case study using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to develop a clinical practice guideline for the diagnosis of cow's milk allergy with the World Allergy Organization. Results: To ensure that guideline panels can develop informed recommendations about diagnostic tests, it appears that more emphasis needs to be placed on group processes, including question formulation, defining patient-important outcomes for diagnostic tests, and summarizing evidence. Explicit consideration of concepts of diagnosis from evidence-based medicine, such as pre-test probability and treatment threshold, is required to facilitate the work of a guideline panel and to formulate implementable recommendations. Discussion: This case study provides useful guidance for guideline developers and clinicians about what they ought to demand from clinical practice guidelines to facilitate implementation and strengthen confidence in recommendations about diagnostic tests. Applying a structured framework like the GRADE approach, with its requirement for transparency in the description of the evidence and the factors that influence recommendations, facilitates laying out the process and decision factors required for the development, interpretation, and implementation of recommendations about diagnostic tests.
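The pre-test probability and treatment threshold concepts invoked in the Results can be made concrete with the standard likelihood-ratio form of Bayes' theorem. The prevalence, test characteristics, and threshold below are assumptions chosen for illustration, not values from the cow's milk allergy guideline.

```python
# Sketch: post-test probability from pre-test probability via likelihood
# ratios, compared against a treatment threshold. All numbers hypothetical.

def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Update a pre-test probability using the test's likelihood ratio."""
    if positive:
        lr = sensitivity / (1 - specificity)      # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity      # LR- for a negative result
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pre_test = 0.30            # assumed prevalence in the presenting population
sens, spec = 0.85, 0.90    # assumed test characteristics
treat_threshold = 0.70     # assumed probability above which treatment starts

p_pos = post_test_probability(pre_test, sens, spec, positive=True)
p_neg = post_test_probability(pre_test, sens, spec, positive=False)
print(f"post-test (positive): {p_pos:.2f} -> treat: {p_pos >= treat_threshold}")
print(f"post-test (negative): {p_neg:.2f}")
```

Under these assumptions a positive result lifts the probability from 0.30 to about 0.79, crossing the treatment threshold, while a negative result drops it to about 0.07; making such thresholds explicit is what allows a panel's diagnostic recommendation to be implemented consistently.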
How Can We Support the Use of Systematic Reviews in Policymaking?
John Lavis discusses how health policymakers and their stakeholders need research evidence, and the best ways evidence can be synthesized and packaged to optimize its use