Distribution of power available to detect a relative risk reduction of 30%, across 77,237 studies.
Numbers of adequately powered studies (≥50% power) and median power within each meta-analysis (MA) with respect to a 30% relative risk reduction (RRR30), overall and by medical specialty, outcome type and intervention-comparison type.
1 Other medical specialties: Blood and immune system, Ear and nose, Eye, General health, Genetic disorders, Injuries, accidents and wounds, Mouth and dental, Skin.
2 Other semi-objective outcomes: External structure, Internal structure, Surgical/device related success/failure, Withdrawals/drop-outs.
3 Other subjective outcomes: Pain, Mental health outcomes, Quality of life/functioning, Consumption, Satisfaction with care, Composite (at least 1 non-mortality/morbidity).
4 Non-pharmacological interventions include interventions classified as medical devices, surgical, complex, resources and infrastructure, behavioural, psychological, physical, complementary, educational, radiotherapy, vaccines, cellular and gene, screening.
Average differences in observed log odds ratios between underpowered (<50% power) and adequately powered studies, in subset A of 1,107 meta-analyses, overall and within medical specialties, outcome types and intervention-comparison types.
1 Other medical specialties, semi-objective outcomes, subjective outcomes and non-pharmacological interventions are defined in the footnotes to Table 2 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0059202#pone-0059202-t002).
2 Comparison is less meaningful when comparing two active interventions, since the a priori “better” active intervention is not taken into account.
The Impact of Study Size on Meta-analyses: Examination of Underpowered Studies in Cochrane Reviews
Background: Most meta-analyses include data from one or more small studies that, individually, do not have power to detect an intervention effect. The relative influence of adequately powered and underpowered studies in published meta-analyses has not previously been explored. We examine the distribution of power available in studies within meta-analyses published in Cochrane reviews, and investigate the impact of underpowered studies on meta-analysis results.
Methods and Findings: For 14,886 meta-analyses of binary outcomes from 1,991 Cochrane reviews, we calculated power per study within each meta-analysis. We defined adequate power as ≥50% power to detect a 30% relative risk reduction. In a subset of 1,107 meta-analyses including 5 or more studies, with at least two adequately powered and at least one underpowered, results were compared with and without underpowered studies. In 10,492 (70%) of 14,886 meta-analyses, all included studies were underpowered; only 2,588 (17%) included at least two adequately powered studies. 34% of the meta-analyses themselves were adequately powered. The median of summary relative risks was 0.75 across all meta-analyses (inter-quartile range 0.55 to 0.89). In the subset examined, odds ratios in underpowered studies were 15% lower (95% CI 11% to 18%, P<0.0001) than in adequately powered studies in meta-analyses of controlled pharmacological trials, and 12% lower (95% CI 7% to 17%, P<0.0001) in meta-analyses of controlled non-pharmacological trials. The standard error of the intervention effect increased by a median of 11% (inter-quartile range −1% to 35%) when underpowered studies were omitted, and between-study heterogeneity tended to decrease.
Conclusions: When at least two adequately powered studies are available in meta-analyses reported by Cochrane reviews, underpowered studies often contribute little information, and could be left out if a rapid review of the evidence is required. However, underpowered studies made up the entirety of the evidence in most Cochrane reviews.
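The abstract defines adequate power as at least 50% power to detect a 30% relative risk reduction (RRR30). As a rough, hedged illustration of that kind of per-study calculation, the sketch below approximates the power of a single two-arm trial with a binary outcome via a normal approximation to the log odds ratio; the function name, the assumed control-group risk and the arm sizes are illustrative choices, not the authors' actual implementation.

```python
# A minimal sketch (not the review's own code) of per-study power to detect a
# 30% relative risk reduction, using a normal approximation to the log odds
# ratio. Inputs (arm sizes, control-group risk) are illustrative assumptions.
from math import log, sqrt
from scipy.stats import norm

def power_rrr30(n_treat, n_ctrl, p_ctrl, rrr=0.30, alpha=0.05):
    """Approximate two-sided power to detect a relative risk reduction `rrr`."""
    p_treat = p_ctrl * (1 - rrr)                      # event risk under the alternative
    log_or = log((p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl)))
    se = sqrt(1 / (n_treat * p_treat * (1 - p_treat)) +
              1 / (n_ctrl * p_ctrl * (1 - p_ctrl)))   # SE of the log odds ratio
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(log_or) / se - z_crit)

# Example: 100 patients per arm with a 20% control-group risk gives power of
# roughly 0.2, well below the 50% threshold used to define "adequately powered".
print(round(power_rrr30(100, 100, 0.20), 2))
```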
Meta-analytic power with respect to a 30% relative risk reduction (RRR30), based on the random-effects model, overall and by medical specialty, outcome type and intervention-comparison type.
1 Other medical specialties, semi-objective outcomes, subjective outcomes and non-pharmacological interventions are defined in the footnotes to Table 2 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0059202#pone-0059202-t002).
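The table summarised above concerns power at the meta-analysis level under a random-effects model. The sketch below is a hedged outline of one common way to compute such power: each study's log-odds-ratio variance is inflated by a between-study variance τ², the pooled standard error follows from the summed weights, and power is evaluated against a target log odds ratio corresponding to RRR30. The variances, τ² and target value are illustrative assumptions rather than data from the review.

```python
# A hedged sketch of random-effects meta-analytic power. `variances` are the
# within-study variances of the log odds ratios, `tau2` the between-study
# variance, and `target_log_or` the log odds ratio corresponding to a 30%
# relative risk reduction at an assumed control-group risk (all illustrative).
from math import sqrt
from scipy.stats import norm

def meta_power(variances, tau2, target_log_or, alpha=0.05):
    weights = [1 / (v + tau2) for v in variances]   # random-effects weights
    se_pooled = sqrt(1 / sum(weights))              # SE of the pooled log odds ratio
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(target_log_or) / se_pooled - z_crit)

# Example: five small studies of similar precision with modest heterogeneity
print(round(meta_power([0.15, 0.12, 0.20, 0.18, 0.14],
                       tau2=0.05, target_log_or=-0.43), 2))
```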
Percentages of 14,886 meta-analyses including no studies adequately powered to detect a target effect or including at least two adequately powered studies, where adequate power is defined as 80% or 50% in turn; and summary of median power within each meta-analysis.
Ratios comparing results obtained from adequately powered studies only with results obtained from all studies, in subset A of 1,107 meta-analyses: results shown are percentiles of the distribution of such ratios across meta-analyses.
1 The between-study heterogeneity estimate was zero in the all-studies meta-analysis in 256/1107 meta-analyses. In 199/256 (78%), it was also zero in the meta-analysis including adequately powered studies only. In 57/256 (22%), it increased, but trivially, when underpowered studies were removed.
The Use of Bayesian Networks to Assess the Quality of Evidence from Research Synthesis: 1.
Background: The grades of recommendation, assessment, development and evaluation (GRADE) approach is widely implemented in systematic reviews, health technology assessment and guideline development organisations throughout the world. A key advantage of this approach is that it aids transparency regarding judgements on the quality of evidence. However, the intricacies of making judgements about research methodology and evidence make the GRADE system complex and challenging to apply without training.
Methods: We have developed a semi-automated quality assessment tool (SAQAT) based on GRADE. It is informed by reviewers' responses to checklist questions regarding characteristics that may lead to unreliability. These responses are then entered into a Bayesian network to ascertain the probabilities of risk of bias, inconsistency, indirectness, imprecision and publication bias, conditional on review characteristics. The model then combines these probabilities to provide a probability for each of the GRADE overall quality categories. We tested the model using a range of plausible scenarios that guideline developers or review authors could encounter.
Results: Overall, the model reproduced GRADE judgements for a range of scenarios. Potential advantages over standard assessment are the use of explicit and consistent weightings for different review characteristics, forcing consideration of important but sometimes neglected characteristics, and principled downgrading where small but important probabilities of downgrading accrue across domains.
Conclusions: Bayesian networks have considerable potential for use as tools to assess the validity of research evidence. The key strength of such networks lies in providing a statistically coherent method for combining probabilities across a complex framework based on both belief and evidence. In addition to providing tools for less experienced users to implement reliability assessment, the potential for sensitivity analyses and automation may benefit both the application and the methodological development of reliability tools.
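The abstract describes entering checklist responses into a Bayesian network and combining per-domain probabilities into a probability for each GRADE quality category. The sketch below is a deliberately simplified, hypothetical illustration of that final combination step, not the SAQAT network itself: it assumes the five domains are independent given the review characteristics, builds the distribution of the number of downgrades, and maps that distribution onto the four GRADE categories.

```python
# Hypothetical illustration of combining per-domain downgrade probabilities
# into GRADE quality categories. Probabilities and the independence assumption
# are for demonstration only; the SAQAT Bayesian network is more elaborate.
from itertools import product

domain_probs = {                 # assumed probabilities that each domain warrants a downgrade
    "risk_of_bias": 0.30,
    "inconsistency": 0.10,
    "indirectness": 0.05,
    "imprecision": 0.40,
    "publication_bias": 0.15,
}

# Distribution of the total number of downgrades across all 2^5 domain outcomes
counts = {k: 0.0 for k in range(len(domain_probs) + 1)}
for outcome in product([0, 1], repeat=len(domain_probs)):
    prob = 1.0
    for flag, p in zip(outcome, domain_probs.values()):
        prob *= p if flag else (1 - p)
    counts[sum(outcome)] += prob

# Map the number of downgrades onto GRADE categories, starting from "high"
categories = {"high": counts[0],
              "moderate": counts[1],
              "low": counts[2],
              "very low": sum(p for k, p in counts.items() if k >= 3)}
for category, prob in categories.items():
    print(f"{category:9s} {prob:.3f}")
```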
Ranking plots for the rheumatoid arthritis network.
Treatments have been ranked (a) according to the surface under the cumulative ranking curves (SUCRA) and (b) according to the unique dimension estimated from the multidimensional scaling (MDS) approach. Red points correspond to treatments ranked in a different order by the two approaches. (PLA = placebo, ABA = abatacept, ADA = adalimumab, ANA = anakinra, ETA = etanercept, INF = infliximab, RIT = rituximab).
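As a brief illustration of the SUCRA ranking mentioned in this caption, the sketch below computes SUCRA values from a matrix of rank probabilities (rows are treatments, columns are ranks). The matrix is hypothetical and does not reproduce the rheumatoid arthritis network's results.

```python
# SUCRA from a rank-probability matrix: the mean of the cumulative rank
# probabilities over ranks 1..a-1. The example matrix is invented for
# illustration, not taken from the rheumatoid arthritis network.
import numpy as np

def sucra(rank_probs):
    cum = np.cumsum(rank_probs, axis=1)   # P(treatment is among the j best)
    return cum[:, :-1].mean(axis=1)       # drop the final column, which is always 1

# Three hypothetical treatments: clearly best, middling, clearly worst
rank_probs = np.array([[0.7, 0.2, 0.1],
                       [0.2, 0.5, 0.3],
                       [0.1, 0.3, 0.6]])
print(sucra(rank_probs).round(2))         # higher SUCRA = better overall ranking
```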
A logic map depicting the GRADE framework for assessing strength of evidence.
(This formulation considers risk of bias only for randomised controlled trial study designs.)
