Rethinking the assessment of risk of bias due to selective reporting: a cross-sectional study
BACKGROUND: Selective reporting is included as a core domain of Cochrane's tool for assessing risk of bias in randomised trials. There has been no evaluation of review authors' use of this domain. We aimed to evaluate assessments of selective reporting in a cross-section of Cochrane reviews and to outline areas for improvement. METHODS: We obtained data on selective reporting judgements for 8434 studies included in 586 Cochrane reviews published from issues 1–8, 2015. One author classified the reasons for judgements of high risk of selective reporting bias. We randomly selected 100 reviews with at least one trial rated at high risk of outcome non-reporting bias (non-reporting or partial reporting of an outcome on the basis of its results). One author recorded whether the authors of these reviews incorporated the selective reporting assessment when interpreting results. RESULTS: Of the 8434 studies, 1055 (13%) were rated at high risk of bias on the selective reporting domain. The most common reason was concern about outcome non-reporting bias. Few studies were rated at high risk because of concerns about bias in selection of the reported result (e.g. reporting of only a subset of the measurements, analysis methods or subsets of the data that were pre-specified). Review authors often specified in the risk of bias tables the study outcomes that were not reported (84% of studies) but less frequently specified the outcomes that were partially reported (61% of studies). At least one study was rated at high risk of outcome non-reporting bias in 31% of reviews. In the random sample of these reviews, only 30% incorporated this information when interpreting results, by acknowledging that the synthesis of an outcome was missing data that were not reported or only partially reported. CONCLUSIONS: Our audit of user practice in Cochrane reviews suggests that the assessment of selective reporting in the current risk of bias tool does not work well. It is not always clear which outcomes were selectively reported or what the corresponding risk of bias is in a synthesis with missing outcome data. New tools that will make it easier for reviewers to convey this information are being developed. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s13643-016-0289-2) contains supplementary material, which is available to authorized users.
Mapping between measurement scales in meta-analysis, with application to measures of body mass index in children
Quantitative evidence synthesis methods aim to combine data from multiple medical trials to infer relative effects of different interventions. A challenge arises when trials report continuous outcomes on different measurement scales. To include all evidence in one coherent analysis, we require methods to 'map' the outcomes onto a single scale. This is particularly challenging when trials report aggregate rather than individual data. We are motivated by a meta-analysis of interventions to prevent obesity in children. Trials report aggregate measurements of body mass index (BMI), expressed either as raw values or standardised for age and sex. We develop three methods for mapping between aggregate BMI data using known relationships between individual measurements on different scales. The first is an analytical method based on the mathematical definitions of z-scores and percentiles. The other two approaches involve sampling individual participant data on which to perform the conversions. One method is a straightforward sampling routine, while the other involves optimization with respect to the reported outcomes. In contrast to the analytical approach, these methods also have wider applicability for mapping between any pair of measurement scales with known or estimable individual-level relationships. We verify and contrast our methods using trials from our data set which report outcomes on multiple scales. We find that all methods recreate mean values with reasonable accuracy, but for standard deviations, optimization outperforms the other methods. However, the optimization method is more likely to underestimate standard deviations and is vulnerable to non-convergence.
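The straightforward sampling routine described above can be illustrated with a small sketch (not the authors' code): given a reported aggregate mean and SD of raw BMI, we simulate hypothetical individual values, convert each to a standardised z-score via the LMS (Box-Cox) transformation commonly used for child BMI references, and summarise the converted sample. The LMS parameters below are made-up illustrative values, not real reference data, and normality of individual BMI is an assumption of the sketch.

```python
import numpy as np

def lms_zscore(bmi, L, M, S):
    # LMS (Box-Cox) transformation: maps a raw BMI value to a z-score
    # given reference parameters L (skewness), M (median), S (coef. of variation)
    return ((bmi / M) ** L - 1.0) / (L * S)

def map_aggregate_by_sampling(mean_bmi, sd_bmi, L, M, S, n=200_000, seed=0):
    # Sampling routine: simulate individual BMI values consistent with the
    # reported aggregate mean/SD (normality assumed), convert each value to
    # a z-score, then summarise the sample on the new scale.
    rng = np.random.default_rng(seed)
    bmi = rng.normal(mean_bmi, sd_bmi, size=n)
    z = lms_zscore(bmi, L, M, S)
    return z.mean(), z.std(ddof=1)
```

When L = 1 the transformation is linear and the mapped mean and SD are exact up to Monte Carlo error, which gives a convenient check of a sampling method against an analytical one.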
Personal financial incentives for changing habitual health-related behaviors: A systematic review and meta-analysis.
OBJECTIVES: Uncertainty remains about whether personal financial incentives could achieve sustained changes in health-related behaviors that would reduce the fast-growing global non-communicable disease burden. This review aims to estimate whether: i. financial incentives achieve sustained changes in smoking, eating, alcohol consumption and physical activity; ii. effectiveness is modified by (a) the target behavior, (b) incentive value and attainment certainty, (c) recipients' deprivation level. METHODS: Multiple sources were searched for trials offering adults financial incentives and assessing outcomes relating to pre-specified behaviors at a minimum of six months from baseline. Analyses included random-effects meta-analyses and meta-regressions grouped by timed endpoints. RESULTS: Of 24,265 unique identified articles, 34 were included in the analysis. Financial incentives increased behavior change, with effects sustained until 18 months from baseline (OR: 1.53, 95% CI 1.05-2.23) and three months post-incentive removal (OR: 2.11, 95% CI 1.21-3.67). High deprivation increased incentive effects (OR: 2.17; 95% CI 1.22-3.85), but only at >6-12 months from baseline. Other assessed variables did not independently modify effects at any time-point. CONCLUSIONS: Personal financial incentives can change habitual health-related behaviors and help reduce health inequalities. However, their role in reducing disease burden is potentially limited given current evidence that effects dissipate beyond three months post-incentive removal. This research was funded by the Wellcome Trust as part of a Strategic Award in Biomedical Ethics; program title: "The Centre for the Study of Incentives in Health"; grant number: 086031/Z/08/Z; PI Prof. TM Marteau. The funder did not contribute to any part of this research. This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.ypmed.2015.03.00
Between-trial heterogeneity in meta-analyses may be partially explained by reported design characteristics.
OBJECTIVE: We investigated the associations between risk of bias judgments from Cochrane reviews for sequence generation, allocation concealment and blinding, and between-trial heterogeneity. STUDY DESIGN AND SETTING: Bayesian hierarchical models were fitted to binary data from 117 meta-analyses, to estimate the ratio λ by which heterogeneity changes for trials at high/unclear risk of bias compared with trials at low risk of bias. We estimated the proportion of between-trial heterogeneity in each meta-analysis that could be explained by the bias associated with specific design characteristics. RESULTS: Univariable analyses showed that heterogeneity variances were, on average, increased among trials at high/unclear risk of bias for sequence generation (λ̂ = 1.14, 95% interval: 0.57-2.30) and blinding (λ̂ = 1.74, 95% interval: 0.85-3.47). Trials at high/unclear risk of bias for allocation concealment were on average less heterogeneous (λ̂ = 0.75, 95% interval: 0.35-1.61). Multivariable analyses showed that a median of 37% (95% interval: 0-71%) of heterogeneity variance could be explained by trials at high/unclear risk of bias for sequence generation, allocation concealment, and/or blinding. All 95% intervals for changes in heterogeneity were wide and included the null of no difference. CONCLUSION: Our interpretation of the results is limited by imprecise estimates. There is some indication that between-trial heterogeneity could be partially explained by reported design characteristics, and hence adjustment for bias could potentially improve the accuracy of meta-analysis results.
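As a rough illustration of the quantity λ (not the paper's Bayesian hierarchical model), one can estimate the between-trial variance τ² separately in trials at low and at high/unclear risk of bias and take the ratio of the two estimates. The sketch below uses a moment-based DerSimonian-Laird estimate purely for simplicity; the function names are illustrative.

```python
import numpy as np

def dl_tau2(y, v):
    # DerSimonian-Laird moment estimate of the between-trial variance tau^2,
    # given trial effect estimates y and their within-trial variances v
    y, v = np.asarray(y), np.asarray(v)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)      # truncated at zero

def heterogeneity_ratio(y_low, v_low, y_high, v_high):
    # lambda: ratio of tau^2 among high/unclear-risk trials
    # to tau^2 among low-risk trials (> 1 means more heterogeneity)
    return dl_tau2(y_high, v_high) / dl_tau2(y_low, v_low)
```

In the paper λ is estimated jointly across 117 meta-analyses with full uncertainty propagation; this per-subgroup point estimate only conveys what the ratio measures.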
Data extraction methods for systematic review (semi)automation: Update of a living systematic review [version 2; peer review: 3 approved]
Background: The reliable and usable (semi)automation of data extraction can support the field of systematic review by reducing the workload required to gather information about the conduct and results of the included studies. This living systematic review examines published approaches for data extraction from reports of clinical studies.
Methods: We systematically and continually search PubMed, ACL Anthology, arXiv, OpenAlex via EPPI-Reviewer, and the dblp computer science bibliography. Full text screening and data extraction are conducted within an open-source living systematic review application created for the purpose of this review. This living review update includes publications up to December 2022 and OpenAlex content up to March 2023.
Results: 76 publications are included in this review. Of these, 64 (84%) addressed extraction of data from abstracts, while 19 (25%) used full texts. A total of 71 (93%) publications developed classifiers for randomised controlled trials. Over 30 entities were extracted, with PICOs (population, intervention, comparator, outcome) being the most frequently extracted. Data are available from 25 (33%) publications and code from 30 (39%). Six (8%) implemented publicly available tools.
Conclusions: This living systematic review presents an overview of the (semi)automated data-extraction literature of interest to different types of literature review. We identified a broad evidence base of publications describing data extraction for interventional reviews and a small number of publications extracting epidemiological or diagnostic accuracy data. Between review updates, trends for sharing data and code strengthened markedly: in the base review, data and code were available for 13% and 19% of publications, respectively; among the 23 new publications, these figures increased to 78% and 87%. Compared with the base review, we also observed a research trend away from straightforward data extraction and towards additionally extracting relations between entities or automatic text summarisation. With this living review we aim to review the literature continually.
Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd
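The core idea above, combining the likelihood for the between-study variance with an informative inverse-gamma prior, can be sketched on a grid without the paper's data-augmentation/meta-regression machinery (and without MCMC). Function names, the grid range, and the prior values below are illustrative assumptions, not the paper's specification.

```python
import math
import numpy as np

def invgamma_logpdf(x, a, b):
    # log density of an inverse-gamma(shape=a, scale=b) distribution
    return a * math.log(b) - math.lgamma(a) - (a + 1) * math.log(x) - b / x

def posterior_tau2(y, v, a, b, grid=None):
    # Grid approximation to the posterior of the between-study variance tau^2
    # under a normal-normal random-effects model, with the pooled effect mu
    # integrated out analytically under a flat prior.
    y, v = np.asarray(y), np.asarray(v)
    if grid is None:
        grid = np.linspace(1e-4, 0.5, 4000)
    logpost = np.empty(len(grid))
    for i, tau2 in enumerate(grid):
        w = 1.0 / (v + tau2)
        mu_hat = np.sum(w * y) / np.sum(w)
        # integrated likelihood for tau^2 (mu marginalised out)
        ll = (0.5 * np.sum(np.log(w))
              - 0.5 * np.sum(w * (y - mu_hat) ** 2)
              - 0.5 * math.log(np.sum(w)))
        logpost[i] = ll + invgamma_logpdf(tau2, a, b)
    p = np.exp(logpost - logpost.max())
    p /= p.sum() * (grid[1] - grid[0])  # normalise on the uniform grid
    return grid, p
```

With only a handful of studies the prior dominates this posterior, which is precisely the small-meta-analysis setting in which informative priors for heterogeneity are most useful.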
Association Between Risk-of-Bias Assessments and Results of Randomized Trials in Cochrane Reviews: The ROBES Meta-Epidemiologic Study.
Flaws in the design of randomized trials may bias intervention effect estimates and increase between-trial heterogeneity. Empirical evidence suggests that these problems are greatest for subjectively assessed outcomes. For the Risk of Bias in Evidence Synthesis (ROBES) Study, we extracted risk-of-bias judgements (for sequence generation, allocation concealment, blinding, and incomplete data) from a large collection of meta-analyses published in the Cochrane Library (issue 4; April 2011). We categorized outcome measures as mortality, other objective outcome, or subjective outcome, and we estimated associations of bias judgements with intervention effect estimates using Bayesian hierarchical models. Among 2,443 randomized trials in 228 meta-analyses, intervention effect estimates were, on average, exaggerated in trials with high or unclear (versus low) risk-of-bias judgements for sequence generation (ratio of odds ratios (ROR) = 0.91, 95% credible interval (CrI): 0.86, 0.98), allocation concealment (ROR = 0.92, 95% CrI: 0.86, 0.98), and blinding (ROR = 0.87, 95% CrI: 0.80, 0.93). In contrast to previous work, we did not observe consistently different bias for subjective outcomes compared with mortality. However, we found an increase in between-trial heterogeneity associated with lack of blinding in meta-analyses with subjective outcomes. Inconsistency in criteria for risk-of-bias judgements applied by individual reviewers is a likely limitation of routinely collected bias assessments. Inadequate randomization and lack of blinding may lead to exaggeration of intervention effect estimates in randomized trials.
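A ratio of odds ratios of this kind can be illustrated with a minimal inverse-variance sketch (not the ROBES hierarchical model): pool the log odds ratios within each risk-of-bias stratum and compare the pooled estimates. With beneficial effects expressed as OR < 1, an ROR below 1 means high/unclear-risk trials gave more exaggerated effect estimates than low-risk trials.

```python
import numpy as np

def pooled_log_or(log_or, var):
    # fixed-effect (inverse-variance) pooled log odds ratio and its variance
    log_or, var = np.asarray(log_or), np.asarray(var)
    w = 1.0 / var
    return np.sum(w * log_or) / np.sum(w), 1.0 / np.sum(w)

def ratio_of_odds_ratios(lor_high, v_high, lor_low, v_low):
    # ROR comparing pooled effects in high/unclear- vs low-risk trials,
    # with an approximate 95% confidence interval on the ratio
    eh, vh = pooled_log_or(lor_high, v_high)
    el, vl = pooled_log_or(lor_low, v_low)
    d, se = eh - el, np.sqrt(vh + vl)
    return np.exp(d), (np.exp(d - 1.96 * se), np.exp(d + 1.96 * se))
```

The study's RORs come from a Bayesian hierarchical model across 228 meta-analyses rather than this two-stratum comparison; the sketch only shows what the ratio itself measures.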
Methodological review of NMA bias concepts provides groundwork for the development of a list of concepts for potential inclusion in a new risk of bias tool for network meta-analysis (RoB NMA Tool)
INTRODUCTION: Network meta-analyses (NMAs) have gained popularity and grown in number due to their ability to provide estimates of the comparative effectiveness of multiple treatments for the same condition. The aim of this study is to conduct a methodological review to compile a preliminary list of concepts related to bias in NMAs. METHODS AND ANALYSIS: We included papers that present items related to bias, reporting or methodological quality, papers assessing the quality of NMAs, or method papers. We searched MEDLINE, the Cochrane Library and unpublished literature (up to July 2020). We extracted items related to bias in NMAs. An item was excluded if it related to general systematic review quality or bias and was included in currently available tools such as ROBIS or AMSTAR 2. We reworded items, typically structured as questions, into concepts (i.e. general notions). RESULTS: One hundred eighty-one articles were assessed in full text and 58 were included. Of these articles, 12 were tools, checklists or journal standards; 13 were guidance documents for NMAs; 27 were studies related to bias or NMA methods; and 6 were papers assessing the quality of NMAs. These studies yielded 99 items, the majority of which related to general systematic review quality and biases and were therefore excluded. The 22 items we included were reworded into concepts specific to bias in NMAs. CONCLUSIONS: A list of 22 concepts was compiled. This list is not intended to be used to assess biases in NMAs, but to inform the development of items to be included in our tool.