Industry sponsorship bias in research findings: a network meta-analysis of LDL cholesterol reduction in randomised trials of statins
Objective: To explore the risk of industry sponsorship bias in a systematically identified set of placebo controlled and active comparator trials of statins. Design: Systematic review and network meta-analysis. Eligibility: Open label and double blind randomised controlled trials comparing one statin with another at any dose or with control (placebo, diet, or usual care) for adults with, or at risk of developing, cardiovascular disease. Only trials that lasted longer than four weeks and had more than 50 participants per trial arm were included. Two investigators assessed study eligibility. Data sources: Bibliographic databases and reference lists of relevant articles published between 1 January 1985 and 10 March 2013. Data extraction: One investigator extracted data and another confirmed accuracy. Main outcome measure: Mean absolute change from baseline concentration of low density lipoprotein (LDL) cholesterol. Data synthesis: Study level outcomes from randomised trials were combined using random effects network meta-analyses. Results: We included 183 randomised controlled trials of statins, 103 of which were two-armed or multi-armed active comparator trials. When all of the existing randomised evidence was synthesised in network meta-analyses, there were clear differences in the LDL cholesterol lowering effects of individual statins at different doses; in general, higher doses produced greater reductions from baseline LDL cholesterol levels. Of a total of 146 industry sponsored trials, 64 were placebo controlled (43.8%); the corresponding number for the non-industry sponsored trials was 16 (43.2%). Of the 35 unique comparisons available in 37 non-industry sponsored trials, 31 were also available in industry sponsored trials. There were no systematic differences in magnitude between the LDL cholesterol lowering effects of individual statins observed in industry sponsored versus non-industry sponsored trials.
In industry sponsored trials, the mean change from baseline LDL cholesterol level was on average 1.77 mg/dL (95% credible interval −11.12 to 7.66) lower than the change observed in non-industry sponsored trials. There was no detectable inconsistency in the evidence network. Conclusions: Our analysis shows that the findings obtained from industry sponsored statin trials are similar in magnitude to those from non-industry sponsored trials. There are genuine differences in the effectiveness of individual statins at various doses that explain previously observed discrepancies between industry and non-industry sponsored trials.
Linguistics
Contains reports on four research projects. National Institute of Mental Health (Grant 1 PO1 MH-13390-04)
Mapping between measurement scales in meta-analysis, with application to measures of body mass index in children
Quantitative evidence synthesis methods aim to combine data from multiple medical trials to infer relative effects of different interventions. A challenge arises when trials report continuous outcomes on different measurement scales. To include all evidence in one coherent analysis, we require methods to 'map' the outcomes onto a single scale. This is particularly challenging when trials report aggregate rather than individual data. We are motivated by a meta-analysis of interventions to prevent obesity in children. Trials report aggregate measurements of body mass index (BMI) either expressed as raw values or standardised for age and sex. We develop three methods for mapping between aggregate BMI data using known relationships between individual measurements on different scales. The first is an analytical method based on the mathematical definitions of z-scores and percentiles. The other two approaches involve sampling individual participant data on which to perform the conversions. One method is a straightforward sampling routine, while the other involves optimization with respect to the reported outcomes. In contrast to the analytical approach, these methods also have wider applicability for mapping between any pair of measurement scales with known or estimable individual-level relationships. We verify and contrast our methods using trials from our data set which report outcomes on multiple scales. We find that all methods recreate mean values with reasonable accuracy, but for standard deviations, optimization outperforms the other methods. However, the optimization method is more likely to underestimate standard deviations and is vulnerable to non-convergence.
Comment: Main text: 15 pages, 3 figures, 2 tables. Supplementary material: 10 pages, 10 figures, 3 tables.
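The sampling-based mapping can be sketched with the standard LMS (lambda-mu-sigma) transformation that links an individual BMI value to its standardised z-score. This is a minimal sketch, assuming made-up LMS reference values (a real analysis would look them up from a growth reference for each child's age and sex); the function names are hypothetical, not from the paper:

```python
import random
import statistics

def bmi_to_zscore(bmi, L, M, S):
    """Cole's LMS transformation: z = ((BMI/M)**L - 1) / (L*S)."""
    return ((bmi / M) ** L - 1.0) / (L * S)

def map_aggregate_bmi(mean_bmi, sd_bmi, L, M, S, n=100_000, seed=1):
    """Sampling approach: draw individual BMI values consistent with the
    reported aggregate mean/SD, convert each one to a z-score, and
    summarise the converted sample on the new scale."""
    rng = random.Random(seed)
    z = [bmi_to_zscore(rng.gauss(mean_bmi, sd_bmi), L, M, S) for _ in range(n)]
    return statistics.mean(z), statistics.stdev(z)

# Illustrative LMS reference values (invented, not from a real growth chart)
zm, zs = map_aggregate_bmi(mean_bmi=18.5, sd_bmi=2.0, L=-1.5, M=17.0, S=0.11)
```

Because the transformation is nonlinear, the converted mean is not simply the z-score of the reported mean, which is exactly why individual-level sampling is needed.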
Automated generation of node-splitting models for assessment of inconsistency in network meta-analysis
Network meta-analysis enables the simultaneous synthesis of a network of clinical trials comparing any number of treatments. Potential inconsistencies between estimates of relative treatment effects are an important concern, and several methods to detect inconsistency have been proposed. This paper is concerned with the node-splitting approach, which is particularly attractive because of its straightforward interpretation, contrasting estimates from both direct and indirect evidence. However, node-splitting analyses are labour-intensive because each comparison of interest requires a separate model. It would be advantageous if node-splitting models could be estimated automatically for all comparisons of interest. We present an unambiguous decision rule to choose which comparisons to split, and prove that it selects only comparisons in potentially inconsistent loops in the network, and that all potentially inconsistent loops in the network are investigated. Moreover, the decision rule circumvents problems with the parameterisation of multi-arm trials, ensuring that model generation is trivial in all cases. Thus, our methods eliminate most of the manual work involved in using the node-splitting approach, enabling the analyst to focus on interpreting the results. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
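Under the simplifying assumption that "independent indirect evidence" means the two treatments remain connected once their direct comparisons are removed, the selection step can be sketched as below. This is an illustrative reduction, not the published decision rule, which additionally handles the parameterisation of multi-arm trials:

```python
from itertools import combinations

def splittable_comparisons(trials):
    """Return comparisons that have both direct evidence (studied head to
    head in some trial) and independent indirect evidence (the two
    treatments stay connected when all direct trials of that pair are
    ignored). Simplified sketch of a node-splitting selection rule."""
    # Direct evidence: every treatment pair compared within some trial
    direct = set()
    for arms in trials:
        direct.update(frozenset(p) for p in combinations(sorted(arms), 2))

    def connected(a, b, edges):
        # Depth-first search over the remaining comparison edges
        seen, stack = {a}, [a]
        while stack:
            node = stack.pop()
            if node == b:
                return True
            for e in edges:
                if node in e:
                    (other,) = e - {node}
                    if other not in seen:
                        seen.add(other)
                        stack.append(other)
        return False

    out = []
    for pair in direct:
        a, b = sorted(pair)
        if connected(a, b, direct - {pair}):
            out.append((a, b))
    return sorted(out)

# Triangle A-B-C plus a spur to D: only comparisons inside the loop qualify
trials = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
```

Here `splittable_comparisons(trials)` selects the three comparisons in the A-B-C loop and skips C-D, whose removal would disconnect D, so no indirect estimate exists to contrast with the direct one.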
Evidence Synthesis for Decision Making 6: Embedding Evidence Synthesis in Probabilistic Cost-effectiveness Analysis
When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out four generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling, where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
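The third approach can be illustrated with a toy example: sample the parameters jointly from a multivariate normal centred on the maximum likelihood estimates, then push each draw through the decision model so that the correlation induced by the synthesis is preserved. All numbers below are invented for illustration:

```python
import numpy as np

# Illustrative (made-up) estimates: two correlated treatment-effect
# parameters from a synthesis model, with their variance-covariance matrix.
mle = np.array([-0.50, -0.30])
vcov = np.array([[0.040, 0.015],   # the off-diagonal term carries the
                 [0.015, 0.030]])  # correlation induced by the synthesis

rng = np.random.default_rng(42)
draws = rng.multivariate_normal(mle, vcov, size=50_000)

# Propagate the joint uncertainty, correlation included, through a toy
# incremental-net-benefit calculation (hypothetical weights), rather than
# sampling each parameter independently.
inb = 1000 * draws[:, 0] - 800 * draws[:, 1]
print(inb.mean(), inb.std())
```

Sampling the two parameters independently would overstate the variance of `inb` here, because the positive covariance partly cancels in the linear combination.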
Evidence Synthesis for Decision Making 2: A Generalized Linear Modeling Framework for Pairwise and Network Meta-analysis of Randomized Controlled Trials
We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
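As a minimal illustration of the logit-link case, the sketch below pools log odds ratios from two-arm trials by frequentist fixed-effect inverse-variance weighting, rather than by the Bayesian MCMC estimation the framework itself uses; the trial counts are made up:

```python
import math

def pooled_log_odds_ratio(trials):
    """Fixed-effect pooling of log odds ratios (logit link) by inverse
    variance. A frequentist sketch of the pairwise case only; the paper
    estimates the full model by MCMC in WinBUGS."""
    num = den = 0.0
    for r1, n1, r0, n0 in trials:  # events/size in treatment, control arms
        lor = math.log(r1 * (n0 - r0) / (r0 * (n1 - r1)))
        var = 1 / r1 + 1 / (n1 - r1) + 1 / r0 + 1 / (n0 - r0)
        num += lor / var           # weight each trial by 1/variance
        den += 1 / var
    est = num / den
    se = math.sqrt(1 / den)
    return est, (est - 1.96 * se, est + 1.96 * se)

trials = [(15, 100, 25, 100), (8, 80, 14, 80)]  # invented counts
est, ci = pooled_log_odds_ratio(trials)
```

Swapping the logit for another link function (identity, log, probit) changes only how arm-level summaries map to the linear predictor; the weighting logic is the same.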
Evidence Synthesis for Decision Making 5: The Baseline Natural History Model
Most cost-effectiveness analyses consist of a baseline model that represents the absolute natural history under a standard treatment in a comparator set and a model for relative treatment effects. We review synthesis issues that arise in the construction of the baseline natural history model. We cover both the absolute response to treatment on the outcome measures on which comparative effectiveness is defined and the other elements of the natural history model, usually “downstream” of the shorter-term effects reported in trials. We recommend that the same framework be used to model the absolute effects of a “standard treatment” or placebo comparator as that used for synthesis of relative treatment effects, and that the baseline model be constructed independently from the model for relative treatment effects, to ensure that the latter are not affected by assumptions made about the baseline. However, simultaneous modeling of baseline and treatment effects could have some advantages when evidence is very sparse or when other research or study designs give strong reasons for believing in a particular baseline model. The predictive distribution, rather than the fixed effect or random effects mean, should be used to represent the baseline to reflect the observed variation in baseline rates. Joint modeling of multiple baseline outcomes based on data from trials or combinations of trial and observational data is recommended where possible, as this is likely to make better use of available evidence, produce more robust results, and ensure that the model is internally coherent.
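The distinction between the random effects mean and the predictive distribution can be sketched by simulation: a baseline for a new setting inherits both the uncertainty in the mean and the between-study variation. The estimates below are hypothetical, and for simplicity the between-study standard deviation is treated as known rather than given its own uncertainty:

```python
import random
import statistics

def baseline_predictive_draws(mu_hat, se_mu, tau_hat, n=50_000, seed=7):
    """Predictive distribution of a new study's baseline (e.g. on the
    log-odds scale): draw the random-effects mean with its standard
    error, then add between-study variation around that draw."""
    rng = random.Random(seed)
    return [rng.gauss(rng.gauss(mu_hat, se_mu), tau_hat) for _ in range(n)]

# Hypothetical baseline model: mean log-odds -2.0 (SE 0.1), between-study SD 0.5
draws = baseline_predictive_draws(mu_hat=-2.0, se_mu=0.1, tau_hat=0.5)
pred_sd = statistics.stdev(draws)
```

Using only the mean (SD 0.1 here) would understate baseline variation; the predictive SD combines both components, roughly sqrt(0.1**2 + 0.5**2).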