988 research outputs found
Replication in Genome-Wide Association Studies
Replication helps ensure that a genotype-phenotype association observed in a
genome-wide association (GWA) study represents a credible association and is
not a chance finding or an artifact due to uncontrolled biases. We discuss
prerequisites for exact replication, issues of heterogeneity, advantages and
disadvantages of different methods of data synthesis across multiple studies,
frequentist vs. Bayesian inferences for replication, and challenges that arise
from multi-team collaborations. While consistent replication can greatly
improve the credibility of a genotype-phenotype association, it may not
eliminate spurious associations due to biases shared by many studies.
Conversely, lack of replication in well-powered follow-up studies usually
invalidates the initially proposed association, although occasionally it may
point to differences in linkage disequilibrium or effect modifiers across
studies.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the
Institute of Mathematical Statistics (http://www.imstat.org),
DOI: http://dx.doi.org/10.1214/09-STS290
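The trade-offs among methods of data synthesis that the abstract mentions are easiest to see on a toy case. The sketch below is a minimal illustration, not the authors' analysis: the per-study log odds ratios and standard errors are invented, and it simply pools estimates for one variant with inverse-variance fixed-effect weights and adds Cochran's Q as a crude check of the heterogeneity issue the abstract raises.

```python
# Minimal sketch: inverse-variance fixed-effect meta-analysis of per-study
# log odds ratios for one SNP across a discovery study and replications.
# All numbers are hypothetical.
import numpy as np
from scipy import stats

log_or = np.array([0.18, 0.12, 0.15, 0.09])   # per-study log odds ratios
se     = np.array([0.05, 0.06, 0.07, 0.04])   # their standard errors

w = 1.0 / se**2                                # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)        # fixed-effect pooled estimate
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value for pooled effect

# Cochran's Q as a simple test of heterogeneity across studies.
Q = np.sum(w * (log_or - pooled)**2)
p_het = stats.chi2.sf(Q, df=len(log_or) - 1)

print(f"pooled log OR = {pooled:.3f} (SE {pooled_se:.3f}), p = {p:.2e}")
print(f"heterogeneity Q = {Q:.2f}, p = {p_het:.2f}")
```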
True and False Positive Rates for Different Criteria of Evaluating Statistical Evidence from Clinical Trials
Background: Until recently, a typical rule that has often been used for the endorsement of new medications by the Food and Drug Administration has been the existence of at least two statistically significant clinical trials favoring the new medication. This rule has consequences for the true positive rate (endorsement of an effective treatment) and the false positive rate (endorsement of an ineffective treatment).
Methods: In this paper, we compare true positive and false positive rates for different evaluation criteria through simulations that rely on (1) conventional p-values; (2) confidence intervals based on meta-analyses assuming fixed or random effects; and (3) Bayes factors. We varied threshold levels for statistical evidence, thresholds for what constitutes a clinically meaningful treatment effect, and the number of trials conducted.
Results: Our results show that Bayes factors, meta-analytic confidence intervals, and p-values often have similar performance. Bayes factors may perform better when the number of trials conducted is high and when trials have small sample sizes and clinically meaningful effects are not small, particularly in fields where the number of non-zero effects is relatively large.
Conclusions: Thinking about realistic effect sizes in conjunction with desirable levels of statistical evidence, as well as quantifying statistical evidence with Bayes factors, may help improve decision-making in some circumstances.
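As a rough illustration of the comparison the abstract describes, the simulation below estimates true and false positive rates for the two-significant-trials rule versus a fixed-effect meta-analysis criterion. It is a simplified sketch under assumptions of my own (a continuous outcome, three trials of 100 patients per arm, a normal-approximation z-test), and it omits the Bayes-factor and random-effects arms of the paper's comparison.

```python
# Illustrative simulation (assumed setup, not the paper's exact design):
# true/false positive rates of the "two significant trials" rule versus a
# fixed-effect meta-analysis criterion for a continuous outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(effect, n_per_arm=100, n_trials=3, n_sims=5000, alpha=0.05):
    two_trial_hits = 0
    meta_hits = 0
    for _ in range(n_sims):
        est, se, pvals = [], [], []
        for _ in range(n_trials):
            a = rng.normal(effect, 1.0, n_per_arm)   # treatment arm
            b = rng.normal(0.0,    1.0, n_per_arm)   # control arm
            diff = a.mean() - b.mean()
            s = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
            est.append(diff); se.append(s)
            pvals.append(2 * stats.norm.sf(abs(diff / s)))
        # Rule 1: at least two trials individually significant and favoring treatment.
        sig = sum(p < alpha and d > 0 for p, d in zip(pvals, est))
        two_trial_hits += (sig >= 2)
        # Rule 2: fixed-effect pooled 95% CI excludes zero on the beneficial side.
        w = 1.0 / np.array(se)**2
        pooled = np.sum(w * np.array(est)) / w.sum()
        pooled_se = np.sqrt(1.0 / w.sum())
        meta_hits += (pooled - 1.96 * pooled_se > 0)
    return two_trial_hits / n_sims, meta_hits / n_sims

tp_rule, tp_meta = simulate(effect=0.3)   # effective treatment -> true positive rates
fp_rule, fp_meta = simulate(effect=0.0)   # ineffective treatment -> false positive rates
print(f"two-trial rule: TPR={tp_rule:.2f}, FPR={fp_rule:.2f}")
print(f"meta-analysis : TPR={tp_meta:.2f}, FPR={fp_meta:.2f}")
```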
Reporting and interpretation of SF-36 outcomes in randomised trials: systematic review
Objective: To determine how often health surveys and quality of life evaluations reach different conclusions from those of primary efficacy outcomes and whether discordant results make a difference in the interpretation of trial findings.
Why Most Published Research Findings Are False
Summary: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies
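The essay's argument turns on the post-study probability that a claimed relationship is true (its positive predictive value, PPV), which depends on the pre-study odds R, power, the significance threshold alpha, and the bias proportion u. The snippet below works that formula through numerically; the formula follows the essay, but the example inputs are illustrative choices of my own.

```python
# Worked example of the PPV framework: without bias,
#   PPV = (1 - beta) * R / (R - beta * R + alpha),
# and with a bias proportion u the positives among true and null
# relationships are inflated accordingly.
def ppv(R, alpha=0.05, power=0.80, u=0.0):
    """Post-study probability that a claimed finding is true.

    R     : pre-study odds that a probed relationship is true
    alpha : type I error rate
    power : 1 - beta, power to detect a true relationship
    u     : proportion of analyses reported positive because of bias
    """
    beta = 1.0 - power
    true_pos  = R * ((1.0 - beta) + u * beta)   # true relationships reported positive
    false_pos = alpha + u * (1.0 - alpha)       # null relationships reported positive
    return true_pos / (true_pos + false_pos)

# Well-powered study in a field where 1 in 2 probed relationships is real:
print(f"{ppv(R=1.0):.2f}")                         # ~0.94
# Exploratory field (1 in 1000 true), modest power, some bias:
print(f"{ppv(R=0.001, power=0.20, u=0.10):.3f}")   # ~0.002, far more likely false than true
```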