
    Pitfalls of using the risk ratio in meta‐analysis

    For meta-analysis of studies that report outcomes as binomial proportions, the most popular measure of effect is the odds ratio (OR), usually analyzed as log(OR). Many meta-analyses instead use the risk ratio (RR) and its logarithm because of its simpler interpretation. Although log(OR) and log(RR) are both unbounded, use of log(RR) requires that estimates remain compatible with study-level event rates in the interval (0, 1). These complications pose a particular challenge for random-effects models, both in applications and in generating data for simulations. As background, we review the conventional random-effects model and then binomial generalized linear mixed models (GLMMs) with the logit link function, which do not have these complications. We then focus on log-binomial models and explore the implications of using them; theoretical calculations and simulation show evidence of biases. The main competitors to the binomial GLMMs use the beta-binomial (BB) distribution, either in BB regression or by maximizing a BB likelihood; a simulation produces mixed results. Two examples and an examination of Cochrane meta-analyses that used RR suggest bias in the results from the conventional inverse-variance-weighted approach. Finally, we comment on other measures of effect that have range restrictions, including the risk difference, and outline further research.
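    The range restriction the abstract describes can be made concrete: if the control-group risk is p0, any risk ratio must satisfy RR < 1/p0 so that the treated risk p1 = RR * p0 stays below 1, whereas the odds ratio has no such ceiling. A minimal sketch (illustrative numbers, not the paper's data):

```python
def max_rr(p0):
    """Largest risk ratio compatible with a control-group risk p0,
    since the treated risk RR * p0 must stay in (0, 1)."""
    return 1.0 / p0

def odds_ratio(p1, p0):
    """Odds ratio for treated risk p1 vs control risk p0;
    unbounded as p1 approaches 1."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

p0 = 0.4
print(max_rr(p0))            # RR cannot exceed 2.5 when p0 = 0.4
print(odds_ratio(0.99, p0))  # OR grows without bound as p1 -> 1
```

This is why simulating from a random-effects model on the log(RR) scale can generate parameter values incompatible with valid event rates, while the logit scale cannot.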

    Evidence-Based Interventions and Strategies for the Grand Challenges Approach: The Need for Judgement

    What is the value of evidence-based interventions in addressing “Grand Challenges”? Building upon the EPOS Grand Challenges work (Sakhrani et al., 2017), this paper explores whether evidence-based approaches developed for management and policy are relevant to addressing Grand Challenges. It discusses the criticisms of the Evidence-Based Management approach and argues that evidence is a necessary, but not sufficient, input in the decision-making process of addressing Grand Challenges.

    Cluster randomised trials in the medical literature: two bibliometric surveys

    Background: Several reviews of published cluster randomised trials have reported that about half did not take clustering into account in the analysis, which was thus incorrect and potentially misleading. In this paper I ask whether cluster randomised trials are increasing in both number and quality of reporting. Methods: A computer search for papers on cluster randomised trials since 1980, and a hand search of trial reports published in selected volumes of the British Medical Journal over 20 years. Results: There has been a large increase in recent years in the numbers of methodological papers and of trial reports using the term 'cluster random', with about equal numbers of each type of paper. The British Medical Journal contained more such reports than any other journal, and in this journal there was a corresponding increase over time in the number of trials where subjects were randomised in clusters. In 2003, all reports showed awareness of the need to allow for clustering in the analysis; in 1993 and earlier, clustering was ignored in most such trials. Conclusion: Cluster trials are becoming more frequent and reporting is of higher quality. Perhaps statistician pressure works.
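    The clustering adjustment the surveys check for is usually expressed through the standard design effect, DEFF = 1 + (m − 1) × ICC. A minimal sketch of how ignoring it overstates the effective sample size (illustrative values, not taken from the surveys):

```python
def design_effect(m, icc):
    """Variance inflation from cluster randomisation:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size
    and ICC is the intracluster correlation coefficient."""
    return 1 + (m - 1) * icc

# Example: 20 clusters of 30 patients, ICC = 0.05
deff = design_effect(30, 0.05)      # 2.45
effective_n = 600 / deff            # 600 subjects act like ~245
print(deff, round(effective_n))
```

An analysis that ignores clustering implicitly sets DEFF = 1, which is the error the reviews describe: standard errors are too small and p-values too optimistic.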

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity.
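    The coverage equality the abstract builds on can be checked generically. A minimal Monte Carlo sketch using the textbook 95% interval for a normal mean with known variance (a standard illustration, not the paper's confidence-measure construction):

```python
import math
import random

# Repeatedly sample, form the 95% CI, and count how often it covers
# the true mean -- the frequentist coverage rate that a confidence
# measure's probabilities are required to match.
random.seed(1)
TRUE_MU, N, REPS = 0.0, 50, 2000
hits = 0
for _ in range(REPS):
    xs = [random.gauss(TRUE_MU, 1.0) for _ in range(N)]
    mean = sum(xs) / N
    half = 1.96 / math.sqrt(N)          # known sigma = 1
    if mean - half <= TRUE_MU <= mean + half:
        hits += 1
print(hits / REPS)  # close to 0.95
```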

    Results and Outcome Reporting in ClinicalTrials.gov: What Makes It Happen?

    At the end of the past century there were multiple concerns regarding lack of transparency in the conduct of clinical trials, as well as ethical and scientific issues affecting trial design and reporting. In 2000 the ClinicalTrials.gov data repository was developed and deployed to serve the public and scientific communities with valid data on clinical trials. Later, in order to increase the completeness of deposited data and the transparency of medical research, a set of restraints was imposed, making results deposition compulsory in many cases. We investigated the efficiency of results deposition and outcome reporting, which factors had a positive impact on providing the information of interest and which made it more difficult, and whether efficiency depended on what kind of institution sponsored a trial. Data from the ClinicalTrials.gov repository were classified by sponsor type, and the odds ratio was calculated for results and outcome reporting across sponsor classes. As of 01/01/2012, 118,602 clinical trial data deposits had been made to the repository, coming from 9,068 different sources. 35,344 (29.8%) of them are assigned as FDA regulated and 25,151 (21.2%) as controlled under Section 801. Despite multiple regulatory requirements, only about 35% of trials had clinical study results deposited; the maximum, 55.56% of trials with results, was observed for trials completed in 2008. The imposed restraints had the most positive impact on results deposition by hospitals and clinics. Health care companies showed much higher efficiency than the other investigated classes, both in the fraction of trials with results and in providing at least one outcome for their trials. They also deposited results more often than others when not strictly required, particularly for non-interventional studies.
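    The sponsor-class comparison rests on the ordinary odds ratio for a 2×2 table. A minimal sketch with hypothetical counts (not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a, b -- trials with / without results in sponsor class 1
    c, d -- trials with / without results in sponsor class 2."""
    return (a * d) / (b * c)

# Hypothetical counts: class 1 reports 550 of 1000 trials,
# class 2 reports 350 of 1000 trials.
print(odds_ratio(550, 450, 350, 650))  # OR > 1: class 1 reports more often
```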

    Appointment time: Disability and neoliberal workfare temporalities

    My primary interest in this article is to reveal the complexity of neoliberal temporalities in the lives of disabled people forced to participate in workfare regimes to maintain access to social security measures and programming. Drawing upon contemporary debates within the social study of time, this article explicates what Jessop refers to as the sovereignty of time that has emerged with the global adoption of neoliberal workfare regimes. It is argued that the central role of temporality within the globalizing project of neoliberal workfare, and the positioning of disability within these global macro-structural processes, requires the sociological imagination to return to time both as a theme and as a methodology.

    Residual confounding after adjustment for age: a minor issue in breast cancer screening effectiveness

    Residual confounding after adjustment for age is the major criticism of observational studies of breast cancer screening effectiveness. We developed realistic scenarios for the prevalence and strength of risk factors in screened and unscreened groups, and explored the impact of residual confounding bias. Our results demonstrate that residual confounding bias is a minor issue in screening programme evaluations.
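    The kind of sensitivity calculation the abstract describes can be sketched with the classical bias-factor formula for an unmeasured binary confounder (a generic textbook formula, not necessarily the authors' exact model):

```python
def confounding_bias(rr_cd, p_screened, p_unscreened):
    """Multiplicative bias in an observed risk ratio from an
    unmeasured binary confounder:
    rr_cd        -- risk ratio linking confounder to outcome
    p_screened   -- confounder prevalence among the screened
    p_unscreened -- confounder prevalence among the unscreened"""
    return (p_screened * (rr_cd - 1) + 1) / (p_unscreened * (rr_cd - 1) + 1)

# Hypothetical scenario: a moderate risk factor (RR = 1.5) with a
# modest prevalence imbalance (30% vs 25%) barely moves the estimate.
print(round(confounding_bias(1.5, 0.30, 0.25), 3))  # ~1.022
```

Under scenarios like this, the bias factor stays close to 1, which is the sense in which such residual confounding would be a minor issue.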