    Publication Bias Against Null Results

    Studies suggest a bias against the publication of null (p > .05) results. Instead of significance, we advocate reporting effect sizes and confidence intervals, and using replication studies. If statistical tests are used, power tests should accompany them.

    Keywords: publication, bias, null results
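The recommendation above (effect sizes with confidence intervals, plus power analyses alongside any tests) can be made concrete with textbook formulas. A minimal sketch, not taken from the paper, using the standard normal approximations for Cohen's d and for the power of a two-sided two-sample test:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cohens_d_ci(m1, m2, sd1, sd2, n1, n2, z=1.96):
    """Cohen's d for two groups with a normal-approximation 95% CI."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample test at alpha = .05."""
    delta = d * math.sqrt(n_per_group / 2.0)
    return phi(delta - z_crit) + phi(-delta - z_crit)
```

For example, detecting d = 0.5 with 64 subjects per group yields roughly 80% power, which is why a non-significant result from a much smaller sample says little about the null.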

    Publication bias

    Identification of and correction for publication bias

    Some empirical results are more likely to be published than others. Such selective publication leads to biased estimates and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.
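The logic of correcting for a known conditional publication probability can be illustrated with a small simulation. The inverse-probability weighting below is a deliberately simplified stand-in for the paper's median-unbiased estimators, and every number in it (true effect 0.2, standard error 0.1, a 10% publication rate for insignificant results) is an assumption chosen for illustration:

```python
import random

random.seed(0)

theta, se = 0.2, 0.1          # true effect and per-study standard error (assumed)
p_sig, p_null = 1.0, 0.10     # assumed publication probabilities by significance

published, weights = [], []
for _ in range(20_000):
    est = random.gauss(theta, se)                    # latent study estimate
    p_pub = p_sig if abs(est / se) > 1.96 else p_null
    if random.random() < p_pub:                      # selective publication
        published.append(est)
        weights.append(1.0 / p_pub)                  # inverse-probability weight

naive = sum(published) / len(published)
corrected = sum(w * e for w, e in zip(weights, published)) / sum(weights)
```

The naive mean over published studies overstates the effect because significant (hence larger) estimates are over-represented; reweighting each published estimate by the inverse of its publication probability recovers a value close to the true 0.2.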

    Effects of publication bias on conservation planning

    Conservation planning needs reliable information on spatial patterns of biodiversity. However, existing data sets are skewed: some habitats, taxa, and locations are under-represented. Here, we map geographic publication density at the sub-national scale of individual 'provinces'. We query the Web of Science catalogues SCI and SSCI for biodiversity-related publications including country and province names (for the period 1993-2016). We combine these data with other provincial-scale factors hypothesised to affect research (i.e. economic development, human presence, infrastructure and remoteness). We show that sites that appear to be understudied, compared with the biodiversity expected from their bioclimatic conditions, are likely to have been inaccessible to researchers for a diversity of reasons, amongst which current or recent armed conflicts are notable. Finally, we create a priority list of provinces where geographic publication bias is of most concern, and discuss how our provincial-scale model can assist in adjusting for publication biases in conservation planning.

    Comment: 10 pages; 3 figures; 1 table; R code at https://github.com/raffael-hickisch; data at https://zenodo.org/record/998889; interactive at http://bit.ly/publication_density_ma

    Is the Time-Series Evidence on Minimum Wage Effects Contaminated by Publication Bias?

    Publication bias in economics may lead to selective specification searches that result in overreporting in the published literature of results consistent with economists' priors. In reassessing the published time-series studies on the employment effects of minimum wages, some recent research has reported evidence consistent with publication bias, and concluded that the most plausible explanation of this evidence is "editors' and authors' tendencies to look for negative and statistically significant estimates of the employment effect of the minimum wage" (Card and Krueger, 1995a, p. 242). We present results indicating that the evidence is more consistent with a change in the estimated minimum wage effect over time than with publication bias. More generally, we demonstrate that existing approaches to testing for publication bias may generate spurious evidence of such bias when there are structural changes in some parameters. We then suggest an alternative strategy for testing for publication bias that is more immune to structural change. Although changing parameters may be uncommon in the clinical trials on which most of the existing literature on publication bias is based, they are much more plausible in economics.

    American trade policy towards Sub-Saharan Africa – a meta-analysis of AGOA

    Twelve econometric studies investigating the impact of AGOA presented in this paper have reported 174 different estimates. To test for publication bias and for a genuine empirical impact of AGOA, we resort to a meta-analysis, which provides a formal means of testing for both. The result shows significant publication bias in the selected studies. However, in a few cases the test for a genuine effect is passed successfully. The results of the meta-analysis indicate that AGOA increased the trade of beneficiaries by 13.2%.
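The abstract does not spell out its meta-analytic machinery, but a common way to jointly test for publication bias and a genuine effect is the FAT-PET (funnel-asymmetry / precision-effect) regression of each study's t-statistic on its precision. A self-contained sketch, with the simulated numbers chosen purely for illustration (the 0.132 echoes the 13.2% estimate but is not the study's data):

```python
import random

def fat_pet(effects, ses):
    """FAT-PET regression: t_i = b0 + b1 * (1/se_i), fit by OLS.
    An intercept b0 far from zero signals funnel asymmetry (publication
    bias); the slope b1 is a bias-corrected estimate of the effect."""
    y = [e / s for e, s in zip(effects, ses)]   # t-statistics
    x = [1.0 / s for s in ses]                  # precisions
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx                              # OLS slope
    b0 = my - b1 * mx                           # OLS intercept
    return b0, b1

# A simulated unbiased literature: 500 estimates around a true effect of 0.132.
random.seed(1)
ses = [random.uniform(0.05, 0.5) for _ in range(500)]
effects = [0.132 + random.gauss(0.0, s) for s in ses]
b0, b1 = fat_pet(effects, ses)
```

With no selection, b0 sits near zero and b1 near the true effect; under selective publication the intercept tends to drift away from zero, which is the asymmetry the test picks up.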

    Assessment of publication bias and outcome reporting bias in systematic reviews of health services and delivery research: A meta-epidemiological study

    Strategies to identify and mitigate publication bias and outcome reporting bias are frequently adopted in systematic reviews of clinical interventions, but it is not clear how often these are applied in systematic reviews relating to quantitative health services and delivery research (HSDR). We examined whether these biases are mentioned and/or otherwise assessed in HSDR systematic reviews, and evaluated associated factors to inform future practice. We randomly selected 200 quantitative HSDR systematic reviews published in English between 2007 and 2017 from the Health Systems Evidence database (www.healthsystemsevidence.org). We extracted data on factors that may influence whether or not authors mention and/or assess publication bias or outcome reporting bias. We found that 43% (n = 85) of the reviews mentioned publication bias and 10% (n = 19) formally assessed it. Outcome reporting bias was mentioned and assessed in 17% (n = 34) of all the systematic reviews. An insufficient number of studies, heterogeneity and a lack of pre-registered protocols were the most commonly reported impediments to assessing the biases. In multivariable logistic regression models, both mentioning and formal assessment of publication bias were associated with: inclusion of a meta-analysis; being a review of intervention rather than association studies; higher journal impact factor; and reporting the use of systematic review guidelines. Assessment of outcome reporting bias was associated with: being an intervention review; authors reporting the use of Grading of Recommendations, Assessment, Development and Evaluations (GRADE); and inclusion of only controlled trials. Publication bias and outcome reporting bias are infrequently assessed in HSDR systematic reviews. This may reflect the inherent heterogeneity of HSDR evidence and different methodological approaches to synthesising the evidence, lack of awareness of such biases, limits of current tools and lack of pre-registered study protocols for assessing such biases. Strategies to help raise awareness of the biases, and methods to minimise their occurrence and mitigate their impacts on HSDR systematic reviews, are needed.