40 research outputs found
Meta-analyses in psychology often overestimate evidence for and size of effects
Adjusting for publication bias is essential when drawing meta-analytic inferences. However, most methods that adjust for publication bias do not perform well across a range of research conditions, such as the degree of heterogeneity in effect sizes across studies. Sladekova et al. 2022 (Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods. Psychol. Methods) tried to circumvent this complication by selecting the methods that are most appropriate for a given set of conditions, and concluded that publication bias on average causes only minimal overestimation of effect sizes in psychology. However, this approach suffers from a ‘Catch-22’ problem—to know the underlying research conditions, one needs to have adjusted for publication bias correctly, but to correctly adjust for publication bias, one needs to know the underlying research conditions. To alleviate this problem, we conduct an alternative analysis, robust Bayesian meta-analysis (RoBMA), which is not based on model-selection but on model-averaging. In RoBMA, models that predict the observed results better are given correspondingly larger weights. A RoBMA reanalysis of Sladekova et al.’s dataset reveals that more than 60% of meta-analyses in psychology notably overestimate the evidence for the presence of the meta-analytic effect and more than 50% overestimate its magnitude.
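The core idea of model-averaging described above can be sketched in a few lines: each candidate model's effect estimate is weighted by its posterior model probability, so models that predict the data better contribute more. The sketch below uses the standard BIC approximation to the marginal likelihood; it is an illustration only, not the actual RoBMA implementation (which is provided by the RoBMA R package), and all numbers are hypothetical.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior model probabilities:
    p(M_k | data) is proportional to exp(-BIC_k / 2)."""
    bics = np.asarray(bics, dtype=float)
    rel = np.exp(-(bics - bics.min()) / 2.0)  # subtract min for numerical stability
    return rel / rel.sum()

def model_averaged_effect(estimates, bics):
    """Weight each model's effect size estimate by its posterior probability."""
    return float(np.dot(bma_weights(bics), estimates))

# Hypothetical candidate models: null effect, unadjusted effect,
# and a selection-adjusted effect, with made-up BICs.
estimates = [0.00, 0.35, 0.12]
bics = [110.2, 108.9, 107.5]
print(model_averaged_effect(estimates, bics))  # ≈ 0.167
```

Because the averaged estimate is pulled toward whichever models the data favour, it avoids the all-or-nothing commitment of selecting a single "best" adjustment method.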
Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods
Publication bias poses a challenge for accurately synthesising research findings using meta-analysis. A number of statistical methods have been developed to combat this problem by adjusting the meta-analytic estimates. Previous studies tended to apply these methods without regard to optimal conditions for each method’s performance. The present study sought to estimate the typical effect size attenuation of these methods when they are applied to real meta-analytic datasets that match the conditions under which each method is known to remain relatively unbiased (such as sample size, level of heterogeneity, population effect size, and the level of publication bias). 433 datasets from 90 papers published in psychology journals were reanalysed using a selection of publication bias adjustment methods. The downward adjustment found in our sample was minimal, with the greatest identified attenuation of b = –0.032, 95% Highest Posterior Density interval (HPD) ranging from –0.055 to –0.009, for the Precision Effect Test (PET). Some methods tended to adjust upwards, and this was especially true for datasets with a sample size smaller than ten. We propose that researchers should seek to explore the full range of plausible estimates for the effects they are studying and note that these methods may not be able to combat bias in small samples (with fewer than ten primary studies). We argue that although the effect size attenuation we found tended to be minimal, this should not be taken as an indication of low levels of publication bias in psychology. We discuss the findings with reference to new developments in Bayesian methods for publication bias adjustment, and the recent methodological reforms in psychology.
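The Precision Effect Test (PET) mentioned above is, at its core, a weighted meta-regression of observed effect sizes on their standard errors: small-study bias shows up as a positive slope, and the intercept (the effect predicted at zero standard error) serves as the bias-adjusted estimate. A minimal sketch, with hypothetical data chosen to show the classic funnel-plot asymmetry (smaller studies reporting larger effects):

```python
import numpy as np

def pet_estimate(effects, ses):
    """PET: weighted least-squares regression of effect sizes on their
    standard errors, with inverse-variance weights (1 / se^2).
    Returns (intercept, slope); the intercept is the adjusted estimate."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(ses), ses])  # intercept + SE term
    w = 1.0 / ses**2
    # Solve the weighted normal equations: (X' W X) b = X' W y
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ effects)
    return float(beta[0]), float(beta[1])

# Hypothetical studies: effects shrink as precision grows.
effects = [0.55, 0.40, 0.30, 0.22, 0.18]
ses     = [0.30, 0.22, 0.15, 0.10, 0.08]
b0, b1 = pet_estimate(effects, ses)  # b0 well below the naive mean
```

In practice this regression is run with dedicated meta-analysis software, which also provides the HPD intervals reported above; the sketch only shows why the intercept, rather than the raw pooled mean, is taken as the adjusted effect.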
Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics
Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.
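The "probability of the presence of an effect" reported above is a posterior probability derived from Bayesian model comparison. Assuming equal prior odds (an assumption of this sketch, not a claim about the paper's exact specification), it follows directly from the Bayes factor BF10 comparing the effect model against the null:

```python
def prob_effect(bf10, prior_odds=1.0):
    """Posterior probability that an effect is present, given the Bayes
    factor BF10 (effect vs. no effect) and the prior odds.
    posterior_odds = BF10 * prior_odds; probability = odds / (1 + odds)."""
    post_odds = bf10 * prior_odds
    return post_odds / (1.0 + post_odds)

# Illustrative values only, not taken from the paper's data:
print(round(prob_effect(999), 3))   # 0.999 — overwhelming evidence
print(round(prob_effect(0.42), 3))  # 0.296 — evidence now leans toward the null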
Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics
Publication selection bias undermines the systematic accumulation of
evidence. To assess the extent of this problem, we survey over 68,000
meta-analyses containing over 700,000 effect size estimates from medicine
(67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563),
and economics (327/91,421). Our results indicate that meta-analyses in
economics are the most severely contaminated by publication selection bias,
closely followed by meta-analyses in environmental sciences and psychology,
whereas meta-analyses in medicine are contaminated the least. After adjusting
for publication selection bias, the median probability of the presence of an
effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in
psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to
29.7% in medicine. The median absolute effect sizes (in terms of standardized
mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d =
0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental
sciences, and from d = 0.24 to d = 0.13 in medicine
Teaching open and reproducible scholarship: A critical review of the evidence base for current pedagogical methods and their outcomes
In recent years, the scientific community has called for improvements in the credibility, robustness and reproducibility of research, characterized by increased interest and promotion of open and transparent research practices. While progress has been positive, there is a lack of consideration about how this approach can be embedded into undergraduate and postgraduate research training. Specifically, a critical overview of the literature which investigates how integrating open and reproducible science may influence student outcomes is needed. In this paper, we provide the first critical review of literature surrounding the integration of open and reproducible scholarship into teaching and learning and its associated outcomes in students. Our review highlighted how embedding open and reproducible scholarship appears to be associated with (i) students' scientific literacies (i.e. students’ understanding of open research, consumption of science and the development of transferable skills); (ii) student engagement (i.e. motivation and engagement with learning, collaboration and engagement in open research) and (iii) students' attitudes towards science (i.e. trust in science and confidence in research findings). However, our review also identified a need for more robust and rigorous methods within pedagogical research, including more interventional and experimental evaluations of teaching practice. We discuss implications for teaching and learning scholarship
Recommended from our members
Teaching open and reproducible scholarship: a critical review of the evidence base for current pedagogical methods and their outcomes
In recent years, the scientific community has called for improvements in the credibility, robustness, and reproducibility of research, characterized by increased interest and promotion of open and transparent research practices. While progress has been positive, there is a lack of consideration about how this approach can be embedded into undergraduate and postgraduate research training. Specifically, a critical overview of the literature which investigates how integrating open and reproducible science may influence student outcomes is needed. In this paper, we provide the first critical review of literature surrounding the integration of open and reproducible scholarship into teaching and learning and its associated outcomes in students. Our review highlighted how embedding open and reproducible scholarship appears to be associated with (1) students’ scientific literacies (i.e., students’ understanding of open research, consumption of science, and the development of transferable skills); (2) student engagement (i.e., motivation and engagement with learning, collaboration, and engagement in open research), and (3) students’ attitudes towards science (i.e., trust in science and confidence in research findings). However, our review also identified a need for more robust and rigorous methods within pedagogical research, including more interventional and experimental evaluations of teaching practice. We discuss implications for teaching and learning scholarship
pb_methods_analysis_osf
This folder contains all the R project files. To download all the files at once, download the compressed folder pb_methods_analysis_osf.zi