
    Meta-analysis: Shortcomings and potential


    Meta-analyzing partial correlation coefficients using Fisher's z transformation

    The partial correlation coefficient (PCC) is used to quantify the linear relationship between two variables while controlling for other variables. Researchers frequently synthesize PCCs in a meta-analysis, but two assumptions of the common equal-effect and random-effects meta-analysis models are by definition violated. First, the sampling variance of the PCC cannot be assumed to be known, because the sampling variance is a function of the PCC itself. Second, the sampling distribution of each primary study's PCC is not normal, since PCCs are bounded between -1 and 1. I advocate applying Fisher's z transformation to PCCs, analogous to its use for Pearson correlation coefficients, because the Fisher's z transformed PCC is independent of its sampling variance and its sampling distribution more closely follows a normal distribution. Reproducing a simulation study by Stanley and Doucouliagos and adding meta-analyses based on Fisher's z transformed PCCs shows that meta-analyses of Fisher's z transformed PCCs had lower bias and root mean square error than meta-analyses of PCCs. Hence, meta-analyzing Fisher's z transformed PCCs is a viable alternative to meta-analyzing PCCs, and I recommend accompanying any meta-analysis based on PCCs with one using Fisher's z transformed PCCs to assess the robustness of the results.
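
    As a minimal illustration of the transformation described above (not code from the paper), the Python sketch below converts a PCC into Fisher's z and an approximate sampling variance; the variance formula 1/(n - s - 3), with s the number of controlled variables, is the commonly used approximation and is stated here as an assumption.

        import math

        def fisher_z_pcc(pcc, n, s):
            """Fisher's z transformation of a partial correlation coefficient (PCC).

            pcc : partial correlation, bounded between -1 and 1
            n   : sample size of the primary study
            s   : number of variables controlled for

            Returns the transformed value and its approximate sampling variance,
            assumed here to be 1 / (n - s - 3), analogous to 1 / (n - 3) for a
            Pearson correlation.
            """
            z = 0.5 * math.log((1 + pcc) / (1 - pcc))  # identical to math.atanh(pcc)
            var_z = 1.0 / (n - s - 3)
            return z, var_z

        # Example: a PCC of 0.25 from a study with n = 100 controlling for 2 variables
        z, var_z = fisher_z_pcc(0.25, n=100, s=2)

    The transformed values and their variances could then be passed to any standard equal-effect or random-effects meta-analysis routine.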

    Publication bias


    Analyzing data of a Multilab replication project with individual participant data meta-analysis: A tutorial

    Multilab replication projects such as Registered Replication Reports (RRR) and Many Labs projects are used to replicate an effect in different labs. Data of these projects are usually analyzed using conventional meta-analysis methods. However, this is not the optimal approach, because it does not make full use of the available data: summary data rather than participant data are analyzed. I propose to analyze data of multilab replication projects with individual participant data (IPD) meta-analysis, where the participant data are analyzed directly. The main advantages of IPD meta-analysis are that it generally has larger statistical power to detect moderator effects and that it allows drawing conclusions at the participant and lab level. A disadvantage is that IPD meta-analysis is more complex than conventional meta-analysis. In this tutorial, I illustrate IPD meta-analysis using the RRR by McCarthy and colleagues, and I provide R code and recommendations to help researchers apply these methods.
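
    The tutorial itself provides R code; purely as an illustration of the one-stage IPD idea (modeling participant-level data with a random effect for lab), a hypothetical Python analogue using statsmodels could look as follows. The variable names (lab, condition, outcome) and the data are made up for the example.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical participant-level data from a multilab replication project:
        # one row per participant, with a lab identifier, the experimental condition
        # (0 = control, 1 = treatment), and the outcome score.
        data = pd.DataFrame({
            "lab": ["A"] * 6 + ["B"] * 6 + ["C"] * 6,
            "condition": [0, 0, 0, 1, 1, 1] * 3,
            "outcome": [3.9, 4.2, 4.0, 4.8, 5.1, 4.9,
                        4.1, 3.8, 4.3, 5.0, 4.7, 5.2,
                        4.0, 4.4, 3.7, 4.9, 5.3, 4.6],
        })

        # One-stage IPD meta-analysis: a mixed-effects model with a random intercept
        # per lab; with more data, a random slope for condition could be added.
        model = smf.mixedlm("outcome ~ condition", data, groups=data["lab"])
        fit = model.fit()
        print(fit.summary())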

    Correcting for outcome reporting bias in a meta-analysis: A meta-regression approach

    Outcome reporting bias (ORB) refers to the bias caused by researchers selectively reporting outcomes within a study based on their statistical significance. ORB leads to inflated effect size estimates in meta-analysis if, due to ORB, only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect sizes as a moderator in a meta-regression model. An estimate of this variability can be computed by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that the effect size in meta-analyses may be severely overestimated without correcting for ORB. Estimates of CORB are close to the true effect size when the overestimation caused by ORB is largest. Applying the method to a meta-analysis on the effect of playing violent video games on aggression showed that the effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis. We provide annotated R code and functions to help researchers apply the CORB method.
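
    As a rough sketch of the meta-regression idea behind CORB (not the authors' implementation, which is provided as R code), the reported effect sizes can be regressed on an estimate of the variability of the outcomes' effect sizes, weighting by inverse sampling variance; the intercept then approximates the ORB-corrected effect size. All numbers below are placeholders.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical per-study data: reported effect size, its sampling variance,
        # and an estimate of the variability of the outcomes' effect sizes within
        # the study (in CORB this estimate is obtained by assuming a correlation
        # among the outcomes).
        effect     = np.array([0.42, 0.35, 0.28, 0.51, 0.30])
        samp_var   = np.array([0.020, 0.015, 0.010, 0.030, 0.012])
        outcome_sd = np.array([0.14, 0.12, 0.10, 0.17, 0.11])

        # Weighted meta-regression with the variability estimate as moderator;
        # the intercept serves as the effect size estimate corrected for ORB.
        X = sm.add_constant(outcome_sd)
        fit = sm.WLS(effect, X, weights=1.0 / samp_var).fit()
        print(fit.params)  # fit.params[0] is the corrected estimate (illustration only)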

    The effect of publication bias on the Q test and assessment of heterogeneity

    One of the main goals of meta-analysis is to test for and estimate the heterogeneity of effect sizes. We examined the effect of publication bias on the Q test and assessments of heterogeneity as a function of true heterogeneity, publication bias, true effect size, number of studies, and variation of sample sizes. The present study has two main contributions and is relevant to all researchers conducting meta-analysis. First, we show when and how publication bias affects the assessment of heterogeneity. The expected values of heterogeneity measures H² and I² were analytically derived, and the power and Type I error rate of the Q test were examined in a Monte Carlo simulation study. Our results show that the effect of publication bias on the Q test and assessment of heterogeneity is large, complex, and nonlinear. Publication bias can both dramatically decrease and increase heterogeneity in true effect size, particularly if the number of studies is large and population effect size is small. We therefore conclude that the Q test of homogeneity and heterogeneity measures H² and I² are generally not valid when publication bias is present. Our second contribution is that we introduce a web application, Q-sense, which can be used to determine the impact of publication bias on the assessment of heterogeneity within a certain meta-analysis and to assess the robustness of the meta-analytic estimate to publication bias. Furthermore, we apply Q-sense to two published meta-analyses, showing how publication bias can result in invalid estimates of effect size and heterogeneity.
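
    For reference, the Q statistic and the heterogeneity measures H² and I² discussed above can be computed from the primary studies' effect sizes and sampling variances with the standard formulas, as in this minimal Python sketch (the example data are made up).

        import numpy as np
        from scipy import stats

        def heterogeneity(effects, variances):
            """Cochran's Q test and the H^2 and I^2 heterogeneity measures."""
            y = np.asarray(effects, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
            mu_fixed = np.sum(w * y) / np.sum(w)           # equal-effect (fixed) estimate
            q = np.sum(w * (y - mu_fixed) ** 2)            # Q statistic
            df = len(y) - 1
            p = stats.chi2.sf(q, df)                       # p value of the Q test
            h2 = q / df                                    # H^2
            i2 = max(0.0, (q - df) / q) * 100              # I^2 as a percentage
            return q, p, h2, i2

        q, p, h2, i2 = heterogeneity([0.2, 0.5, 0.1, 0.4], [0.02, 0.03, 0.015, 0.025])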

    Do behavioral observations make people catch the goal? A meta-analysis on goal contagion

    Goal contagion is a social-cognitive approach to understanding how other people's behavior influences one's goal pursuit: an observation of goal-directed behavior leads to an automatic inference and activation of the goal, which can then be adopted and pursued by the observer. We conducted a meta-analysis focusing on experimental studies with a goal condition, depicting goal-directed behavior, and a control condition. We searched four databases (PsycINFO, Web of Science, ScienceDirect, and JSTOR) and the citing literature on Google Scholar, and eventually included 48 effects from published studies, unpublished studies, and registered reports based on 4751 participants. The meta-analytic summary effect was small (g = 0.30, 95% CI [0.21; 0.40], tau² = 0.05, 95% CI [0.03; 0.13]), implying that goal contagion might occur for some people, compared to when the goal is not perceived in behavior. However, the effect appears to be biased by the current publication system: several publication bias tests suggest the effect could be about half the size (e.g., selection model: g = 0.15, 95% CI [-0.02; 0.32]). Further, we could not detect any potential moderator (such as the presentation of the manipulation or the contrast of the control condition). We suggest that future research on goal contagion make use of open science practices to advance research in this domain.

    Meta-analyzing the multiverse: A peek under the hood of selective reporting

    Researcher degrees of freedom refer to arbitrary decisions in the execution and reporting of hypothesis-testing research that allow for many possible outcomes from a single study. Selective reporting of results (p-hacking) from this “multiverse” of outcomes can inflate effect size estimates and false positive rates. We studied the effects of researcher degrees of freedom and selective reporting using empirical data from extensive multistudy projects in psychology (Registered Replication Reports) featuring 211 samples and 14 dependent variables. We used a counterfactual design to examine what biases could have emerged if the studies (and ensuing meta-analyses) had not been preregistered and had been subjected to selective reporting based on the significance of the outcomes in the primary studies. Our results show the substantial variability in effect sizes that researcher degrees of freedom can create in relatively standard psychological studies, and how selective reporting of outcomes can alter conclusions and introduce bias in meta-analysis. Despite the typically thousands of outcomes in the multiverses of the 294 included studies, significant effect sizes in the hypothesized direction emerged in only about 30% of studies. We also observed that the effect of a particular researcher degree of freedom was inconsistent across replication studies using the same protocol, meaning multiverse analyses often fail to replicate across samples. We recommend that hypothesis-testing researchers preregister their preferred analysis and openly report multiverse analyses. We propose a descriptive index (underlying multiverse variability) that quantifies the robustness of results across alternative ways to analyze the data.
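
    The abstract does not define the underlying multiverse variability index, so the sketch below is only a generic illustration of describing how much effect size estimates vary across alternative, defensible ways to analyze the same data; it is not the authors' index, and the numbers are made up.

        import numpy as np

        # Hypothetical effect size estimates for one study, each obtained under a
        # different combination of analytic choices (the "multiverse" of outcomes).
        multiverse_estimates = np.array([0.10, 0.18, 0.05, 0.22, 0.14, 0.09])

        # Two simple descriptive summaries of multiverse variability: the range and
        # the standard deviation of the estimates across specifications.
        spread_range = multiverse_estimates.max() - multiverse_estimates.min()
        spread_sd = multiverse_estimates.std(ddof=1)
        print(spread_range, spread_sd)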

    Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis

    Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets of primary studies included in the meta-analyses (3.8% of all subsets of meta-analyses published in Psychological Bulletin), because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes was 6). Our findings are in line with publication bias ranging from no bias to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
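
    The paper applies several publication bias methods; as one generic example of such a test (not necessarily the exact variant used), an Egger-type regression checks for small-study effects by regressing effect sizes on their standard errors with inverse-variance weights, as sketched below with made-up data.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical homogeneous subset of primary studies: effect sizes and
        # their sampling variances.
        effects   = np.array([0.12, 0.20, 0.05, 0.18, 0.10, 0.25])
        variances = np.array([0.010, 0.030, 0.008, 0.025, 0.012, 0.040])
        se = np.sqrt(variances)

        # Egger-type regression test: a slope clearly different from zero signals
        # funnel plot asymmetry, which is consistent with publication bias.
        X = sm.add_constant(se)
        fit = sm.WLS(effects, X, weights=1.0 / variances).fit()
        print(fit.params, fit.pvalues)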

    Reporting guidelines for meta-analysis in economics

    Meta‐analysis has become the conventional approach to synthesizing the results of empirical economics research. To further improve the transparency and replicability of the reported results and to raise the quality of meta‐analyses, the Meta‐Analysis of Economics Research Network has updated the reporting guidelines that were published by this journal in 2013. Future meta‐analyses in economics will be expected to follow these updated guidelines or to give valid reasons why a meta‐analysis should deviate from them.