47 research outputs found

    Meta-analyzing partial correlation coefficients using Fisher's z transformation

    The partial correlation coefficient (PCC) is used to quantify the linear relationship between two variables while controlling for other variables. Researchers frequently synthesize PCCs in a meta-analysis, but two assumptions of the common equal-effect and random-effects meta-analysis models are then by definition violated. First, the sampling variance of the PCC cannot be assumed to be known, because it is a function of the PCC itself. Second, the sampling distribution of each primary study's PCC is not normal, since PCCs are bounded between -1 and 1. I advocate applying Fisher's z transformation to PCCs, analogous to its use for Pearson correlation coefficients, because the Fisher's z transformed PCC is independent of its sampling variance and its sampling distribution more closely follows a normal distribution. Reproducing a simulation study by Stanley and Doucouliagos and adding meta-analyses based on Fisher's z transformed PCCs shows that meta-analyzing Fisher's z transformed PCCs yields lower bias and root mean square error than meta-analyzing PCCs. Hence, meta-analyzing Fisher's z transformed PCCs is a viable alternative to meta-analyzing PCCs, and I recommend accompanying any meta-analysis based on PCCs with one based on Fisher's z transformed PCCs to assess the robustness of the results.
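
    As a rough illustration of the approach described above (a minimal sketch with made-up numbers, not the paper's code or simulation), the transformation and back-transformation can be carried out with base R plus the metafor package; the variance formula 1/(n - s - 3) for a PCC controlling for s variables is the standard large-sample approximation assumed here.

        # Sketch: meta-analyzing Fisher's z transformed PCCs (hypothetical data)
        library(metafor)

        pcc <- c(0.21, 0.35, 0.10, 0.28)  # partial correlations from primary studies
        n   <- c(80, 150, 60, 200)        # sample sizes
        s   <- c(2, 2, 3, 2)              # number of variables controlled for

        zi <- atanh(pcc)                  # Fisher's z = 0.5 * log((1 + r) / (1 - r))
        vi <- 1 / (n - s - 3)             # approximate sampling variance, independent of the PCC

        res <- rma(yi = zi, vi = vi, method = "REML")  # random-effects meta-analysis

        # Back-transform the pooled estimate and its confidence interval to the PCC metric
        tanh(c(res$b, res$ci.lb, res$ci.ub))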

    Analyzing data of a Multilab replication project with individual participant data meta-analysis: A tutorial

    Multilab replication projects such as Registered Replication Reports (RRR) and Many Labs projects are used to replicate an effect in different labs. Data of these projects are usually analyzed using conventional meta-analysis methods. This is not the best approach, because it does not make optimal use of the available data: summary data rather than participant data are analyzed. I propose to analyze data of multilab replication projects with individual participant data (IPD) meta-analysis, where the participant data are analyzed directly. The prominent advantages of IPD meta-analysis are that it generally has larger statistical power to detect moderator effects and allows drawing conclusions at the participant and lab level. However, a disadvantage is that IPD meta-analysis is more complex than conventional meta-analysis. In this tutorial, I illustrate IPD meta-analysis using the RRR by McCarthy and colleagues, and I provide R code and recommendations to help researchers apply these methods.
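
    The one-stage approach described above can be sketched as a mixed-effects model fitted to the participant-level data; the snippet below is my own minimal illustration with simulated data (not the RRR data or the tutorial's code), using lme4 as an assumed implementation.

        # Sketch: one-stage IPD meta-analysis of a multilab replication (simulated data)
        library(lme4)

        set.seed(1)
        labs      <- 16
        lab_int   <- rnorm(labs, 0, 0.10)   # lab-specific intercept deviations
        lab_slope <- rnorm(labs, 0, 0.10)   # lab-specific deviations of the effect
        dat <- expand.grid(lab = factor(1:labs), id = 1:50, condition = c(0, 1))
        dat$outcome <- lab_int[dat$lab] +
          (0.2 + lab_slope[dat$lab]) * dat$condition +   # true effect of 0.2 plus lab deviation
          rnorm(nrow(dat))                               # participant-level noise

        # Fixed effect of 'condition' = overall effect across labs; the random slope
        # variance quantifies between-lab heterogeneity of that effect.
        fit <- lmer(outcome ~ condition + (1 + condition | lab), data = dat)
        summary(fit)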

    The effect of publication bias on the Q test and assessment of heterogeneity

    One of the main goals of meta-analysis is to test for and estimate the heterogeneity of effect sizes. We examined the effect of publication bias on the Q test and assessments of heterogeneity as a function of true heterogeneity, publication bias, true effect size, number of studies, and variation of sample sizes. The present study has two main contributions and is relevant to all researchers conducting meta-analyses. First, we show when and how publication bias affects the assessment of heterogeneity. The expected values of the heterogeneity measures H² and I² were analytically derived, and the power and Type I error rate of the Q test were examined in a Monte Carlo simulation study. Our results show that the effect of publication bias on the Q test and assessment of heterogeneity is large, complex, and nonlinear. Publication bias can both dramatically decrease and increase heterogeneity in true effect size, particularly if the number of studies is large and the population effect size is small. We therefore conclude that the Q test of homogeneity and the heterogeneity measures H² and I² are generally not valid when publication bias is present. Our second contribution is that we introduce a web application, Q-sense, which can be used to determine the impact of publication bias on the assessment of heterogeneity within a given meta-analysis and to assess the robustness of the meta-analytic estimate to publication bias. Furthermore, we apply Q-sense to two published meta-analyses, showing how publication bias can result in invalid estimates of effect size and heterogeneity.
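
    One side of the distortion described above (publication bias masking true heterogeneity) can be reproduced in a few lines; this is my own toy simulation, not the analytical derivations or the Q-sense application from the paper.

        # Sketch: selecting only significant studies can hide true heterogeneity
        set.seed(1)
        k   <- 200                         # primary studies before selection
        vi  <- rep(2 / 50, k)              # approx. variance of a standardized mean difference (n = 50 per group)
        mu  <- 0.2; tau2 <- 0.05           # heterogeneous true effects
        yi  <- rnorm(k, mean = rnorm(k, mu, sqrt(tau2)), sd = sqrt(vi))

        heterogeneity <- function(yi, vi) {
          wi <- 1 / vi
          Q  <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)
          df <- length(yi) - 1
          c(Q = Q, H2 = Q / df, I2 = max(0, (Q - df) / Q),
            p = pchisq(Q, df = df, lower.tail = FALSE))
        }

        heterogeneity(yi, vi)              # full literature: the Q test detects heterogeneity

        published <- pnorm(yi / sqrt(vi), lower.tail = FALSE) < .025   # only significant studies survive
        heterogeneity(yi[published], vi[published])                    # heterogeneity is largely masked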

    Do behavioral observations make people catch the goal? A meta-analysis on goal contagion

    Goal contagion is a social-cognitive approach to understanding how other people's behavior influences one's goal pursuit: an observation of goal-directed behavior leads to an automatic inference and activation of the goal, which can then be adopted and pursued by the observer. We conducted a meta-analysis focusing on experimental studies with a goal condition, depicting goal-directed behavior, and a control condition. We searched four databases (PsycINFO, Web of Science, ScienceDirect, and JSTOR) and the citing literature on Google Scholar, and eventually included 48 effects from published studies, unpublished studies, and registered reports based on 4,751 participants. The meta-analytic summary effect was small, g = 0.30, 95% CI [0.21, 0.40], τ² = 0.05, 95% CI [0.03, 0.13], implying that goal contagion may occur when goal-directed behavior is observed, compared to when the goal is not perceived in the behavior. However, the summary effect appears to be inflated by the current publication system: several publication-bias tests suggest the effect could be about half the size (e.g., selection model: g = 0.15, 95% CI [-0.02, 0.32]). Further, we could not detect any potential moderators (such as the presentation of the manipulation or the contrast of the control condition). We suggest that future research on goal contagion make use of open science practices to advance research in this domain.
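
    The selection-model adjustment mentioned above can be illustrated with metafor's step-function selection model; the numbers below are placeholders, not the 48 goal-contagion effects, and the meta-analysis's exact selection-model specification may differ.

        # Sketch: random-effects summary of Hedges' g values plus a one-cutpoint selection model
        library(metafor)

        gi <- c(0.45, 0.10, 0.38, 0.05, 0.52, 0.22, 0.41, 0.18)   # hypothetical effect sizes
        vi <- c(0.04, 0.06, 0.03, 0.05, 0.04, 0.07, 0.03, 0.06)   # their sampling variances

        res <- rma(yi = gi, vi = vi, method = "REML")             # summary g and tau^2
        sel <- selmodel(res, type = "stepfun", steps = 0.025)     # selection at p = .025 (one-sided)
        summary(sel)                                              # bias-adjusted summary effect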

    Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis

    Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets (3.8% of all subsets of meta-analyses published in Psychological Bulletin) of primary studies included in meta-analyses, because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes was 6). Our findings are consistent with publication bias ranging from no bias to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
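
    The workflow described above (applying publication bias tests only to subsets that appear homogeneous) might be sketched as follows; the data are hypothetical, and the specific tests shown (Egger-type regression and rank-correlation tests from metafor) are stand-ins for the battery of methods used in the paper.

        # Sketch: test homogeneity first, then probe a homogeneous subset for publication bias
        library(metafor)

        yi <- c(0.12, 0.20, 0.05, 0.18, 0.25, 0.09)   # hypothetical primary effect sizes
        vi <- c(0.02, 0.03, 0.02, 0.04, 0.03, 0.02)   # their sampling variances

        res <- rma(yi = yi, vi = vi, method = "REML")
        if (res$QEp > .05) {          # proceed only if the Q test does not reject homogeneity
          print(regtest(res))         # Egger-type regression test for small-study effects
          print(ranktest(res))        # rank-correlation test
        }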

    A many-analysts approach to the relation between religiosity and well-being

    The relation between religiosity and well-being is one of the most researched topics in the psychology of religion, yet the directionality and robustness of the effect remain debated. Here, we adopted a many-analysts approach to assess the robustness of this relation based on a new cross-cultural dataset (N = 10,535 participants from 24 countries). We recruited 120 analysis teams to investigate (1) whether religious people self-report higher well-being, and (2) whether the relation between religiosity and self-reported well-being depends on perceived cultural norms of religion (i.e., whether it is considered normal and desirable to be religious in a given country). In a two-stage procedure, the teams first created an analysis plan and then executed their planned analysis on the data. For the first research question, all but 3 teams reported positive effect sizes with credible/confidence intervals excluding zero (median reported β = 0.120). For the second research question, this was the case for 65% of the teams (median reported β = 0.039). While most teams applied (multilevel) linear regression models, there was considerable variability in the choice of items used to construct the independent variables, the dependent variable, and the included covariates.
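
    The modal analysis strategy reported above can be sketched as a multilevel regression with a cross-level interaction; the snippet uses simulated data and assumed variable names, not the actual dataset or any team's code.

        # Sketch: multilevel model for both research questions (simulated data)
        library(lme4)

        set.seed(1)
        countries <- 24
        country_norm <- rnorm(countries)                  # perceived cultural norm per country
        dat <- data.frame(country = factor(rep(1:countries, each = 400)))
        dat$cultural_norm <- country_norm[dat$country]
        dat$religiosity   <- rnorm(nrow(dat))
        dat$wellbeing     <- 0.12 * dat$religiosity +                      # research question 1: main effect
          0.04 * dat$religiosity * dat$cultural_norm +                     # research question 2: interaction
          rnorm(countries, sd = 0.2)[dat$country] + rnorm(nrow(dat))       # country and participant noise

        fit <- lmer(wellbeing ~ religiosity * cultural_norm + (1 + religiosity | country),
                    data = dat)
        summary(fit)   # the interaction term corresponds to research question 2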

    Many Labs 2: Investigating Variation in Replicability Across Samples and Settings

    We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite that of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.

    Bayesian hypothesis testing and estimation under the marginalized random-effects meta-analysis model

    Meta-analysis methods are used to synthesize the results of multiple studies on the same topic. The most frequently used statistical model in meta-analysis is the random-effects model, containing parameters for the overall effect, the between-study variance in the primary studies' true effect sizes, and random effects for the study-specific effects. We propose Bayesian hypothesis testing and estimation methods using the marginalized random-effects meta-analysis (MAREMA) model, where the study-specific true effects are regarded as nuisance parameters and are integrated out of the model. We propose to use a flat prior distribution on the overall effect size in case of estimation and a proper unit information prior for the overall effect size in case of hypothesis testing. For the between-study variance (which can attain negative values under the MAREMA model), a proper uniform prior is placed on the proportion of total variance that can be attributed to between-study variability. Bayes factors are used for hypothesis testing, allowing tests of point and one-sided hypotheses. The proposed methodology has several attractive properties. First, the MAREMA model encompasses models with a zero, negative, and positive between-study variance, which enables testing a zero between-study variance because it is no longer a boundary problem. Second, the methodology is suitable for default Bayesian meta-analyses, as it requires no prior information about the unknown parameters. Third, the proposed Bayes factors can be used even in the extreme case when only two studies are available, because Bayes factors are not based on large sample theory. We illustrate the developed methods by applying them to two meta-analyses and introduce easy-to-use software in the R package BFpack to compute the proposed Bayes factors.
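
    The marginalized likelihood at the core of the MAREMA model can be written down directly; the grid approximation below is my own illustration of the model as described above (flat prior on the overall effect, uniform prior on the proportion of total variance attributable to between-study variability, with that proportion defined here relative to the average sampling variance), not the BFpack implementation or the paper's exact parameterization.

        # Sketch: grid approximation of the MAREMA posterior (hypothetical data)
        yi <- c(0.31, 0.12, 0.45, 0.28, 0.05)   # observed effect sizes
        vi <- c(0.05, 0.04, 0.06, 0.05, 0.04)   # their sampling variances
        vbar <- mean(vi)

        # rho = tau2 / (tau2 + vbar); negative rho corresponds to a negative between-study variance
        rho <- seq(-0.95, 0.95, length.out = 200)
        mu  <- seq(-1, 1, length.out = 200)

        log_marema <- function(mu, rho) {
          tau2 <- rho / (1 - rho) * vbar        # back-transform rho to tau2
          v    <- vi + tau2                     # marginal variance after integrating out the random effects
          if (any(v <= 0)) return(-Inf)         # marginal variances must remain positive
          sum(dnorm(yi, mean = mu, sd = sqrt(v), log = TRUE))
        }

        logpost <- outer(mu, rho, Vectorize(log_marema))  # flat prior on mu, uniform prior on rho
        post    <- exp(logpost - max(logpost))
        post_mu <- rowSums(post) / sum(post)              # marginal posterior of the overall effect
        mu[which.max(post_mu)]                            # posterior mode of mu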