    Meta-analytic structural equation modeling with moderating effects on SEM parameters

    Meta-analytic structural equation modeling (MASEM) is an increasingly popular meta-analytic technique that combines the strengths of meta-analysis and structural equation modeling. MASEM facilitates the evaluation of complete theoretical models (e.g., path models or factor analytic models), accounts for sampling covariance between effect sizes, and provides measures of overall fit of the hypothesized model on meta-analytic data. We propose a novel MASEM method, one-stage MASEM, which is better suited to explaining study-level heterogeneity than existing methods. One-stage MASEM allows researchers to incorporate continuous or categorical moderators into the MASEM, in which any parameter in the structural equation model (e.g., path coefficients and factor loadings) can be modeled by the moderator variable, while the method does not require complete data for the primary studies included in the meta-analysis. We illustrate the new method on two real data sets, evaluate its empirical performance via a computer simulation study, and provide user-friendly R-functions and annotated syntax to assist researchers in applying one-stage MASEM. We close the article by presenting several future research directions.
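    The moderation idea described above can be written out as a small sketch. In this illustrative notation (not taken verbatim from the article), a path coefficient in study \(j\) is modeled as a linear function of a study-level moderator \(x_j\):

    \[
    \beta_j \;=\; \beta_0 \;+\; \beta_1\, x_j \;,
    \]

    where \(\beta_0\) is the coefficient at \(x_j = 0\) and \(\beta_1\) captures how the structural parameter shifts with the moderator; the same form can in principle be applied to any SEM parameter, such as a factor loading.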

    Evaluating cluster-level factor models with lavaan and Mplus

    Background: Researchers frequently use the responses of individuals in clusters to measure cluster-level constructs. Examples are the use of student evaluations to measure teaching quality, or the use of employee ratings of organizational climate. In earlier research, Stapleton and Johnson (2019) provided advice for measuring cluster-level constructs based on a simulation study with inadvertently confounded design factors. We extended their simulation study using both Mplus and lavaan to reveal how their conclusions were dependent on their study conditions.

    Methods: We generated data sets from the so-called configural model and the simultaneous shared-and-configural model, both with and without nonzero residual variances at the cluster level. We fitted models to these data sets using different maximum likelihood estimation algorithms.

    Results: Stapleton and Johnson's results were highly contingent on their confounded design factors. Convergence rates could be very different across algorithms, depending on whether between-level residual variances were zero in the population or in the fitted model. We discovered a worrying convergence issue with the default settings in Mplus, resulting in seemingly converged solutions that have not actually converged. Rejection rates of the normal-theory test statistic were as expected, while rejection rates of the scaled test statistic were seriously inflated in several conditions.

    Conclusions: The defaults in Mplus carry specific risks that are easily checked but not well advertised. Our results also shine a different light on earlier advice on the use of measurement models for shared factors.
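    A configural two-level factor model of the general kind compared in the study can be sketched in lavaan's two-level syntax. This is a minimal illustration, not the authors' actual simulation setup: the item names `y1`–`y4`, the data frame `dat`, and the cluster variable `clusterid` are placeholders.

    ```r
    library(lavaan)

    # Hypothetical configural model: the same factor structure is specified
    # at the within (level 1) and between (level 2) levels, so the
    # cluster-level factor mirrors the individual-level factor.
    model <- '
      level: 1
        fw =~ y1 + y2 + y3 + y4
      level: 2
        fb =~ y1 + y2 + y3 + y4
    '

    # estimator = "MLR" requests robust ML, which yields the scaled test
    # statistic discussed in the Results section; the plain normal-theory
    # statistic comes from estimator = "ML".
    fit <- sem(model, data = dat, cluster = "clusterid", estimator = "MLR")
    summary(fit, fit.measures = TRUE)
    ```

    Whether the between-level residual variances are freely estimated or fixed to zero is exactly the kind of specification choice the abstract reports as driving convergence differences across software and algorithms.
    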