    Factor Retention in Exploratory Factor Analysis With Missing Data

    Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process. In this study, we therefore evaluated the accuracy of different factor retention criteria (the Factor Forest, parallel analysis based on a principal component analysis, parallel analysis based on the common factor model, and the comparison data approach) in determining the number of factors when data are missing, in combination with different missing data methods, namely an expectation-maximization algorithm called Amelia, predictive mean matching and random forest imputation within the multiple imputation by chained equations (MICE) framework, and pairwise deletion. Data were simulated for different sample sizes, numbers of factors, numbers of manifest variables (indicators), between-factor correlations, missing data mechanisms, and proportions of missing values. In the majority of conditions, and for all factor retention criteria except the comparison data approach, the missing data mechanism had little impact on accuracy, and pairwise deletion performed comparably to the more sophisticated imputation methods. In some conditions, however, especially in small samples and when comparison data were used to determine the number of factors, random forest imputation was preferable to the other missing data methods. Accordingly, depending on the data characteristics and the selected factor retention criterion, choosing an appropriate missing data method is crucial to obtaining a valid estimate of the number of factors to extract.
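
    A minimal R sketch of how two such pipelines can be combined in practice, assuming a data frame dat with missing values; the variable names and settings are placeholders, not the study's exact setup:

        library(mice)    # multiple imputation by chained equations
        library(psych)   # parallel analysis

        ## Pairwise deletion: estimate the correlation matrix from all
        ## pairwise complete observations, then run parallel analysis on it.
        r_pw <- cor(dat, use = "pairwise.complete.obs")
        fa.parallel(r_pw, n.obs = nrow(dat), fa = "fa")

        ## Random forest imputation within MICE: impute m data sets, run
        ## parallel analysis on each completed data set, and check whether
        ## the suggested number of factors agrees across imputations.
        imp <- mice(dat, method = "rf", m = 5, printFlag = FALSE)
        n_fact <- sapply(1:5, function(i) {
          fa.parallel(complete(imp, i), fa = "fa", plot = FALSE)$nfact
        })
        table(n_fact)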

    Factor retention revised: analyzing current practice and developing new methods

    Robustness of factor solutions in exploratory factor analysis

    Introduction to Exploratory Factor Analysis: An Applied Approach

    This chapter provides an overview of exploratory factor analysis (EFA) from an applied perspective. We start with a discussion of general issues and applications, including definitions of EFA and the underlying common factor model, and briefly cover its history and major applications. The most substantive part of the chapter focuses on the six steps of EFA: variable (or indicator) selection (Step 1), computing the variance–covariance matrix (Step 2), factor-extraction methods (Step 3), factor-retention procedures (Step 4), factor-rotation methods (Step 5), and interpretation (Step 6). A data analysis example (with example code for R) runs throughout the chapter, with full details in an online supplement. We hope the chapter provides helpful guidance to applied researchers in the social and behavioral sciences.
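
    A compact, generic R sketch of Steps 2 through 6 using the psych package; dat and all settings are placeholder assumptions, not the chapter's supplement code:

        library(psych)

        ## Step 2: correlation matrix of the selected indicators (Step 1)
        R <- cor(dat, use = "complete.obs")

        ## Step 4: decide how many factors to retain, here via parallel analysis
        k <- fa.parallel(dat, fa = "fa")$nfact

        ## Steps 3 and 5: extract k factors by maximum likelihood and apply
        ## an oblique (oblimin) rotation
        efa <- fa(R, nfactors = k, n.obs = nrow(dat), fm = "ml",
                  rotate = "oblimin")

        ## Step 6: interpret the rotated loadings and factor correlations
        print(efa$loadings, cutoff = 0.30)
        efa$Phi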

    Exploratory Factor Analysis Trees: Evaluating Measurement Invariance Between Multiple Covariates

    Measurement invariance (MI) describes the equivalence of a construct across groups. To meaningfully compare latent factor means between groups, it is crucial to establish MI. Although methods exist that test for MI, they do not perform well when many groups must be compared or when there are no hypotheses about which groups differ. We propose Exploratory Factor Analysis Trees (EFA trees), an extension of SEM trees. EFA trees combine EFA with a recursive partitioning algorithm that can uncover non-invariant subgroups in a data-driven manner: an EFA is estimated and then tested for parameter instability on multiple covariates (e.g., age or education) using a decision-tree-based method. Our goal is to provide a method with which MI can be addressed in the earliest stages of questionnaire development or prior to analyses between groups. We show how EFA trees can be implemented in R using lavaan and partykit. In a simulation, we demonstrate the ability of EFA trees to detect a lack of MI under various conditions. Our online material contains a template script that can be used to apply EFA trees to one's own questionnaire data. Limitations and future research ideas are discussed.
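
    A rough sketch of the general recipe in R (not the authors' template script): partykit::mob() drives the recursive partitioning, and lavaan estimates an EFA in each node via its efa() block syntax. The item names, the two-factor structure, and the covariates age and education are assumptions for illustration.

        library(lavaan)
        library(partykit)

        ## Two-factor EFA block in lavaan syntax: both factors load on all items
        efa_model <- '
          efa("block1")*f1 =~ y1 + y2 + y3 + y4 + y5 + y6
          efa("block1")*f2 =~ y1 + y2 + y3 + y4 + y5 + y6
        '

        ## Node-level fit function in the form mob() expects; lavaan objects
        ## provide the coef(), logLik(), and estfun() methods that the
        ## parameter-instability tests build on.
        efa_fit <- function(y, x = NULL, start = NULL, weights = NULL,
                            offset = NULL, ...) {
          cfa(efa_model, data = as.data.frame(y), rotation = "geomin")
        }

        ## Partition the EFA solution on the covariates listed after the `|`
        tree <- mob(cbind(y1, y2, y3, y4, y5, y6) ~ 1 | age + education,
                    data = dat, fit = efa_fit)
        plot(tree)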

    The comparison data forest: A new comparison data approach to determine the number of factors in exploratory factor analysis

    Developing psychological assessment instruments often involves exploratory factor analysis, during which one must determine the number of factors to retain. Several factor-retention criteria have emerged that can infer this number from empirical data. Most recently, simulation-based procedures such as the comparison data (CD) approach have provided the most accurate estimates of dimensionality. The factor forest, an approach combining extensive data simulation with machine learning modeling, showed even higher accuracy across various common data conditions, but it is computationally very costly. We therefore combine the factor forest and the comparison data approach into the comparison data forest (CDF). In an evaluation study, we compared the new method with the common CD approach and identified optimal parameter settings for both methods under various data conditions. The CDF achieved slightly higher overall accuracy, though there were important differences under certain data conditions: the CD approach tended to underfactor and the CDF tended to overfactor. The two methods were also complementary, in that for the 81.7% of instances in which they identified the same number of factors, this number was correct 96.6% of the time.
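
    For reference, the classic comparison data approach is available in R, for instance via the EFAtools package; the call below is a generic illustration with placeholder data and settings, not the paper's evaluation code:

        library(EFAtools)

        ## Comparison data approach (Ruscio & Roche): simulate comparison data
        ## with increasing numbers of factors and keep adding factors as long
        ## as the empirical eigenvalue profile is reproduced significantly better.
        cd_res <- CD(dat, n_factors_max = 8, N_samples = 500)
        cd_res$n_factors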

    Evaluating Model Fit of Measurement Models in Confirmatory Factor Analysis

    Confirmatory factor analysis (CFA) is often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging: tests of exact model fit may flag negligible deviations, while fit indices cannot be interpreted in absolute terms without specifying thresholds or cutoffs. In this study, we review how model fit in CFA is evaluated in psychological research using fit indices and compare the reported values with established cutoff rules. For this purpose, we collected data on all CFA models published in Psychological Assessment from 2015 to 2020. In addition, we re-evaluate model fit with newly developed methods that derive fit index cutoffs tailored to the respective measurement model and the data characteristics at hand. The results of our review indicate that model fit in many studies must be viewed critically, especially with regard to the usually imposed independent-clusters constraints. Moreover, many studies do not report all results necessary to re-evaluate model fit. We discuss these findings in light of new developments in model fit evaluation and methods for specification search.
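
    A small R sketch of both ingredients using lavaan: conventional fit indices for a fitted CFA, and a simulation-based, model-specific cutoff in the spirit of tailored fit index cutoffs. The model, the data, and the population values (loadings of .7, factor correlations of .3) are placeholder assumptions.

        library(lavaan)

        ## Hypothetical three-factor measurement model (independent clusters)
        model <- '
          f1 =~ y1 + y2 + y3
          f2 =~ y4 + y5 + y6
          f3 =~ y7 + y8 + y9
        '
        fit <- cfa(model, data = dat)
        fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea", "srmr"))

        ## Tailored-cutoff idea: simulate data from a plausible population
        ## model, refit the analysis model, and use the distribution of a fit
        ## index as a benchmark for this specific model and sample size.
        pop <- '
          f1 =~ 0.7*y1 + 0.7*y2 + 0.7*y3
          f2 =~ 0.7*y4 + 0.7*y5 + 0.7*y6
          f3 =~ 0.7*y7 + 0.7*y8 + 0.7*y9
          f1 ~~ 0.3*f2 + 0.3*f3
          f2 ~~ 0.3*f3
        '
        sim_cfi <- replicate(200, {
          d <- simulateData(pop, sample.nobs = nobs(fit))
          fitMeasures(cfa(model, data = d), "cfi")
        })
        quantile(sim_cfi, 0.05)  # model-specific lower benchmark for CFI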

    Factor Retention using Machine Learning with Ordinal Data
