185 research outputs found

    On the exploratory road to unraveling factor loading non-invariance: A new multigroup rotation approach

    Multigroup exploratory factor analysis (EFA) has gained popularity to address measurement invariance for two reasons. Firstly, repeatedly respecifying confirmatory factor analysis (CFA) models strongly capitalizes on chance and using EFA as a precursor works better. Secondly, the fixed zero loadings of CFA are often too restrictive. In multigroup EFA, factor loading invariance is rejected if the fit decreases significantly when fixing the loadings to be equal across groups. To locate the precise factor loading non-invariances by means of hypothesis testing, the factors’ rotational freedom needs to be resolved per group. In the literature, a solution exists for identifying optimal rotations for one group or invariant loadings across groups. Building on this, we present multigroup factor rotation (MGFR) for identifying loading non-invariances. Specifically, MGFR rotates group-specific loadings both to simple structure and between-group agreement, while disentangling loading differences from differences in the structural model (i.e., factor (co)variances).
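
The between-group agreement part of the rotation problem — rotating one group's loadings toward another's — can be sketched with an orthogonal Procrustes rotation. This is an illustration only, not MGFR itself (whose criterion also targets simple structure), and the loading matrices below are made up:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical loadings for a 6-item, 2-factor scale in group 1.
target = np.array([[.8, .0], [.7, .1], [.6, .0],
                   [.0, .8], [.1, .7], [.0, .6]])

# Group 2 has the same loadings up to an (unknown) orthogonal rotation.
angle = np.pi / 6
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
group2 = target @ R

# Recover the rotation that best aligns group 2 with group 1
# (minimizes the Frobenius norm of group2 @ R_hat - target).
R_hat, _ = orthogonal_procrustes(group2, target)
aligned = group2 @ R_hat

print(np.allclose(aligned, target, atol=1e-8))  # True: loadings agree after rotation
```

Because rotational freedom leaves group-specific loadings identified only up to such a rotation, differences between unrotated group solutions cannot be read as non-invariance; they must first be aligned.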

    Mixture multigroup factor analysis for unraveling factor loading noninvariance across many groups

    Psychological research often builds on between-group comparisons of (measurements of) latent variables; for instance, to evaluate cross-cultural differences in neuroticism or mindfulness. A critical assumption in such comparative research is that the same latent variable(s) are measured in exactly the same way across all groups (i.e., measurement invariance). Otherwise, one would be comparing apples and oranges. Nowadays, measurement invariance is often tested across a large number of groups by means of multigroup factor analysis. When the assumption is untenable, one may compare group-specific measurement models to pinpoint sources of noninvariance, but the number of pairwise comparisons increases quadratically with the number of groups. This makes it hard to disentangle invariances from noninvariances and to determine for which groups they apply, and it elevates the chances of falsely detecting noninvariance. An intuitive solution is clustering the groups into a few clusters based on the measurement model parameters. Therefore, we present mixture multigroup factor analysis (MMG-FA), which clusters the groups according to a specific level of measurement invariance. Specifically, in this article, clusters of groups with metric invariance (i.e., equal factor loadings) are obtained by making the loadings cluster-specific, whereas other parameters (i.e., intercepts, factor (co)variances, residual variances) are still allowed to differ between groups within a cluster. MMG-FA was found to perform well in an extensive simulation study, although a larger within-group sample size is required for recovering more subtle loading differences. Its empirical value is illustrated with data on the social value of emotions and data on emotional acculturation.
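
The clustering intuition behind MMG-FA can be loosely illustrated as follows. MMG-FA itself fits a mixture model to the raw data with cluster-specific loadings; the sketch below instead clusters hypothetical group-specific loading estimates directly with k-means, which conveys the idea without being the actual algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical loadings for 10 groups on a 4-item, 1-factor scale:
# groups 0-4 share one loading pattern, groups 5-9 another.
pattern_a = np.array([.8, .7, .6, .5])
pattern_b = np.array([.5, .6, .7, .8])
rng = np.random.default_rng(1)
loadings = np.vstack([pattern_a + rng.normal(0, .02, 4) for _ in range(5)] +
                     [pattern_b + rng.normal(0, .02, 4) for _ in range(5)])

# Cluster the groups on their (vectorized) loading matrices: groups within a
# cluster are treated as metrically invariant, so only a few cluster-level
# comparisons remain instead of 10 * 9 / 2 = 45 pairwise ones.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(loadings)
print(labels)
```

With 10 groups there are already 45 pairwise model comparisons; clustering reduces this to comparing a handful of cluster-specific measurement models.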

    How to explore within‑person and between‑person measurement model differences in intensive longitudinal data with the R package lmfa

    Intensive longitudinal data (ILD) have become popular for studying within-person dynamics in psychological constructs (or between-person differences therein). Before investigating the dynamics, it is crucial to examine whether the measurement model (MM) is the same across subjects and time and, thus, whether the measured constructs have the same meaning. If the MM differs (e.g., because of changes in item interpretation or response styles), observations cannot be validly compared. Exploring differences in the MM for ILD can be done with latent Markov factor analysis (LMFA), which classifies observations based on the underlying MM (for many subjects and time points simultaneously) and thus shows which observations are comparable. However, the complexity of the method or the fact that no open-source software for LMFA existed until now may have hindered researchers from applying the method in practice. In this article, we provide a step-by-step tutorial for the new user-friendly software package lmfa, which allows researchers to easily perform LMFA in the freely available software R to investigate MM differences in their own ILD.

    Scale length does matter: Recommendations for measurement invariance testing with categorical factor analysis and item response theory approaches

    In social sciences, the study of group differences concerning latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. In order to compare these constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of scale- and item-level approaches based on multiple group categorical confirmatory factor analysis (MG-CCFA) and multiple group item response theory (MG-IRT) in testing MI with ordinal data. In general, the results of the simulation studies showed that MG-CCFA-based approaches outperformed MG-IRT-based approaches when testing MI at the scale level, whereas, at the item level, the best-performing approach depends on the tested parameter (i.e., loadings or thresholds). That is, when testing loadings equivalence, the likelihood ratio test provided the best trade-off between true positive rate and false positive rate, whereas, when testing thresholds equivalence, the chi-square test outperformed the other testing strategies. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, seemed to depend largely on the length of the scale, especially when MI was tested at the item level. General caution is recommended when using these measures, especially when MI is tested for each item individually.
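
The likelihood ratio test mentioned here is a chi-square difference test between nested models (e.g., loadings constrained equal across groups vs. freely estimated). A generic sketch with made-up log-likelihoods, assuming constraining the loadings costs 6 parameters:

```python
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_full, df_diff):
    """Likelihood-ratio (chi-square difference) test for nested models."""
    stat = 2 * (loglik_full - loglik_restricted)  # asymptotically chi-square
    p = chi2.sf(stat, df_diff)                    # upper-tail probability
    return stat, p

# Hypothetical fit results for the constrained and free models.
stat, p = lr_test(loglik_restricted=-1520.4, loglik_full=-1498.1, df_diff=6)
print(round(stat, 1), p < .05)  # significant drop in fit -> reject invariance
```

A significant statistic means the equality constraint worsens fit more than chance alone would explain, so the tested parameters are flagged as non-invariant.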

    Latent Markov latent trait analysis for exploring measurement model changes in intensive longitudinal data

    Drawing inferences about dynamics of psychological constructs from intensive longitudinal data requires the measurement model (MM)-indicating how items relate to constructs-to be invariant across subjects and time-points. When assessing subjects in their daily life, however, there may be multiple MMs, for instance, because subjects differ in their item interpretation or because the response style of (some) subjects changes over time. The recently proposed "latent Markov factor analysis" (LMFA) evaluates (violations of) measurement invariance by classifying observations into latent "states" according to the MM underlying these observations such that MMs differ between states but are invariant within one state. However, LMFA is limited to normally distributed continuous data and estimates may be inaccurate when applying the method to ordinal data (e.g., from Likert items) with skewed responses or few response categories. To enable researchers and health professionals with ordinal data to evaluate measurement invariance, we present "latent Markov latent trait analysis" (LMLTA), which builds upon LMFA but treats responses as ordinal. Our application shows differences in MMs of adolescents' affective well-being in different social contexts, highlighting the importance of studying measurement invariance for drawing accurate inferences for psychological science and practice and for further understanding dynamics of psychological constructs.

    Awareness is bliss:How acquiescence affects exploratory factor analysis

    Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurement of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most widely used method to evaluate these psychometric properties, where the number of measured constructs (i.e., factors) is assessed, and, afterwards, rotational freedom is resolved to interpret these factors. This study assessed the effects of an acquiescence response style (ARS) on EFA for unidimensional and multidimensional (un)balanced scales. Specifically, we evaluated (i) whether ARS is captured as an additional factor, (ii) the effect of different rotation approaches on the recovery of the content and ARS factors, and (iii) the effect of extracting the additional ARS factor on the recovery of factor loadings. ARS was often captured as an additional factor in balanced scales when it was strong. For these scales, ignoring (i.e., not extracting) this additional ARS factor, or rotating to simple structure when extracting it, harmed the recovery of the original MM by introducing bias in loadings and cross-loadings. These issues were avoided by using informed rotation approaches (i.e., target rotation), where (part of) the MM is specified a priori. Not extracting the additional ARS factor did not affect the loading recovery in unbalanced scales. Researchers should consider the potential presence of an additional ARS factor when assessing the psychometric properties of balanced scales, and use informed rotation approaches when suspecting that an additional factor is an ARS factor.
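
Why ARS surfaces as an extra factor in balanced scales can be shown with a small model-implied covariance calculation. The loading values below are hypothetical: content loadings alternate in sign (balanced scale), while acquiescence pushes all raw responses up, i.e., loads positively on every item:

```python
import numpy as np

# Hypothetical 6-item balanced scale: content loadings alternate in sign,
# the ARS factor loads +.4 on every item, unique variances are .3.
content = np.array([.7, .7, .7, -.7, -.7, -.7])[:, None]
ars = np.full((6, 1), .4)
uniq = np.eye(6) * .3

sigma_no_ars = content @ content.T + uniq
sigma_ars = content @ content.T + ars @ ars.T + uniq

def n_large_eigs(cov, cutoff=.5):
    # Count eigenvalues clearly above the unique-variance floor.
    return int((np.linalg.eigvalsh(cov) > cutoff).sum())

print(n_large_eigs(sigma_no_ars), n_large_eigs(sigma_ars))  # 1 2
```

Because the balanced content loadings sum to zero, the constant ARS loading vector is orthogonal to them, so ARS adds a genuinely separate dimension to the covariance structure, which extraction then picks up as an additional factor.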

    Continuous-time Latent Markov Factor Analysis for exploring measurement model changes across time

    Drawing valid inferences about daily or long-term dynamics of psychological constructs (e.g., depression) requires the measurement model (indicating which constructs are measured by which items) to be invariant within persons over time. However, it might be affected by time- or situation-specific artifacts (e.g., response styles) or substantive changes in item interpretation. To efficiently evaluate longitudinal measurement invariance, and violations thereof, we proposed latent Markov factor analysis (LMFA), which clusters observations based on their measurement model into separate states, indicating which measures are validly comparable. LMFA is, however, tailored to “discrete-time” data, where measurement intervals are equal, which is often not the case in longitudinal data. In this paper, we extend LMFA to accommodate unequally spaced intervals. The so-called “continuous-time” (CT) approach considers the measurements as snapshots of continuously evolving processes. A simulation study compares CT-LMFA parameter estimation to its discrete-time counterpart and a depression data application shows the advantages of CT-LMFA.
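
The continuous-time Markov machinery that handles unequal intervals can be sketched generically (this illustrates the standard CT Markov result, not CT-LMFA's full estimation; the intensity values are made up). Given a transition intensity matrix Q, the transition probabilities over an interval of length t are P(t) = expm(Q·t), so every measurement gap gets its own probability matrix:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix Q for two measurement-model states:
# rows sum to zero, off-diagonals are instantaneous transition rates.
Q = np.array([[-0.2,  0.2],
              [ 0.1, -0.1]])

# Transition probabilities for unequally spaced intervals t.
for t in (0.5, 1.0, 3.0):
    P = expm(Q * t)  # rows are proper probability distributions
    print(t, P.round(3))  # longer gaps -> probabilities closer to equilibrium
```

The matrix exponential guarantees consistency across gaps: P(s + t) = P(s) @ P(t), which discrete-time models with a single fixed step cannot deliver when intervals vary.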

    Mixture simultaneous factor analysis for capturing differences in latent variables between higher level units of multilevel data

    Given multivariate data, many research questions pertain to the covariance structure: whether and how the variables (for example, personality measures) covary. Exploratory factor analysis (EFA) is often used to look for latent variables that may explain the covariances among variables; for example, the Big Five personality structure. In case of multilevel data, one may wonder whether the same covariance (factor) structure holds for each so-called ‘data block’ (containing data of one higher-level unit). For instance, is the Big Five personality structure found in each country or do cross-cultural differences exist? The well-known multigroup EFA framework falls short in answering such questions, especially for numerous groups/blocks. We introduce mixture simultaneous factor analysis (MSFA), performing a mixture model clustering of data blocks based on their factor structure. A simulation study shows excellent results with respect to parameter recovery, and an empirical example is included to illustrate the value of MSFA.