
    Coffee or tea? Examining cross-cultural differences in personality nuances across former colonies of the British Empire

    Cross-cultural comparisons often focus on differences in broad personality traits across countries. However, many cross-cultural studies report differential item functioning, which suggests that considerable group differences are not accounted for by the overarching personality factors. We argue that this may reflect cross-cultural personality differences at a lower level of personality, namely personality nuances. To investigate the degree of cultural similarities and differences between participants of 10 English-speaking countries (of which nine formerly belonged to the British Empire), we scrutinized participants’ personality scores at the domain, facet, and nuance levels of the personality hierarchy. More specifically, we used the responses of 9110 participants on the IPIP-NEO 300-item personality inventory in cross-validated and regularized logistic regressions. Based on the trait domain and facet scores, we were able to identify the country of residence for 60% and 73% of the participants, respectively. By using the nuance level of personality, we correctly identified the nationality of 89% of the participants. This pattern of results explains the lack of measurement invariance in cross-cultural studies. We discuss implications for cross-cultural personality research and whether the high degree of cross-cultural item-level differences compromises the universality of the personality structure.
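    The classification approach described in the abstract, predicting group membership from personality scores with cross-validated, regularized logistic regression, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the study used IPIP-NEO domain, facet, and nuance scores with country of residence as the label, whereas the group sizes, feature count, effect size, and regularization strength below are arbitrary assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_per_group, n_features = 200, 30  # hypothetical: e.g. 30 facet scores

    # Two synthetic "countries" whose mean trait profiles differ slightly.
    X = np.vstack([
        rng.normal(0.0, 1.0, (n_per_group, n_features)),
        rng.normal(0.3, 1.0, (n_per_group, n_features)),
    ])
    y = np.repeat([0, 1], n_per_group)

    # L2-regularized logistic regression (C controls penalty strength),
    # evaluated with 5-fold cross-validated classification accuracy.
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"mean CV accuracy: {scores.mean():.3f}")
    ```

    Cross-validation matters here because, with many fine-grained predictors (300 nuance-level items in the study), an unregularized, unvalidated model could overfit and inflate the identification rate.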

    Establishing Measurement Equivalence Across Computer- and Paper-Based Tests of Spatial Cognition

    Objective: The purpose of the present research is to establish measurement equivalence and test differences in reliability between computer-based and paper-and-pencil-based tests of spatial cognition. Background: Researchers have increasingly adopted computerized test formats, but few attempt to establish equivalence between computer-based and paper-based tests. The mixed results in the literature on the test mode effect, which occurs when performance differs as a function of test medium, highlight the need to test for, rather than assume, measurement equivalence. One domain that has been increasingly computerized and is thus in need of tests of measurement equivalence across test mode is spatial cognition. Method: In the present study, 244 undergraduate students completed two measures of spatial ability (i.e., spatial visualization and cross-sectioning) in either computer-based or paper-and-pencil-based format. Results: Measurement equivalence was not supported across computer-based and paper-based formats for either spatial test. The results also indicated that test administration type affected the types of errors made on the spatial visualization task, which further highlights the conceptual differences between test media. Paper-based tests also demonstrated higher reliability than the computerized versions of the tests. Conclusion: The results of the measurement equivalence tests caution against treating computer- and paper-based versions of spatial measures as equivalent. We encourage subsequent work to demonstrate test mode equivalence prior to the utilization of spatial measures, because current evidence suggests they may not reliably capture the same construct. Application: The assessment of test type differences may influence the medium in which spatial cognition tests are administered.
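    One part of the comparison described above, contrasting reliability across administration formats, can be sketched with Cronbach's alpha computed per format. This is a minimal illustration on synthetic data, not the paper's analysis: the item counts, loadings, and noise levels are invented, and the full measurement-equivalence test would additionally require multi-group confirmatory factor analysis, which is beyond this sketch.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    n, k = 244, 10  # n matches the study's sample size; k is hypothetical
    latent = rng.normal(size=(n, 1))  # shared spatial-ability factor

    # Paper format: items track the common factor with less noise.
    paper = latent + rng.normal(scale=0.8, size=(n, k))
    # Computer format: same items, noisier responses -> lower alpha.
    computer = latent + rng.normal(scale=1.4, size=(n, k))

    print(f"alpha (paper):    {cronbach_alpha(paper):.2f}")
    print(f"alpha (computer): {cronbach_alpha(computer):.2f}")
    ```

    Comparing alphas alone shows a reliability difference but says nothing about whether the two formats measure the same construct; that is why the abstract stresses formal equivalence testing rather than assuming it.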