
    Personality differentiation by cognitive ability: An application of the moderated factor model

    The personality differentiation hypothesis holds that at higher levels of intellectual ability, personality structure is more differentiated. We tested differentiation at the primary and global factor levels in the US standardisation sample of the 16PF5 (n = 10,261; 5,124 male; mean age = 32.69 years, SD = 12.83 years). We used a novel combined item response theory and moderated factor model approach that overcomes many of the limitations of previous tests. We found moderation of latent factor variances in five of the fifteen primary personality traits of the 16PF. At the global factor level, we found no evidence of personality differentiation in Extraversion, Self-Control, or Independence. We found evidence of moderated factor loadings consistent with personality differentiation for Anxiety, and of moderated factor loadings consistent with anti-differentiation for Tough-Mindedness. As differentiation was restricted to a few personality factors with small effect sizes, we conclude that there is only very limited support for the personality differentiation hypothesis.
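
    As a rough sketch of the moderated factor model idea (an illustration of the general approach, not necessarily the parameterisation used in this paper), factor loadings and latent variances can be written as functions of ability g:

        \[ \lambda_j(g) = \lambda_{0j} + \lambda_{1j}\, g, \qquad \sigma^{2}_{\eta}(g) = \sigma^{2}_{0}\, e^{\beta g} \]

    Under this sketch, differentiation would show up as negative \lambda_{1j} or \beta (weaker loadings or shrinking common variance at higher ability), and anti-differentiation as the reverse.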

    Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results

    Background: The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings: We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions: Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.
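
    For readers unfamiliar with this kind of reporting check, the sketch below shows how a reported p-value can be recomputed from the reported t statistic and degrees of freedom; the numbers are hypothetical and this is not the authors' actual procedure, only a minimal illustration assuming scipy is available.

        # Minimal sketch of a reporting-consistency check (hypothetical values).
        from scipy import stats

        reported_t, reported_df, reported_p = 2.13, 28, 0.04  # made-up reported result

        # Two-sided p-value implied by the reported t and df.
        recomputed_p = 2 * stats.t.sf(abs(reported_t), df=reported_df)

        # Flag a discrepancy larger than two-decimal rounding.
        inconsistent = abs(recomputed_p - reported_p) > 0.005
        print(f"recomputed p = {recomputed_p:.3f}; inconsistent = {inconsistent}")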

    Nonsymbolic and symbolic magnitude comparison skills as longitudinal predictors of mathematical achievement

    What developmental roles do nonsymbolic (e.g., dot arrays) and symbolic (i.e., Arabic numerals) magnitude comparison skills play in children's mathematics? We assessed a large sample in kindergarten, grade 1 and grade 2 on two well-known nonsymbolic and symbolic magnitude comparison measures. We also assessed children's initial IQ and developing Working Memory (WM) capacities. Results demonstrated that symbolic and nonsymbolic comparison had different developmental trajectories; the former underwent larger developmental improvements. Both skills were longitudinal predictors of children's future mathematical achievement above and beyond IQ and WM. Nonsymbolic comparison was moderately predictive only in kindergarten. Symbolic comparison, however, was a robust and consistent predictor of future mathematics across all three years. It was a stronger predictor than nonsymbolic comparison, and its predictive power at the early stages was even comparable to that of IQ. Furthermore, the present results raise several methodological implications regarding the role of different types of magnitude comparison measures.
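
    A minimal sketch of the kind of analysis implied by "above and beyond IQ and WM" (the file and column names are hypothetical; this is not the authors' analysis script):

        # Does kindergarten magnitude comparison add to the prediction of later
        # mathematics once IQ and working memory are controlled? (Illustrative only.)
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("longitudinal_math.csv")  # hypothetical data file

        baseline = smf.ols("math_grade2 ~ iq + wm", data=df).fit()
        full = smf.ols("math_grade2 ~ iq + wm + symbolic_k + nonsymbolic_k", data=df).fit()

        # Incremental variance explained by the comparison tasks.
        print(f"Delta R^2 = {full.rsquared - baseline.rsquared:.3f}")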

    Dependence of gene-by-environment interactions (GxE) on scaling: Comparing the use of sum scores, transformed sum scores and IRT scores for the phenotype in tests of GxE interaction

    Estimates of gene–environment interactions (GxE) in behavior genetic models depend on how a phenotype is scaled. Inappropriately scaled phenotypes result in biased estimates of GxE and can sometimes even suggest GxE in the direction opposite to its true direction. Previously proposed solutions are mathematically complex, computationally demanding and may prove impractical for the substantive researcher. We therefore evaluated two simple-to-use alternatives: (1) straightforward non-linear transformation of sum scores and (2) factor scores from an appropriate item response theory (IRT) model. Within Purcell’s (2002) GxM framework, both alternatives provided less biased parameter estimates and better false and true positive rates than raw sum scores. These approaches are therefore recommended over raw sum scores in tests of GxE. Circumstances under which IRT factor scores versus transformed sum scores should be preferred are discussed.
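
    For orientation, Purcell's (2002) moderation framework expresses the phenotypic variance conditional on a moderator M roughly as

        \[ \mathrm{Var}(P \mid M) = (a + \beta_X M)^2 + (c + \beta_Y M)^2 + (e + \beta_Z M)^2 \]

    so a nonlinear mapping between the raw sum score and the underlying phenotype can inflate, mask, or even reverse the estimated \beta terms, which is why the scaling of P matters.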

    Are Rumination and Worry Two Sides of the Same Coin? A Structural Equation Modelling Approach

    Worry and rumination are two types of Repetitive Negative Thinking (RNT) that have been shown to be related to the development and maintenance of emotional problems. Whereas these two forms of RNT have traditionally been regarded as distinct and differentially related to psychopathology, researchers have recently argued that worry and rumination share the same process and show a very similar relationship to different forms of psychopathology. In a series of three studies, we employed a structural equation modelling approach to compare these competing hypotheses. Results showed that a bi-factor model (representing RNT by one latent factor with two uncorrelated method factors) provided a better fit to the data than a two-factor model (with worry and rumination represented by separate factors). In addition, the shared variance within the bi-factor model fully accounted for changes in symptom levels of depression and anxiety in two prospective studies. These findings support a transdiagnostic account of RNT. Implications for theory, measurement and clinical practice are discussed.
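
    Schematically (a sketch of the competing specifications, not the exact fitted models), the bi-factor model lets every item load on a general RNT factor plus one method factor, with all factors uncorrelated:

        \[ y_{ij} = \lambda_{Gj}\, G_i + \lambda_{Wj}\, W_i + \varepsilon_{ij} \ \text{(worry items)}, \qquad y_{ij} = \lambda_{Gj}\, G_i + \lambda_{Rj}\, R_i + \varepsilon_{ij} \ \text{(rumination items)} \]

    whereas the two-factor alternative posits correlated worry and rumination factors and no general factor.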

    Nonsymbolic and symbolic magnitude comparison skills as longitudinal predictors of mathematical achievement

    What developmental roles do nonsymbolic (e.g., dot arrays) and symbolic (i.e., Arabic numerals) magnitude comparison skills play in children’s mathematics? In the literature, one notices several gaps and contradictory findings. We assessed a large sample in kindergarten, grade 1 and 2 on two well-known nonsymbolic and symbolic magnitude comparison measures. We also assessed children’s initial IQ and developing Working Memory (WM) capacities. Results demonstrated that symbolic and nonsymbolic comparison had different developmental trajectories; the former underwent larger developmental improvements. Both skills were important longitudinal predictors of children’s future mathematical achievement above and beyond IQ and WM. Nonsymbolic comparison was predictive in kindergarten. Symbolic comparison, however, was consistently a stronger predictor of future mathematics than nonsymbolic comparison, and its predictive power at the early stages was even comparable to that of IQ. Furthermore, the results raise methodological implications regarding the role of different types of magnitude comparison measures.

    Detecting Specific Genotype by Environment Interactions Using Marginal Maximum Likelihood Estimation in the Classical Twin Design

    Considerable effort has been devoted to the analysis of genotype by environment (G × E) interactions in various phenotypic domains, such as cognitive abilities and personality. In many studies, environmental variables were observed (measured) variables. In the case of an unmeasured environment, van der Sluis et al. (2006) proposed to study heteroscedasticity in the factor model using only MZ twin data. This method is closely related to the Jinks and Fulker (1970) test for G × E, but slightly more powerful. In this paper, we identify four challenges to the investigation of G × E in general, and specifically to the heteroscedasticity approaches of Jinks and Fulker and van der Sluis et al. We propose extensions of these approaches intended to solve these problems. These extensions comprise: (1) including DZ twin data, (2) modeling both A × E and A × C interactions, and (3) extending the univariate approach to a multivariate approach. By means of simulations, we study the power of the univariate method to detect the different G × E interactions in varying situations. In addition, we study how well we can distinguish between A × E, A × C, and C × E. We apply a multivariate version of the extended model to an empirical data set on cognitive abilities.
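
    In outline, the Jinks and Fulker (1970) heteroscedasticity test regresses the absolute within-pair difference of MZ twins on the pair mean,

        \[ \lvert P_{1j} - P_{2j} \rvert = \beta_0 + \beta_1\, \frac{P_{1j} + P_{2j}}{2} + \epsilon_j, \]

    where a nonzero \beta_1 indicates that the within-pair (environmental) variance depends on the pair's familial level, the signature of G × E (or C × E) when the environment is unmeasured; this is only a schematic statement of the test, not the full model used in the paper.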

    Modeling Nonlinear Conditional Dependence Between Response Time and Accuracy

    With tests presented in computerized form, the most commonly available process variable for analysis is response time. Psychometric models have been developed for joint modeling of response accuracy and response time in which response time is an additional source of information about ability and about the underlying response processes. While traditional models assume conditional independence between response time and accuracy given ability and speed latent variables (van der Linden, 2007), multiple recent studies (De Boeck and Partchev, 2012; Meng et al., 2015; Bolsinova et al., 2017a,b) have shown that violations of conditional independence are not rare and that there is more to learn from the conditional dependence between response time and accuracy. When it comes to conditional dependence between time and accuracy, authors typically focus on positive conditional dependence (i.e., relatively slow responses are more often correct) and negative conditional dependence (i.e., relatively fast responses are more often correct), which implies monotone conditional dependence. Moreover, most existing models specify the relationship to be linear. However, this assumption of monotone and linear conditional dependence does not necessarily hold in practice, and assuming linearity might distort the conclusions about the relationship between time and accuracy. In this paper we develop methods for exploring nonlinear conditional dependence between response time and accuracy. Three different approaches are proposed: (1) a joint model for quadratic conditional dependence is developed as an extension of the response moderation models for time and accuracy (Bolsinova et al., 2017b); (2) a joint model for multiple-category conditional dependence is developed as an extension of the fast-slow model of Partchev and De Boeck (2012); (3) an indicator-level nonparametric moderation method (Bolsinova and Molenaar, in press) is used with residual log-response time as a predictor for the item intercept and item slope. Furthermore, we propose using nonparametric moderation to evaluate the viability of the assumption of linearity of conditional dependence by performing posterior predictive checks for the linear conditional dependence model. The developed methods are illustrated using data from an educational test in which, for the majority of the items, conditional dependence is shown to be nonlinear.
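
    One plausible form of the quadratic conditional-dependence component (a sketch in the spirit of the first approach, not necessarily the exact specification used in the paper) is an item response function moderated by the residual log-response time z_{pi}:

        \[ P(X_{pi} = 1 \mid \theta_p, z_{pi}) = \operatorname{logit}^{-1}\!\left( \beta_i + \alpha_i\, \theta_p + \delta_{1i}\, z_{pi} + \delta_{2i}\, z_{pi}^{2} \right) \]

    where \delta_{1i} captures linear (monotone) conditional dependence and a nonzero \delta_{2i} captures its nonlinear part.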