    Ten steps toward a better personality science - How quality may be rewarded more in research evaluation

    This target article is part of a theme bundle including open peer commentaries (https://doi.org/10.5964/ps.9227) and a rejoinder by the authors (https://doi.org/10.5964/ps.7961). We point out ten steps that we think will go a long way toward improving personality science. The first five steps focus on fostering consensus regarding (1) research goals, (2) terminology, (3) measurement practices, (4) data handling, and (5) the current state of theory and evidence. The other five steps focus on improving the credibility of empirical research through (6) formal modelling, (7) mandatory pre-registration for confirmatory claims, (8) replication as a routine practice, (9) planning for informative studies (e.g., in terms of statistical power), and (10) making data, analysis scripts, and materials openly available. The current, quantity-based incentive structure in academia clearly stands in the way of implementing many of these practices, resulting in a research literature with sometimes questionable utility and/or integrity. As a solution, we propose a more quality-based reward scheme that explicitly weights published research by its Good Science merits. Scientists need to be increasingly rewarded for doing good work, not just lots of work.
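    Step (9), planning for informative studies in terms of statistical power, can be made concrete with a short calculation. The sketch below is not from the article; it is a minimal power approximation for a hypothetical two-sample design, using the standard normal approximation to a two-sided test of a standardized mean difference d.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def power_two_sample(d, n_per_group, alpha=0.05):
        """Approximate power of a two-sided two-sample z-test for a
        standardized mean difference d with n_per_group per arm,
        using the normal approximation to the test statistic."""
        z = NormalDist()
        z_crit = z.inv_cdf(1 - alpha / 2)      # critical value, two-sided
        ncp = d * sqrt(n_per_group / 2)        # noncentrality of the statistic
        return z.cdf(ncp - z_crit)

    # A medium effect (d = 0.5) with 64 participants per group gives
    # roughly 80% power under this approximation.
    print(round(power_two_sample(0.5, 64), 3))
    ```

    Planning in this direction (choosing n to reach a target power before data collection) is what makes a study "informative" in the sense of step (9).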

    Many analysts, one data set: making transparent how variations in analytic choices affect results

    Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
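    The odds-ratio units in which the teams' effect sizes are reported can be illustrated with a minimal 2x2-table calculation. The counts below are invented for illustration and are not the study's data; the sketch also shows why an odds ratio near the reported median can still be statistically non-significant when its confidence interval crosses 1.

    ```python
    from math import exp, log, sqrt

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio for a 2x2 table
            [[a, b],   # group 1: red cards, no red cards
             [c, d]]   # group 2: red cards, no red cards
        with a Wald 95% confidence interval computed on the log scale."""
        or_ = (a * d) / (b * c)
        se = sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
        lo = exp(log(or_) - z * se)
        hi = exp(log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts (not the study's data): OR = 1.625, but the
    # interval includes 1, so the effect is not significant at alpha = .05.
    print(odds_ratio_ci(40, 960, 25, 975))
    ```

    An odds ratio of 1 means equal odds in both groups; values above 1 indicate higher odds of a red card in the first group.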

    A review of applications of the Bayes factor in psychological research

    The last 25 years have shown a steady increase in attention to the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.

    Validation of direct and indirect measures of preference for sexualized violence

    Individuals differ in the extent to which they are interested in sexualized violence, as reflected in the common, though not ubiquitous, sexual interest in consensual acts of violent sexual roleplay and violent pornographic media in the general population. The present research sought to develop and validate a multi-method assessment battery to measure individual differences in the preference for sexualized violence (PSV). Three indirect measures (Implicit Association Test, Semantic Misattribution Paradigm, Viewing Time) were combined in an online study with 107 men and 103 women. Participants with and without an affiliation with sadomasochistic sexual interest groups were recruited on corresponding internet platforms. Results revealed that all three indirect measures converged in predicting self-reported sexual interest in non-consensual sexuality. Specifically, for men all indirect measures were related to non-consensual sadistic sexual interest, whereas for women an association with masochistic sexual interest was found. Stimulus artefacts versus genuine gender differences are discussed as potential explanations of this dissociation. An outlook on the usability of the assessment battery in applied settings is provided.
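    The convergence reported here, where each indirect measure predicts the self-report criterion, is typically quantified with correlations between the measures and the criterion. The sketch below uses invented scores for hypothetical participants, not the study's data, purely to show the shape of such a convergent-validity check.

    ```python
    from statistics import correlation  # Pearson r (Python 3.10+)

    # Invented illustrative scores (not the study's data): two indirect
    # measures and a self-report criterion for five hypothetical participants.
    iat_scores      = [0.2, 0.5, 0.1, 0.8, 0.4]   # hypothetical IAT D-scores
    viewing_times   = [1.1, 1.9, 0.8, 2.4, 1.5]   # hypothetical viewing times (s)
    self_report     = [2, 4, 1, 5, 3]             # hypothetical criterion ratings

    # Convergent validity: each indirect measure should correlate
    # substantially with the self-report criterion.
    print(round(correlation(iat_scores, self_report), 2))
    print(round(correlation(viewing_times, self_report), 2))
    ```

    In a real validation study these correlations would be computed per subgroup (e.g., men vs. women), which is how the sadistic/masochistic dissociation described above would surface.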