"Learn to p-hack like the pros!"
The replication crisis has hit several scientific fields. The most systematic investigation to date has been conducted in psychology, where it revealed replication rates below 40% (Open Science Collaboration, 2015). However, the same problem has been well documented in other disciplines, such as preclinical cancer research and economics. It has been argued that one reason for the high prevalence of false-positive findings is the application of "creative" data analysis techniques that allow researchers to present nearly any noise as significant. Researchers who use such techniques, also called "p-hacking" or "questionable research practices", have a higher chance of getting published. What is the consequence? The answer is clear: everybody should be equipped with these powerful tools of research enhancement. This talk covers the most commonly applied p-hacking tools and shows which work best to enhance your research output: "If you torture the data long enough, it will confess!" But be careful: recently developed tools allow the detection of p-hacking. The talk also covers some ideas on how to overcome the replication crisis.
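The core mechanism behind the most common p-hacking tool, optional stopping, can be illustrated with a toy simulation (not taken from the talk itself; all names and parameters below are illustrative). Repeatedly "peeking" at accumulating data and stopping as soon as p < .05 inflates the false-positive rate well beyond the nominal 5%, even though every sample is pure noise:

```python
import random
from statistics import NormalDist, mean

def z_test_p(xs):
    # two-sided z-test against mu = 0, sd assumed known = 1
    z = mean(xs) * len(xs) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

def one_study(rng, peek_every=10, n_max=100, alpha=0.05, optional_stopping=True):
    xs = []
    for i in range(n_max):
        xs.append(rng.gauss(0, 1))  # the null is true: pure noise
        if optional_stopping and (i + 1) % peek_every == 0 and z_test_p(xs) < alpha:
            return True  # stop early and declare "significance"
    return z_test_p(xs) < alpha

rng = random.Random(1)
sims = 2000
hacked = sum(one_study(rng, optional_stopping=True) for _ in range(sims)) / sims
honest = sum(one_study(rng, optional_stopping=False) for _ in range(sims)) / sims
print(f"false-positive rate, fixed n:           {honest:.3f}")
print(f"false-positive rate, optional stopping: {hacked:.3f}")
```

With ten interim looks, the realized false-positive rate roughly triples relative to the fixed-n analysis, which is exactly the kind of inflation that p-hacking detection tools look for.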
Ten steps toward a better personality science - How quality may be rewarded more in research evaluation
This target article is part of a theme bundle including open peer commentaries (https://doi.org/10.5964/ps.9227) and a rejoinder by the authors (https://doi.org/10.5964/ps.7961). We point out ten steps that we think will go a long way in improving personality science. The first five steps focus on fostering consensus regarding (1) research goals, (2) terminology, (3) measurement practices, (4) data handling, and (5) the current state of theory and evidence. The other five steps focus on improving the credibility of empirical research, through (6) formal modelling, (7) mandatory pre-registration for confirmatory claims, (8) replication as a routine practice, (9) planning for informative studies (e.g., in terms of statistical power), and (10) making data, analysis scripts, and materials openly available. The current, quantity-based incentive structure in academia clearly stands in the way of implementing many of these practices, resulting in a research literature with sometimes questionable utility and/or integrity. As a solution, we propose a more quality-based reward scheme that explicitly weights published research by its Good Science merits. Scientists need to be increasingly rewarded for doing good work, not just lots of work.
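Step (9), planning for informative studies, can be made concrete with a small simulation-based power analysis. The sketch below is not from the article; it assumes a two-sample z-test with known unit variance, and all names and parameter values are illustrative. It estimates power for a medium standardized effect (d = 0.5) at several per-group sample sizes:

```python
import random
from statistics import NormalDist, mean

def power_sim(effect, n_per_group, alpha=0.05, sims=2000, seed=0):
    # simulation-based power for a two-sample z-test (sd assumed known = 1)
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(effect, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        # standard error of the mean difference is sqrt(2 / n)
        z = (mean(a) - mean(b)) / (2 / n_per_group) ** 0.5
        hits += abs(z) > crit
    return hits / sims

for n in (20, 64, 200):
    print(f"n = {n:3d} per group, d = 0.5: power ~ {power_sim(0.5, n):.2f}")
```

Around 64 participants per group, simulated power reaches the conventional 80% benchmark for d = 0.5, matching the textbook analytic result for this design.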
A review of applications of the Bayes factor in psychological research
The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
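For readers unfamiliar with the basic idea, a minimal, self-contained sketch (not from the review) shows a point-null Bayes factor computed via Wagenmakers' BIC approximation, BF01 ~ exp((BIC1 - BIC0) / 2), which implies a unit-information prior; all names and the toy data are illustrative:

```python
import math
import random
from statistics import mean

def bic_bayes_factor_01(xs):
    # BIC approximation to the Bayes factor for
    # H0: x ~ N(0, 1)   vs   H1: x ~ N(mu, 1), mu free
    n = len(xs)
    ll0 = sum(-0.5 * math.log(2 * math.pi) - 0.5 * x * x for x in xs)
    m = mean(xs)
    ll1 = sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - m) ** 2 for x in xs)
    bic0 = -2 * ll0                     # zero free parameters
    bic1 = -2 * ll1 + math.log(n)       # one free parameter (mu)
    return math.exp((bic1 - bic0) / 2)  # BF_01: evidence for the point null

rng = random.Random(7)
null_data = [rng.gauss(0, 1) for _ in range(100)]
effect_data = [rng.gauss(0.8, 1) for _ in range(100)]
bf_null = bic_bayes_factor_01(null_data)
bf_effect = bic_bayes_factor_01(effect_data)
print(f"BF01 when the null is true:   {bf_null:.2f}")
print(f"BF01 when a true effect exists: {bf_effect:.2e}")
```

Unlike a p-value, BF01 can quantify evidence *for* the null: it tends to exceed 1 when the data were generated under H0 and collapses toward 0 when a real effect is present.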
Many analysts, one data set: making transparent how variations in analytic choices affect results
Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
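The phenomenon the authors document can be mimicked on pure noise. The toy sketch below is illustrative and much simpler than the teams' actual analyses (mean differences rather than red-card odds ratios): it crosses two defensible analytic choices, outlier trimming and sample exclusion, into twelve specifications and applies each to one simulated dataset with no true group difference, then reports the spread of effect estimates.

```python
import random
from statistics import mean

rng = random.Random(42)
# one simulated dataset, no true group difference
a = [rng.gauss(0, 1) for _ in range(200)]
b = [rng.gauss(0, 1) for _ in range(200)]

def estimate(a, b, trim=None, drop_first=0):
    # one "analytic path": optional outlier trimming plus sample exclusion
    xa = [x for x in a[drop_first:] if trim is None or abs(x) <= trim]
    xb = [x for x in b[drop_first:] if trim is None or abs(x) <= trim]
    return mean(xa) - mean(xb)

specs = [(trim, drop) for trim in (None, 3.0, 2.5, 2.0) for drop in (0, 25, 50)]
estimates = [estimate(a, b, trim, drop) for trim, drop in specs]
print(f"{len(estimates)} defensible specifications")
print(f"effect estimates range from {min(estimates):.3f} to {max(estimates):.3f}")
```

Even this tiny garden of forking paths yields a nontrivial range of estimates from identical underlying data, which is the pattern the 29-team study observed at full scale.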
The predictive power of insomnia symptoms on other aspects of mental health during the COVID-19 pandemic: a longitudinal study
Symptoms of insomnia are an important risk factor for the development of mental disorders, especially during stressful life periods such as the coronavirus disease 2019 (COVID-19) pandemic. However, up to now, most studies have used cross-sectional data, and the prolonged impact of insomnia symptoms during the pandemic on later mental health remains unclear. We therefore investigated insomnia symptoms as a predictor of other aspects of mental health across 6 months, with altogether seven assessments (every 30 days, t0-t6), in a community sample (N = 166-267). Results showed no mean-level increase in insomnia symptoms and/or deterioration of mental health between the baseline assessment (t0) and the 6-month follow-up (t6). As preregistered, higher insomnia symptoms (between persons) across all time points predicted reduced mental health at the 6-month follow-up. Interestingly, contrary to our hypothesis, higher insomnia symptoms at 1 month, within each person (i.e., compared to that person's symptoms at other time points), predicted improved rather than reduced aspects of mental health 1 month later. Hence, we replicated the predictive effect of elevated average insomnia symptoms on impaired later mental health during the COVID-19 pandemic. However, we were surprised that increased insomnia symptoms at 1 month predicted aspects of improved mental health 1 month later. This unexpected effect might be specific to our study population and a consequence of our study design. Overall, increased insomnia symptoms may have served as a signal to engage in, and successfully implement, targeted countermeasures, which led to better short-term mental health in this healthy sample.
- …