Thinking dynamics and individual differences: Mouse-tracking analysis of the denominator neglect task – CORRIGENDUM
Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation
In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we empirically examined the treatment and evidential impact of nonsignificant results. Our goals were twofold: to explore how psychologists interpret and communicate nonsignificant results, and to assess how much evidence these results provide in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF01 > 10) for the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical tool kit so that they can correctly interpret nonsignificant results and evaluate the evidence for and against the null hypothesis.
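To illustrate the kind of reanalysis described in the abstract, here is a minimal sketch that computes BF01 for a one-sample t-test via the BIC approximation (Wagenmakers, 2007); the function name `bf01_bic` and the example t statistic and sample size are hypothetical, not values from the study.

```python
import math

def bf01_bic(t, n):
    """BIC-approximation Bayes factor favoring the null hypothesis
    (Wagenmakers, 2007) for a one-sample t-test.

    t : observed t statistic
    n : sample size (degrees of freedom = n - 1)
    """
    nu = n - 1
    # BF01 = sqrt(n) * (1 + t^2 / nu)^(-n/2)
    return math.sqrt(n) * (1 + t ** 2 / nu) ** (-n / 2)

# A hypothetical nonsignificant result: t(29) = 1.0 with n = 30.
bf01 = bf01_bic(1.0, 30)
# bf01 is roughly 3: some evidence for the null, but well short of
# the BF01 > 10 threshold the abstract treats as "strong" evidence.
```

The point of the sketch is that a nonsignificant test statistic typically yields only modest evidence for the null, which is why so few of the reanalyzed findings cleared BF01 > 10.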
Data for "No evidence for automatic imitation in strategic context"
Supplementary data for the outcomes of trials and the reaction times of participants in our experiment.
Generating and evaluating hypothesis testing strategies
Optimal decision making depends on people's ability to generate and test hypotheses about their environments. To understand how hypotheses are tested, researchers often focus on uncovering and documenting the testing strategies that people spontaneously use (e.g., a positive or confirmatory testing strategy). Under this approach, it is difficult to account for people's capacity to overcome the limitations of their existing hypothesis testing strategies or to invent completely new ones. Here we sketch a model of how hypothesis testing strategies can themselves be generated and evaluated, and discuss its implications for existing models of hypothesis testing.
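To make the notion of a positive (confirmatory) testing strategy concrete, here is a toy sketch in the spirit of Wason's classic 2-4-6 rule-discovery task; the hidden rule, the learner's hypothesis, and the probe triples are illustrative assumptions, not materials from the paper.

```python
def true_rule(triple):
    """Hidden rule: any strictly ascending triple of numbers."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """Learner's (too narrow) hypothesis: numbers increasing by 2."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Positive testing: probes chosen to FIT the current hypothesis.
positive_probes = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# Every positive probe also satisfies the hidden rule, so the wrong
# hypothesis keeps being "confirmed" and is never revised.
all_confirmed = all(true_rule(p) for p in positive_probes)

# A negative test (a probe the hypothesis says should fail) exposes
# the mismatch: (1, 2, 10) violates the hypothesis yet fits the rule.
negative_probe = (1, 2, 10)
reveals_error = true_rule(negative_probe) and not hypothesis(negative_probe)
```

The sketch shows why a purely confirmatory strategy can leave an over-narrow hypothesis intact indefinitely, which is the limitation the abstract says existing accounts struggle to explain people overcoming.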
Observing effects in various contexts won’t give us general psychological theories
Generalization does not come from repeatedly observing phenomena in numerous settings, but from theories that explain what is general in those phenomena. Expecting future behavior to resemble past observations is especially problematic in psychology, where behavior changes as people's knowledge changes. Psychology should therefore focus on theories of people's capacity to create and apply new representations of their environments.