Perceiver Effects as Projective Tests: What Your Perceptions of Others Say about You
In three studies, we document various properties of perceiver effects—or how an individual generally tends to describe other people in a population. First, we document that perceiver effects have consistent relationships with dispositional characteristics of the perceiver, ranging from self-reported personality traits and academic performance to well-being and measures of personality disorders, to how liked the person is by peers. Second, we document that the covariation in perceiver effects among trait dimensions can be adequately captured by a single factor consisting of how positively others are seen across a wide range of traits (e.g., how nice, interesting, trustworthy, happy, and stable others are generally seen). Third, we estimate the one-year stability of perceiver effects and show that individual differences in the typical perception of others have a level of stability comparable to that of personality traits. The results provide compelling evidence that how individuals generally perceive others is a stable individual difference that reveals much about the perceiver’s own personality.
Replicability, Robustness, and Reproducibility in Psychological Science
Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.
Author Correction: A consensus-based transparency checklist.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Investigating the Relationship Between Self-Perceived Moral Superiority and Moral Behavior Using Economic Games
Most people report that they are superior to the average person on various moral traits. The psychological causes and social consequences of this phenomenon have received considerable empirical attention. The behavioral correlates of self-perceived moral superiority, however, remain unknown. We present the results of two preregistered studies (Study 1, N=827; Study 2, N=825) in which we indirectly assessed participants’ self-perceived moral superiority, and used two incentivized economic games to measure their engagement in moral behavior. Across studies, self-perceived moral superiority was unrelated to trust in others and to trustworthiness, as measured by the Trust Game; and unrelated to fairness, as measured by the Dictator Game. This pattern of findings was robust to a range of analyses, and, in both studies, Bayesian analyses indicated moderate support for the null over the alternative hypotheses. We interpret and discuss these findings, and highlight interesting avenues for future research on this topic.
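For readers unfamiliar with the two games, the minimal Python sketch below illustrates their conventional payoff structure. The endowment and the tripling multiplier are standard textbook defaults chosen for illustration and are not necessarily the parameters used in Studies 1 and 2; in this setup the amount sent indexes trust, the amount returned indexes trustworthiness, and the amount given away indexes fairness.

```python
# Illustrative payoff structure of the Trust Game and the Dictator Game.
# Endowment and multiplier are conventional defaults, not the studies' exact parameters.

ENDOWMENT = 10   # hypothetical monetary units given to the first mover
MULTIPLIER = 3   # the amount sent in the Trust Game is conventionally tripled

def trust_game(sent, returned):
    """Truster sends `sent` (trust); it is multiplied en route; the trustee
    then returns `returned` (trustworthiness). Returns (truster, trustee) payoffs."""
    truster_payoff = ENDOWMENT - sent + returned
    trustee_payoff = MULTIPLIER * sent - returned
    return truster_payoff, trustee_payoff

def dictator_game(given):
    """Dictator unilaterally gives `given` away (fairness). Returns (dictator, recipient) payoffs."""
    return ENDOWMENT - given, given

print(trust_game(sent=5, returned=7))   # (12, 8)
print(dictator_game(given=4))           # (6, 4)
```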
A consensus-based transparency checklist
We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.
[Comment] Redefine statistical significance
The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on “statistically significant” findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: Statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating “statistically significant” findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems.
For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.” While statisticians have known the relative weakness of using P ≈ 0.05 as a threshold for discovery and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change.
We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below).
We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
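The comment's central quantitative worry, that P < 0.05 yields many false positives, can be made concrete with a short worked example. The Python sketch below is an illustration under assumed inputs rather than an analysis from the comment: it computes the false positive risk, the probability that a result declared significant reflects no real effect, at alpha = 0.05 versus alpha = 0.005, assuming prior odds of 1:10 that the tested effect is real and 80% power held fixed for simplicity.

```python
# Illustrative false positive risk at two significance thresholds,
# under assumed prior odds and power (values are illustrative, not from the comment).

def false_positive_risk(alpha, power, prior_true):
    """P(no real effect | result declared significant)."""
    p_sig_false = alpha * (1 - prior_true)   # significant results arising from null effects
    p_sig_true = power * prior_true          # significant results arising from real effects
    return p_sig_false / (p_sig_false + p_sig_true)

prior_true = 1 / 11   # assumed prior odds of 1:10 that the tested effect is real
power = 0.80          # assumed power, held fixed across thresholds for simplicity

for alpha in (0.05, 0.005):
    fpr = false_positive_risk(alpha, power, prior_true)
    print(f"alpha = {alpha}: false positive risk ~ {fpr:.1%}")
```

Under these assumptions roughly 38% of "significant" results at the 0.05 threshold would be false positives, dropping to about 6% at 0.005, which is the kind of gain the proposal is aimed at.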
Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence ("professor") subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence ("soccer hooligans"). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the "professor" category and those primed with the "hooligan" category (0.14%) and no moderation by gender.
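The pooled estimate in a multilab project of this kind is conventionally obtained by an inverse-variance-weighted meta-analysis of the per-lab condition differences. The Python sketch below shows the mechanics of a simple fixed-effect version with hypothetical per-lab values; it is not the project's data, and the report's own analysis may have used a random-effects model instead.

```python
# Fixed-effect meta-analysis sketch with hypothetical per-lab values
# (placeholders only; not the Registered Replication Report data).
import numpy as np

lab_diff = np.array([0.5, -1.2, 0.9, 0.1, -0.4])   # per-lab difference in % correct (hypothetical)
lab_se   = np.array([1.1,  1.3, 0.9, 1.0,  1.2])   # per-lab standard errors (hypothetical)

weights = 1.0 / lab_se**2                           # inverse-variance weights
pooled = np.sum(weights * lab_diff) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled difference = {pooled:.2f}% (SE = {pooled_se:.2f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```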