4 research outputs found
Raising the value of research studies in psychological science by increasing the credibility of research reports: The Transparent Psi Project
The low reproducibility rate in the social sciences has made researchers hesitant to accept published findings at face value. Despite initiatives to increase transparency in research reporting, the field still lacks tools to verify the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify that credibility. We demonstrate the application of these methods in a multi-lab replication of Bem's Experiment 1 (2011) on extrasensory perception (ESP), which was co-designed by a consensus panel including both proponents and opponents of Bem's original hypothesis. In the study we applied direct data deposition in combination with born-open data and real-time research reports to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs, and video-documented trial sessions to ascertain as-intended protocol delivery, and external research auditors to monitor research integrity. We found a 49.89% rate of successful guesses, whereas Bem reported a 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study. In the paper we discuss the implementation, feasibility, and perceived usefulness of the credibility-enhancing methodologies used throughout the project.
Raising the value of research studies in psychological science by increasing the credibility of research reports: The Transparent Psi Project - Preprint
Plain-language summary:
This project aimed to demonstrate the use of research methods designed to improve the reliability of scientific findings in psychological science. Using this rigorous methodology, we could not replicate the positive findings of Bem's 2011 Experiment 1. This result neither confirms nor contradicts the existence of ESP in general, and that was not the point of our study. Instead, the results tell us that (1) the original experiment was likely affected by methodological flaws or was a chance finding, and (2) the paradigm used in the original study is probably not useful for detecting ESP effects if they exist. The methodological innovations implemented in this study enable readers to trust and verify our results, which is an important step toward achieving trustworthy science.
Investigating Object Orientation Effects Across 18 Languages
Mental simulation theories of language comprehension propose that people automatically create mental representations of objects mentioned in sentences. Mental representation is often measured with the sentence-picture verification task, in which participants first read a sentence that implies an object's properties (i.e., its shape and orientation). Participants then respond to an image of an object by indicating whether or not it was the object from the sentence. Previous studies have shown matching advantages for shape, but findings concerning object orientation have not been robust across languages. This registered report investigated the match advantage of object orientation across 18 languages in nearly 4,000 participants. The preregistered analysis revealed no compelling evidence for a match advantage across languages. Additionally, the match advantage was not predicted by mental rotation scores. Overall, the results did not support current mental simulation theories.