30 research outputs found

    A decade of theory as reflected in Psychological Science (2009–2019)

    The dominant belief is that science progresses by testing theories and moving towards theoretical consensus. While it is implicitly assumed that psychology operates in this manner, critical discussions claim that the field suffers from a lack of cumulative theory. To examine this paradox, we analysed research published in Psychological Science from 2009–2019 (N = 2,225). We found 359 theories mentioned in-text, most of which were referred to only once. Only 53.66% of all manuscripts included the word theory, and only 15.33% explicitly claimed to test predictions derived from theories. We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building. These data provide insight into the kinds of research psychologists are conducting and raise questions about the role of theory in the psychological sciences.

    We don't know what you did last summer. On the importance of transparent reporting of reaction time data pre-processing

    In behavioral, cognitive, and social sciences, reaction time measures are an important source of information. However, analyses of reaction time data are affected by researchers' analytical choices and by the order in which those choices are applied. A systematic literature review, presented in this paper, revealed that the justification for, and the order in which, analytical choices are applied are rarely reported, making it difficult to reproduce results and to interpret mixed findings. To address this methodological shortcoming, we created a checklist for reporting reaction time pre-processing to make these decisions more explicit, improve transparency, and thus promote best practices within the field. The importance of the pre-processing checklist was further supported by an expert consensus survey and a multiverse analysis. Consequently, we appeal for maximal transparency on all methods applied and offer a checklist to improve the replicability and reproducibility of studies that use reaction time measures.
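
    To make the ordering problem concrete, here is a minimal sketch (hypothetical data and cutoffs, not taken from the paper) showing how two common pre-processing steps, an absolute cutoff and a standard-deviation-based exclusion, retain different trials depending on which is applied first:

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical reaction times (ms): a log-normal bulk plus a few extreme outliers.
        rts = np.concatenate([rng.lognormal(mean=6.2, sigma=0.3, size=500),
                              np.array([3500.0, 4000.0, 5000.0])])

        def sd_exclude(x, k=2.5):
            # Keep observations within k standard deviations of the sample mean.
            return x[np.abs(x - x.mean()) <= k * x.std()]

        def abs_cutoff(x, lo=200.0, hi=2000.0):
            # Keep observations inside an absolute window.
            return x[(x >= lo) & (x <= hi)]

        # Order A: absolute cutoff first, then SD-based exclusion.
        order_a = sd_exclude(abs_cutoff(rts))
        # Order B: SD-based exclusion first (the outliers inflate the mean and SD), then cutoff.
        order_b = abs_cutoff(sd_exclude(rts))

        print(len(order_a), round(order_a.mean(), 1))
        print(len(order_b), round(order_b.mean(), 1))

    The two orders keep different numbers of trials and yield different mean reaction times, which is exactly why reporting both the choices made and the order in which they were applied matters.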

    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions.

    Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.

    Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    The Psychological Science Accelerator's COVID-19 rapid-response dataset

    In response to the COVID-19 pandemic, the Psychological Science Accelerator coordinated three large-scale psychological studies to examine the effects of loss-gain framing, cognitive reappraisals, and autonomy framing manipulations on behavioral intentions and affective measures. The data collected (April to October 2020) included specific measures for each experimental study, a general questionnaire examining health prevention behaviors and COVID-19 experience, geographical and cultural context characterization, and demographic information for each participant. Each participant started the study with the same general questions and then was randomized to complete either one longer experiment or two shorter experiments. Data were provided by 73,223 participants with varying completion rates. Participants completed the survey from 111 geopolitical regions in 44 unique languages/dialects. The anonymized dataset described here is provided in both raw and processed formats to facilitate re-use and further analyses. The dataset offers secondary analytic opportunities to explore coping, framing, and self-determination across a diverse, global sample obtained at the onset of the COVID-19 pandemic, which can be merged with other time-sampled or geographic data.
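
    As a sketch of the kind of secondary merge the dataset is meant to support, participant records could be joined to external region-level data; the file and column names below are hypothetical and would need to be checked against the dataset's actual codebook:

        import pandas as pd

        # Hypothetical file and column names, for illustration only.
        psa = pd.read_csv("psacr_processed.csv")            # one row per participant
        policy = pd.read_csv("policy_index_by_region.csv")  # external region-level data

        # Attach region-level covariates to each participant record.
        merged = psa.merge(policy, on="region_code", how="left")
        print(merged.shape)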

    A global experiment on motivating social distancing during the COVID-19 pandemic

    Finding communication strategies that effectively motivate social distancing continues to be a global public health priority during the COVID-19 pandemic. This cross-country, preregistered experiment (n = 25,718 from 89 countries) tested hypotheses concerning generalizable positive and negative outcomes of social distancing messages that promoted personal agency and reflective choices (i.e., an autonomy-supportive message) or were restrictive and shaming (i.e., a controlling message) compared with no message at all. Results partially supported experimental hypotheses in that the controlling message increased controlled motivation (a poorly internalized form of motivation relying on shame, guilt, and fear of social consequences) relative to no message. On the other hand, the autonomy-supportive message lowered feelings of defiance compared with the controlling message, but the controlling message did not differ from receiving no message at all. Unexpectedly, messages did not influence autonomous motivation (a highly internalized form of motivation relying on one’s core values) or behavioral intentions. Results supported hypothesized associations between people’s existing autonomous and controlled motivations and self-reported behavioral intentions to engage in social distancing. Controlled motivation was associated with more defiance and less long-term behavioral intention to engage in social distancing, whereas autonomous motivation was associated with less defiance and more short- and long-term intentions to social distance. Overall, this work highlights the potential harm of using shaming and pressuring language in public health communication, with implications for current and future global health challenges.

    Are Small Effects the Indispensable Foundation for a Cumulative Psychological Science? A Reply to Götz et al. (2022)

    In the January 2022 issue of Perspectives, Götz et al. argued that small effects are “the indispensable foundation for a cumulative psychological science.” They supported their argument by claiming that (a) psychology, like genetics, consists of complex phenomena explained by additive small effects; (b) psychological-research culture rewards large effects, which means small effects are being ignored; and (c) small effects become meaningful at scale and over time. We rebut these claims with three objections: First, the analogy between genetics and psychology is misleading; second, p values are the main currency for publication in psychology, meaning that any biases in the literature are (currently) caused by pressure to publish statistically significant results and not large effects; and third, claims regarding small effects as important and consequential must be supported by empirical evidence or, at least, a falsifiable line of reasoning. We believe that, if accepted uncritically, the arguments of Götz et al. could be used as a blanket justification for the importance of any and all “small” effects, thereby undermining best practices in effect-size interpretation. We end with guidance on evaluating effect sizes in relative, not absolute, terms.
