
    Preregistration templates as a new addition to the evidence-based toxicology toolbox

    In this editorial, we define the practice of “preregistration” of research, describe its motivations, explain why we believe preregistration templates should make preregistration more effective as an intervention for improving the quality of scientific research, and introduce Evidence-Based Toxicology’s Preregistration Templates Special Issue.

    The effects of temperature on prosocial and antisocial behaviour: A review and meta-analysis

    Research from the social sciences suggests an association between higher temperatures and increases in antisocial behaviours, including aggressive, violent, or sabotaging behaviours, and represents a heat-facilitates-aggression perspective. More recently, studies have shown that higher temperature experiences may also be linked to increases in prosocial behaviours, such as altruistic, sharing, or cooperative behaviours, representing a warmth-primes-prosociality view. However, across both literatures, there have been inconsistent findings and failures to replicate key theoretical predictions, leaving the status of temperature-behaviour links unclear. Here we review the literature and conduct meta-analyses of available empirical studies that have either prosocial (e.g., monetary reward, gift giving, helping behaviour) or antisocial (e.g., self-rewarding, retaliation, sabotaging behaviour) behavioural outcome variables, with temperature as an independent variable. In an omnibus multivariate analysis (total N = 4577) with 80 effect sizes, we found no reliable effect of temperature on the behavioural outcomes measured. Further, we find little support for either the warmth-primes-prosociality view or the heat-facilitates-aggression view. There were no reliable effects when we considered separately the type of behavioural outcome (prosocial or antisocial), different types of temperature experience (haptic or ambient), or potential interactions with the experimental social context (positive, neutral, or negative). We discuss how these findings affect the status of existing theoretical perspectives and provide specific suggestions for advancing research in this area.
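
    To make the pooling step concrete, below is a minimal sketch of DerSimonian-Laird random-effects pooling of correlation-type effect sizes, the general kind of inverse-variance aggregation such an omnibus analysis builds on (the paper's multivariate model is more elaborate). All study values and numbers in the sketch are invented placeholders, not data from this review.

# Illustrative sketch only: DerSimonian-Laird random-effects pooling of
# correlation effect sizes on the Fisher-z scale. The study values below are
# invented placeholders, not data from the meta-analysis described above.
import math

def dersimonian_laird(effects, variances):
    """Pool effect sizes under a random-effects model; returns (estimate, SE, tau^2)."""
    w = [1.0 / v for v in variances]                                  # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)       # fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))     # heterogeneity statistic Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical Fisher-z effects and sampling variances for five studies
z_effects = [0.10, -0.05, 0.02, 0.08, -0.01]
variances = [0.012, 0.020, 0.015, 0.010, 0.018]
pooled_z, se, tau2 = dersimonian_laird(z_effects, variances)
print(f"pooled r = {math.tanh(pooled_z):.3f}, SE(z) = {se:.3f}, tau^2 = {tau2:.3f}")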

    Replication of “Experiencing physical warmth promotes interpersonal warmth” by Williams & Bargh (2008)

    We report the results of three high-powered, independent replications of Study 2 from Williams and Bargh (2008). Participants evaluated hot or cold instant therapeutic packs before choosing a reward for participation that was framed as either a prosocial (i.e., treat for a friend) or a self-interested reward (i.e., treat for the self). Williams and Bargh predicted that evaluating the hot pack would lead to a higher probability of making a prosocial choice compared to evaluating the cold pack. We did not replicate the effect in any individual laboratory or when considering the results of the three replications together (total N = 861). We conclude that there is no evidence that brief exposure to warm therapeutic packs induces greater prosocial responding than exposure to cold therapeutic packs.
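
    As a rough illustration of the kind of comparison such a replication tests, the sketch below runs a two-proportion z-test on the rate of prosocial choices in a hot-pack versus a cold-pack condition. The counts are hypothetical placeholders, not the replication data, and this is not necessarily the exact analysis the authors report.

# Hypothetical example only: two-proportion z-test comparing the rate of
# prosocial choices between a hot-pack and a cold-pack condition.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic and two-sided p-value for a difference in choice rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)     # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal p-value
    return z, p_value

# Placeholder counts: prosocial choices out of participants per condition
z, p = two_proportion_z(successes_a=62, n_a=140, successes_b=55, n_b=143)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")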

    Replicability, Robustness, and Reproducibility in Psychological Science

    Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.

    Examining the reproducibility of meta-analyses in psychology: A preliminary report

    Meta-analyses are an important tool for evaluating the literature. It is essential that meta-analyses can easily be reproduced, both to allow researchers to evaluate the impact of subjective choices on meta-analytic effect sizes and to update meta-analyses as new data come in or as novel statistical techniques (for example, to correct for publication bias) are developed. Research in medicine has revealed that meta-analyses often cannot be reproduced. In this project, we examined the reproducibility of meta-analyses in psychology by attempting to reproduce twenty published meta-analyses. Reproducing published meta-analyses was surprisingly difficult. 96% of meta-analyses published in 2013-2014 did not adhere to reporting guidelines. A third of these meta-analyses did not contain a table specifying all individual effect sizes. Five of the 20 randomly selected meta-analyses we attempted to reproduce could not be reproduced at all, due to lack of access to raw data, no details about the effect sizes extracted from each study, or a lack of information about how effect sizes were coded. In the remaining meta-analyses, differences between the reported and reproduced effect sizes or sample sizes were common. We discuss a range of possible improvements, such as more clearly indicating which data were used to calculate each effect size, specifying all individual effect sizes, adding detailed information about the equations used and about how multiple effect size estimates from the same study are combined, and sharing raw data retrieved from original authors as well as unpublished research reports. This project clearly illustrates that there is a lot of room for improvement when it comes to the transparency and reproducibility of published meta-analyses.
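
    One reason transparent effect-size coding matters is that a reader should be able to recompute each coded effect from the reported summary statistics. The sketch below shows one common route, a standardized mean difference (Hedges' g) with its sampling variance, using invented numbers; it illustrates the checking process rather than reanalyzing any specific meta-analysis.

# Illustration with invented numbers: recomputing a standardized mean difference
# (Hedges' g) and its sampling variance from the summary statistics that a
# transparent meta-analysis table would report for each study.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Small-sample-corrected standardized mean difference and its variance."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sd_pooled                           # Cohen's d
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))  # large-sample approximation
    j = 1 - 3 / (4 * (n1 + n2) - 9)                           # Hedges' correction factor
    return j * d, (j ** 2) * var_d

# With per-study means, SDs, and ns in the table, a coded effect size can be checked directly.
g, var_g = hedges_g(mean1=5.2, sd1=1.1, n1=40, mean2=4.8, sd2=1.3, n2=38)
print(f"Hedges' g = {g:.3f}, variance = {var_g:.4f}")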

    A Guide for Social Science Journal Editors on Easing into Open Science

    Journal editors have a large amount of power to advance open science in their respective fields by incentivising and mandating open policies and practices at their journals. The Data PASS Journal Editors Discussion Interface (JEDI, an online community for social science journal editors: www.dpjedi.org) has collated several resources on embedding open science in journal editing (www.dpjedi.org/resources). However, it can be overwhelming for an editor new to open science practices to know where to start. For this reason, we created a guide for journal editors on how to get started with open science. The guide outlines steps that editors can take to implement open policies and practices within their journal, and goes through the what, why, how, and worries of each policy and practice. This manuscript introduces and summarizes the guide (full guide: https://osf.io/hstcx).

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
    Additional co-authors: Ivan Ropovik, Balazs Aczel, Lena F. Aeschbach, Luca Andrighetto, Jack D. Arnal, Holly Arrow, Peter Babincak, Bence E. Bakos, Gabriel Baník, Ernest Baskin, Radomir Belopavlovic, Michael H. Bernstein, Michał Białek, Nicholas G. Bloxsom, Bojana Bodroža, Diane B. V. Bonfiglio, Leanne Boucher, Florian Brühlmann, Claudia C. Brumbaugh, Erica Casini, Yiling Chen, Carlo Chiorri, William J. Chopik, Oliver Christ, Antonia M. Ciunci, Heather M. Claypool, Sean Coary, Marija V. Čolić, W. Matthew Collins, Paul G. Curran, Chris R. Day, Anna Dreber, John E. Edlund, Filipe Falcão, Anna Fedor, Lily Feinberg, Ian R. Ferguson, Máire Ford, Michael C. Frank, Emily Fryberger, Alexander Garinther, Katarzyna Gawryluk, Kayla Ashbaugh, Mauro Giacomantonio, Steffen R. Giessner, Jon E. Grahe, Rosanna E. Guadagno, Ewa Hałasa, Rias A. Hilliard, Joachim Hüffmeier, Sean Hughes, Katarzyna Idzikowska, Michael Inzlicht, Alan Jern, William Jiménez-Leal, Magnus Johannesson, Jennifer A. Joy-Gaba, Mathias Kauff, Danielle J. Kellier, Grecia Kessinger, Mallory C. Kidwell, Amanda M. Kimbrough, Josiah P. J. King, Vanessa S. Kolb, Sabina Kołodziej, Marton Kovacs, Karolina Krasuska, Sue Kraus, Lacy E. Krueger, Katarzyna Kuchno, Caio Ambrosio Lage, Eleanor V. Langford, Carmel A. Levitan, Tiago Jessé Souza de Lima, Hause Lin, Samuel Lins, Jia E. Loy, Dylan Manfredi, Łukasz Markiewicz, Madhavi Menon, Brett Mercier, Mitchell Metzger, Venus Meyet, Jeremy K. Miller, Andres Montealegre, Don A. Moore, Rafał Muda, Gideon Nave, Austin Lee Nichols, Sarah A. Novak, Christian Nunnally, Ana Orlic, Anna Palinkas, Angelo Panno, Kimberly P. Parks, Ivana Pedovic, Emilian Pekala, Matthew R. Penner, Sebastiaan Pessers, Boban Petrovic, Thomas Pfeiffer, Damian Pienkosz, Emanuele Preti, Danka Puric, Tiago Ramos, Jonathan Ravid, Timothy S. Razza, Katrin Rentzsch, Juliette Richetin, Sean C. Rife, Anna Dalla Rosa, Kaylis Hase Rudy, Janos Salamon, Blair Saunders, Przemysław Sawicki, Kathleen Schmidt, Kurt Schuepfer, Thomas Schultze, Stefan Schulz-Hardt, Astrid Schütz, Ani N. Shabazian, Rachel L. Shubella, Adam Siegel, Rúben Silva, Barbara Sioma, Lauren Skorb, Luana Elayne Cunha de Souza, Sara Steegen, L. A. R. Stein, R. Weylin Sternglanz, Darko Stojilovic, Daniel Storage, Gavin Brent Sullivan, Barnabas Szaszi, Peter Szecsi, Orsolya Szöke, Attila Szuts, Manuela Thomae, Natasha D. Tidwell, Carly Tocco, Ann-Kathrin Torka, Francis Tuerlinckx, Wolf Vanpaemel, Leigh Ann Vaughn, Michelangelo Vianello, Domenico Viganola, Maria Vlachou, Ryan J. Walker, Sophia C. Weissgerber, Aaron L. Wichman, Bradford J. Wiggins, Daniel Wolf, Michael J. Wood, David Zealley, Iris Žeželj, Mark Zrubka, and Brian A. Nosek.
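
    For intuition about the protocol comparisons reported above, the sketch below shows one standard way to compare two independent correlations: Fisher z-transform each, take the difference, and scale by the approximate standard error implied by the sample sizes. The input values are placeholders chosen to echo the scale of the reported protocol-level estimates, not the project's data or its preregistered analysis.

# Placeholder values only: comparing two independent correlations (e.g., two
# replication protocols) via the Fisher z-transformation.
import math

def fisher_z(r):
    """Fisher z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Approximate z statistic for the difference between two independent correlations."""
    se_diff = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se_diff

# Values chosen to echo the scale of the reported estimates, not the actual data
z_stat = compare_correlations(r1=0.05, n1=1280, r2=0.04, n2=1280)
print(f"z for the protocol difference = {z_stat:.2f}")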
