    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions.
    Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
    Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    Experimenter as automaton; experimenter as human: Exploring the position of the researcher in scientific research

    The crisis of confidence in the social sciences has many corollaries that impact our research practices. One of these is a push towards maximal and mechanical objectivity in quantitative research. This stance is reinforced by major journals and academic institutions that subtly yet certainly link objectivity with integrity and rigor; the converse implication is an association between subjectivity and low quality. Subjectivity is one of qualitative methodology's best assets, however. In qualitative methodology, that subjectivity is often given voice through reflexivity: it is used to better understand our own role within the research process, and it is a means through which researchers may monitor how they influence their research. Given that the actions of researchers have led to the poor reproducibility characterising the crisis of confidence, it is worthwhile to consider whether reflexivity can help improve the validity of research findings in quantitative psychology. In this report, we describe a combined approach: data from a series of interviews help us elucidate the link between reflexive practice and research quality, through the eyes of practicing academics. Through our exploration of the position of the researcher in their research, we shed light on how a researcher's reflections can impact the quality of their research findings, in the context of the current crisis of confidence. The validity of these findings is tempered, however, by limitations of the sample, and we advise caution on the part of our audience in reading our conclusions.

    When and Why to Replicate: As Easy as 1, 2, 3?

    The crisis of confidence in psychology has prompted vigorous and persistent debate in the scientific community concerning the veracity of the findings of psychological experiments. This discussion has led to changes in psychology's approach to research, and several new initiatives have been developed, many with the aim of improving the reliability of our findings. One key advancement is the marked increase in the number of replication studies conducted. We argue that while it is important to conduct replications as part of regular research protocol, it is neither efficient nor useful to replicate results at random. We recommend adopting a methodical approach toward the selection of replication targets to maximize the impact of the outcomes of those replications and minimize the waste of scarce resources. In the current study, we demonstrate how a Bayesian re-analysis of existing research findings, followed by a simple qualitative assessment process, can drive the selection of the best candidate article for replication.
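    As a minimal sketch of the kind of Bayesian re-analysis described above, the following Python snippet recomputes a default Bayes factor from a published t-statistic and sample size, using the JZS (Cauchy-prior) formulation of Rouder et al. (2009). The t-value, sample size, and prior scale in the example are hypothetical placeholders, not values taken from the study.

        import numpy as np
        from scipy import integrate

        def jzs_bf10(t, n, r=0.707):
            """One-sample JZS Bayes factor BF10 from a t-value and sample size n.

            Assumes a Cauchy(0, r) prior on the standardized effect size,
            written as delta | g ~ Normal(0, g) with g ~ InverseGamma(1/2, r^2/2).
            """
            nu = n - 1
            # Marginal likelihood of the data under H0 (effect size = 0).
            like_h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

            # Marginal likelihood under H1: integrate the t-likelihood over g.
            def integrand(g):
                return ((1 + n * g) ** -0.5
                        * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                        * r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g)))

            like_h1, _ = integrate.quad(integrand, 0, np.inf)
            return like_h1 / like_h0

        # Hypothetical published result: t(49) = 2.30 with N = 50.
        print(f"BF10 = {jzs_bf10(2.30, 50):.2f}")  # about 2: weak evidence for H1

    In a selection pipeline of this kind, studies whose re-analysis yields a Bayes factor near 1 are natural replication candidates, because the published data barely discriminate between the competing hypotheses.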

    The Effect of Preregistration on Trust in Empirical Research Findings: Results of a Registered Report

    The crisis of confidence has undermined the trust that researchers place in the findings of their peers. In order to increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and of how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence with which to answer our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of the complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75

    Weekly reports for R.V. Polarstern expedition PS103 (2016-12-16 - 2017-02-03, Cape Town - Punta Arenas), German and English version

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of two preregistered replication attempts of an experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
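    The following small sketch shows how Bayes factors like the two reported above translate into posterior probabilities, assuming equal prior odds for the null and alternative hypotheses; the conversion rule is standard Bayesian updating, not code from the study.

        def posterior_prob_h0(bf01, prior_odds_h0=1.0):
            """Posterior probability of H0, given BF01 = p(data | H0) / p(data | H1)."""
            posterior_odds = bf01 * prior_odds_h0
            return posterior_odds / (1 + posterior_odds)

        for bf01 in (1.38, 10.84):  # the two Bayes factors reported above
            print(f"BF01 = {bf01:5.2f} -> P(H0 | data) = {posterior_prob_h0(bf01):.2f}")
        # BF01 =  1.38 -> P(H0 | data) = 0.58 (indecisive)
        # BF01 = 10.84 -> P(H0 | data) = 0.92 (strong support for the null)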

    The process of replication target selection in psychology: what to consider?

    Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets is at a serious mismatch with available resources. Given limited resources, replication target selection should be well-justified, systematic and transparently communicated. At present the discussion on what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target with regard to their considerations. Third, we incorporated the results into the preliminary list of considerations and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target. The resulting checklist can be used for transparently communicating the rationale for selecting studies for replication.
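    The consensus step of such a modified Delphi round can be sketched as follows; the candidate items, the votes, and the 80% agreement threshold are hypothetical stand-ins, not the authors' actual criteria or data.

        CONSENSUS_THRESHOLD = 0.8  # assumed: 80% expert agreement retains an item

        ratings = {  # one True/False "relevant" vote per expert, per candidate item
            "theoretical importance of the original finding": [True, True, True, True, False],
            "citation impact of the original study":          [True, True, False, False, False],
            "feasibility and cost of the replication":        [True, True, True, True, True],
        }

        for item, votes in ratings.items():
            agreement = sum(votes) / len(votes)
            verdict = "retain" if agreement >= CONSENSUS_THRESHOLD else "revisit next round"
            print(f"{agreement:4.0%}  {verdict:18s}  {item}")

    Items that fail the threshold are fed back to the panel with summarized feedback in the next round, which is what drives the convergence toward consensus described above.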

    Exploring the dimensions of responsible research systems and cultures: A scoping review

    The responsible conduct of research is foundational to the production of valid and trustworthy research. Despite this, our grasp of what dimensions responsible conduct of research (RCR) might contain, and of how it differs across disciplines (i.e. how it is conceptualized and operationalized), is tenuous. Moreover, many initiatives related to developing and maintaining RCR are developed within disciplinary and institutional silos, which naturally limits the benefits that RCR practice can have. To this end, we are working to develop a better understanding of how RCR is conceived and realized, both across disciplines and across institutions in Europe. The first step is to scope existing knowledge on the topic, of which this scoping review is a part. We searched several electronic databases for relevant published and grey literature. An initial sample of 715 articles was identified, with 75 articles included in the final sample for qualitative analysis. We find several dimensions of RCR that are underemphasized in, or excluded from, the well-established World Conferences on Research Integrity (WCRI) Singapore Statement on Research Integrity, and we explore facets of these dimensions that find special relevance in a range of research disciplines.