22 research outputs found

    Creative destruction in science

    Get PDF
    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions.

    Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true, but do not replace it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.

    Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    Get PDF
    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
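
    The abstract notes that results were pooled with a meta-analysis across the teams' study versions. As a rough illustration of that pooling step only, the sketch below applies a standard DerSimonian-Laird random-effects model to hypothetical per-team effect sizes; the plain-Python implementation and the example numbers are assumptions for illustration, not the study's code or data.

```python
import math

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooling of per-design effect sizes.

    A generic sketch of the kind of meta-analysis the abstract mentions;
    inputs are effect estimates and their standard errors.
    """
    w = [1 / se ** 2 for se in ses]                              # inverse-variance weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))  # heterogeneity statistic Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                # between-design variance
    w_star = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star)), tau2

# Hypothetical Cohen's d estimates from five study versions of one hypothesis.
d_values = [-0.30, -0.10, 0.05, 0.15, 0.25]
se_values = [0.08, 0.07, 0.09, 0.08, 0.07]
print(random_effects_meta(d_values, se_values))
```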

    Learning in settings with partial feedback and the wavy recency effect of rare events

    No full text
    Analyses of human learning reveal a discrepancy between the long- and the short-term effects of outcomes on subsequent choice. The long-term effect is simple: favorable outcomes increase the choice rate of an alternative, whereas unfavorable outcomes decrease it. The short-term effects are more complex. Favorable outcomes can decrease the choice rate of the best option. This pattern violates the positive recency assumption that underlies the popular models of learning. The current research tries to clarify the implications of these results. Analysis of a wide set of learning experiments shows that rare positive outcomes have a wavy recency effect. The probability of risky choice after a successful outcome from risk-taking at trial t is initially (at t + 1) relatively high, falls to a minimum at t + 2, then increases for about 15 trials, and then decreases again. Rare negative outcomes trigger a wavy reaction when the feedback is complete, but not under partial feedback. The difference between the effects of rare positive and rare negative outcomes, and between full and partial feedback settings, can be described as reflecting an interaction between an effort to discover patterns and two other features of human learning: surprise-triggers-change and the hot stove effect. A similarity-based descriptive model is shown to capture all of these interacting phenomena well. In addition, the model outperforms the leading models in capturing the outcomes of the data used in the 2010 Technion Prediction Tournament.
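
    The hot stove effect invoked above can be reproduced with a toy simulation: a learner that judges each option by its few most recent observed outcomes avoids the risky option after a rare loss, and under partial feedback that avoidance is self-reinforcing because the unchosen option yields no new observations. The learner below is a deliberately crude stand-in, not the paper's similarity-based model; the payoff distribution, window size, and choice rule are assumptions chosen only to illustrate the partial-versus-full-feedback contrast.

```python
import random

def recent_mean(values, k):
    """Mean of the k most recent observations (fewer if not yet available)."""
    window = values[-k:]
    return sum(window) / len(window)

def risky_choice_rate(trials=200, feedback="partial", window=3, agents=2000):
    """Final-trial choice rate of a risky option for a naive recency-based learner.

    Illustrative only: each agent judges an option by the mean of its last few
    observed outcomes and picks the option that currently looks better.
    """
    risky_at_end = 0
    for _ in range(agents):
        obs = {"safe": [0.0], "risky": [0.0]}   # one prior observation per option
        choice = "safe"
        for _ in range(trials):
            est = {o: recent_mean(v, window) for o, v in obs.items()}
            choice = "risky" if est["risky"] >= est["safe"] else "safe"
            outcomes = {"safe": 0.0,
                        "risky": 1.0 if random.random() < 0.9 else -10.0}
            if feedback == "full":              # complete feedback: both outcomes observed
                for o in obs:
                    obs[o].append(outcomes[o])
            else:                               # partial feedback: only the chosen option
                obs[choice].append(outcomes[choice])
        risky_at_end += choice == "risky"
    return risky_at_end / agents

# A rare large loss freezes the risky option's recent record under partial
# feedback, so avoidance persists (the hot stove effect); under complete
# feedback the record keeps updating and risk-taking recovers.
print(risky_choice_rate(feedback="partial"), risky_choice_rate(feedback="full"))
```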

    To predict human choice, consider the context

    No full text
    Choice prediction competitions suggest that popular models of choice, including prospect theory, have low predictive accuracy. Peterson et al. show that the key problem lies in assuming that each alternative is evaluated in isolation, independently of the context. This observation demonstrates how a focus on predictions can promote understanding of cognitive processes.

    From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience

    Get PDF
    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in one-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and four additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to the probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.
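
    To make the last sentence concrete, the sketch below blends an option's expected value with the mean of a few simulated draws from it and chooses the option whose blended estimate is higher. This is a minimal illustration of the reliance-on-small-samples idea only, not the published BEAST model: the 50/50 blend, the sample size, and the example gambles are assumptions chosen for illustration.

```python
import random

def expected_value(gamble):
    """Expected value of a gamble given as [(payoff, probability), ...]."""
    return sum(payoff * prob for payoff, prob in gamble)

def small_sample_mean(gamble, k):
    """Mean of k simulated draws from the gamble (a crude 'sampling tool')."""
    payoffs, probs = zip(*gamble)
    return sum(random.choices(payoffs, weights=probs, k=k)) / k

def choice_rate(gamble_a, gamble_b, k=5, n_agents=10_000):
    """Proportion of simulated agents choosing gamble A over gamble B.

    Each agent's estimate is an even blend of the true expected value and a
    small-sample mean -- an illustrative stand-in for combining sensitivity
    to expected return with reliance on small samples.
    """
    a_count = 0
    for _ in range(n_agents):
        est_a = 0.5 * expected_value(gamble_a) + 0.5 * small_sample_mean(gamble_a, k)
        est_b = 0.5 * expected_value(gamble_b) + 0.5 * small_sample_mean(gamble_b, k)
        a_count += est_a > est_b
    return a_count / n_agents

# Example: a rare large gain versus a sure payment.
risky = [(100, 0.05), (0, 0.95)]   # expected value 5
safe = [(4, 1.0)]                  # expected value 4
# Small samples usually miss the rare gain, so the higher-EV risky option is
# chosen on only a minority of simulated trials.
print(choice_rate(risky, safe))
```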