    "El pez de oro", de Gamaliel Churata, en la tradiciĂłn de la literatura peruana

    This paper offers a reflection on the little-known work of Gamaliel Churata (Arequipa, 1897 – Lima, 1969), especially El Pez de Oro (1957), which for some is "the Bible of indigenismo" and for others "one of the great challenges not yet taken up by Peruvian criticism", although that challenge has begun to be met in recent years by a series of critical studies mentioned throughout these pages. To assess the work, one must trace Churata's evolution from the period of the Boletín Titikaka (1926-30), the organ of the Orkopata movement that he himself directed from 1925, to the publication of El Pez de Oro in 1957. The movement has been the subject of several studies, the most complete of which is Vich's; but from there it is necessary to trace the evolution of a literary project that takes shape in El Pez de Oro and that turns on one of the most important axes of Peruvian literature, the heterogeneity of its literary materials.

    One swallow doesn’t make a summer: reply to Kataria

    In this paper we reply to Mitesh Kataria’s comment, which criticized the simulations of Maniadis, Tufano, and List (2014, Am. Econ. Rev. 104(1), 277-290). We view these simulations as a means of illustrating the fact that we economists are unaware of the value of key variables that determine the credibility of our own empirical findings. Such variables include priors (i.e., the pre-study probability that a tested phenomenon is true) and the statistical power of the empirical design. Economists should not hesitate to use Bayesian tools and meta-analysis in order to quantify what we know about these variables.
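
    How priors and power jointly determine the credibility of a significant result can be made concrete with the standard post-study probability calculation used in this literature. The Python sketch below is illustrative only; the function name and the numerical inputs are assumptions, not figures from the paper.

        # Post-study probability that a statistically significant finding is true,
        # given the pre-study probability (prior), statistical power, and alpha.
        # Illustrative sketch; inputs are assumed values, not the paper's.

        def post_study_probability(prior: float, power: float, alpha: float = 0.05) -> float:
            """P(effect is true | significant result), by Bayes' rule."""
            true_positives = power * prior          # true effects that get detected
            false_positives = alpha * (1 - prior)   # null effects declared significant
            return true_positives / (true_positives + false_positives)

        # A surprising (low-prior) effect tested with modest power is weak evidence:
        print(post_study_probability(prior=0.10, power=0.35))  # ~0.44
        # The same significant result from a well-powered test of a plausible hypothesis:
        print(post_study_probability(prior=0.50, power=0.80))  # ~0.94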

    On the robustness of anchoring effects in WTP and WTA experiments

    We reexamine the effects of the anchoring manipulation of Ariely, Loewenstein, and Prelec (2003) on the evaluation of common market goods and find very weak anchoring effects. We perform the same manipulation on the evaluation of binary lotteries and find no anchoring effects at all. This suggests limits on the robustness of anchoring effects.

    The research reproducibility crisis and economics of science

    We provide a brief summary of two areas where cross-fertilization between economics and other disciplines is likely to have far-reaching benefits. The increasing concern about research reproducibility means that economic design has much to contribute to the discussion of possible reforms in science, while the empirical discipline of meta-research can inform practices for assessing the validity of the economics literature. A mutual investment in investigating possible synergies may be costly but could benefit the scientific endeavour as a whole.

    To replicate or not to replicate?: exploring reproducibility in economics through the lens of a model and a pilot study

    The sciences are in an era of an alleged ‘credibility crisis’. In this study, we discuss the reproducibility of empirical results, focusing on economics research. By combining theory and empirical evidence, we discuss the import of replication studies and whether they improve our confidence in novel findings. The theory sheds light on the importance of replications, even when replications are themselves subject to bias. We then present a pilot meta-study of replication in experimental economics, a subfield that serves as a positive benchmark for investigating the credibility of economics. Our meta-study highlights certain difficulties in applying meta-research to systematise the economics literature.
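
    The claim that replications improve confidence in a finding can be illustrated with a simple Bayesian update in which each significant, independent replication multiplies the odds that the effect is real by its likelihood ratio. The Python sketch below is a minimal illustration under assumed power and significance values; it is not the model from the study.

        # Posterior probability that an effect is true after n successful
        # (significant) replications. Minimal sketch; power and alpha are
        # assumed values, and replications are treated as independent.

        def prob_after_replications(prior: float, power: float, alpha: float,
                                    n_successful: int) -> float:
            """Posterior P(effect is true) after n_successful significant replications."""
            odds = prior / (1 - prior)
            likelihood_ratio = power / alpha   # P(significant | true) / P(significant | false)
            odds *= likelihood_ratio ** n_successful
            return odds / (1 + odds)

        # Starting from a sceptical prior of 0.10, with 50% power and alpha = 0.05,
        # two independent significant replications push the posterior above 0.9:
        for k in range(3):
            print(k, round(prob_after_replications(0.10, 0.50, 0.05, k), 3))
        # 0 0.1   1 0.526   2 0.917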

    When is evidence actionable? Assessing whether a program is ready to scale

    No full text
    The effects of small-scale interventions often prove much lower than expected when they are implemented at a large scale. We illustrate the problem and its potential causes using a number of examples from the early childhood intervention literature. We delve deeper by introducing a basic logical framework that allows us to discuss the key factors in assessing whether a program is ready to scale, particularly with regard to uncertainty in the potential outcomes of small-scale interventions. We conclude by putting forward a set of concrete recommendations on how to bridge the science of using science and real-life policy.