Replicating Programs in Social Markets
This paper details the multiple factors that must be taken into account in assessing a program's chances of being successfully replicated, and investigates the various dimensions of replicability -- the program, the process, and the market. The dimensions of replicability represent a systematic method for parsing the opportunity that arises when a program model appears ready for broader implementation.
Replicability is not Reproducibility: Nor is it Good Science
At various machine learning conferences, at various times, there have been discussions arising from the inability to replicate the experimental results published in a paper. There seems to be a widespread view that we need to do something to address this problem, as it is essential to the advancement of our field. The most compelling argument would seem to be that reproducibility of experimental results is the hallmark of science. Therefore, given that most of us regard machine learning as a scientific discipline, being able to replicate experiments is paramount. I want to challenge this view by separating the notion of reproducibility, a generally desirable property, from replicability, its poor cousin. I claim there are important differences between the two. Reproducibility requires changes; replicability avoids them. Although reproducibility is desirable, I contend that the impoverished version, replicability, is not one worth having.
Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail
Replicability and reproducibility of computational models have been somewhat understudied by "the replication movement." In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors omitting crucial information from scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only the relevant bits of code.
Is God in the Details? A Reexamination of the Role of Religion in Economic Growth
Barro and McCleary (2003) is a key research contribution in the new literature exploring the macroeconomic effects of religious beliefs. This paper represents an effort to evaluate the strength of their claims. We evaluate their results in terms of replicability and robustness. While we find that their analysis meets the standard of statistical replicability, we do not find that the results are robust to changes in their baseline statistical specification. Taken together, we conclude that their analysis cannot be taken to provide usable evidence on how religion might affect aggregate outcomes.
