    How to Create an Innovation Accelerator

    Too many policy failures are fundamentally failures of knowledge. This has become particularly apparent during the recent financial and economic crisis, which calls into question the validity of mainstream scholarly paradigms. We propose to pursue a multi-disciplinary approach and to establish new institutional settings that remove or reduce obstacles impeding efficient knowledge creation. We provide suggestions on (i) how to modernize and improve the academic publication system, and (ii) how to support scientific coordination, communication, and co-creation in large-scale multi-disciplinary projects. Both constitute important elements of what we envision to be a novel ICT infrastructure called "Innovation Accelerator" or "Knowledge Accelerator".
    Comment: 32 pages, Visioneer White Paper, see http://www.visioneer.ethz.c

    Improving Science That Uses Code

    As code is now an inextricable part of science, it should be supported by competent Software Engineering, analogously to statistical claims being properly supported by competent statistics. If and when code avoids adequate scrutiny, science becomes unreliable and unverifiable because results — text, data, graphs, images, etc. — depend on untrustworthy code. Currently, scientists rarely assure the quality of the code they rely on, and rarely make it accessible for scrutiny. Even when it is available, scientists rarely provide adequate documentation to understand or use it reliably. This paper proposes and justifies ways to improve science that uses code:
    1. Professional Software Engineers can help, particularly in critical fields such as public health, climate change and energy.
    2. 'Software Engineering Boards,' analogous to Ethics or Institutional Review Boards, should be instigated and used.
    3. The Reproducible Analytic Pipeline (RAP) methodology can be generalized to cover code and Software Engineering methodologies, in a generalization this paper introduces called RAP+. RAP+ (or comparable interventions) could be supported or even required in journal, conference and funding body policies.
    The paper's Supplemental Material provides a summary of Software Engineering best practice relevant to scientific research, including further suggestions for RAP+ workflows.
    'Science is what we understand well enough to explain to a computer.' Donald E. Knuth in A=B [1]
    'I have to write to discover what I am doing.' Flannery O'Connor, quoted in Write for your life [2]
    'Criticism is the mother of methodology.' Robert P. Abelson in Statistics as Principled Argument [3]
    'From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue.' Editorial in Nature [4]

    Enhancing rigor in quantitative entrepreneurship research

    Reflecting on common empirical concerns in quantitative entrepreneurship research, recent calls for improved rigor and reproducibility in social science research, and recent methodological developments, we discuss new opportunities for further enhancing rigor in quantitative entrepreneurship research. In addition to highlighting common key concerns of editors and reviewers, we review recent methodological guidelines in the social sciences that offer more in-depth discussions of particular empirical issues and approaches. We conclude by offering a set of best-practice recommendations for further enhancing rigor in quantitative entrepreneurship research.
    Peer reviewed

    Testing and being tested in pandemic times

    The coronavirus pandemic has witnessed a great proliferation of two types of tests. The first type is testing: new medical diagnostic tests as well as epidemiological models that simulate and project the course of the virus. In the second type, actors, organizations, and institutions are being tested in this moment of social and political crisis. This essay analyzes the similarities and differences between these two major types of tests in order to understand their entanglements in the crisis. In the process, we find a great diversity of tests operating in multiple registers, themselves not clearly demarcated, often combining and sometimes conflating, for example, scientific and public discourse. The study opens by identifying three aspects of testing, drawn from the sociology of testing. First, tests are frequently proxies (or projections) that stand for something. Second, a test is a critical moment that stands out, whether because it is a moment deliberately separated out or because it is a puzzling or troublesome "situation" that disrupts the flow of social life. Third, when someone or something is put to the test, of interest is whether it stands up to the challenge. These insights serve as the building blocks for addressing three major issues – representation, selection, and accountability – regarding testing in the time of the coronavirus crisis.