614 research outputs found

    Reproducibility: A Researcher-Centered Definition

    Recent years have brought major shifts in scientific reporting and publishing. The scientific community, publishers, funding agencies, and the public expect research to adhere to principles of openness, reproducibility, replicability, and repeatability. However, studies have shown that scientists often have neither the right tools nor suitable support at their disposal to meet these modern science challenges. In fact, even the concrete expectations connected to these terms may be unclear and subject to field-specific, organizational, and personal interpretations. Based on a narrative literature review of work that defines characteristics of open science, reproducibility, replicability, and repeatability, as well as a review of recent work on researcher-centered requirements, we find that the bottom-up practices and needs of researchers contrast with the top-down expectations encoded in terms related to reproducibility and open science. We identify and define reproducibility as a central term that concerns the ease of access to scientific resources, as well as their completeness, to the degree required for efficiently and effectively interacting with scientific work. We hope that this characterization helps to create a mutual understanding across science stakeholders, in turn paving the way for suitable and stimulating environments, fit to address the challenges of modern science reporting and publishing.

    Demo: Simulation-as-a-Service to Benchmark Opportunistic Networks

    Repeatability, reproducibility, and replicability are essential aspects of experimental and simulation-driven research. The use of benchmarks in such evaluations further supports corroborative performance comparisons. In this work, we present a demonstrator of a simulation service, called "OPS on the bench", which tackles these challenges in performance evaluations of opportunistic networks.

    Assessing the quality of land system models: moving from valibration to evaludation

    Reviews suggest that evaluation of land system models is largely inadequate, with undue reliance on a vague concept of validation. Efforts to improve and standardise evaluation practices have so far had limited effect. In this article we examine the issues surrounding land system model evaluation and consider the relevance of the TRACE framework for environmental model documentation. In doing so, we discuss the application of a comprehensive range of evaluation procedures to existing models, and the value of each specific procedure. We develop a tiered checklist for going beyond what seems to be a common practice of ‘valibration’ (the repeated variation of model parameter values to achieve agreement with data) to achieving ‘evaludation’ (the rigorous, broad-based assessment of model quality and validity). We propose the Land Use Change – TRACE (LUC-TRACE) model evaludation protocol and argue that engagement with a comprehensive protocol of this kind (even if not this particular one) is valuable in ensuring that land system model results are interpreted appropriately. We also suggest that the main benefit of such formalised structures is to assist the process of critical thinking about model utility, and that the variety of legitimate modelling approaches precludes universal tests of whether a model is ‘valid’. Evaludation is therefore a detailed and subjective process requiring the sustained intellectual engagement of model developers and users.

    Considering the Development Workflow to Achieve Reproducibility with Variation

    The ability to reproduce an experiment is fundamental in computer science. Existing approaches focus on repeatability, but this is only the first step towards reproducibility: continuing a scientific work from a previous experiment requires the ability to modify it. This ability is called reproducibility with variation. In this contribution, we show that capturing the environment of execution is necessary but not sufficient; we also need the environment of development. Variation also implies that these environments are subject to evolution, so the whole software development lifecycle needs to be considered. To account for these evolutions, software environments need to be clearly defined, reconstructible with variation, and easy to share. We propose to leverage functional package managers to achieve this goal.
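    As an illustration of the kind of declarative, reconstructible environment definition a functional package manager enables, consider a minimal Nix sketch. This is a generic example, not the setup from the paper; the pinned nixpkgs revision and the package list are assumptions chosen for illustration.

```nix
# shell.nix -- a hypothetical, shareable development environment.
# The nixpkgs source and package set below are illustrative assumptions;
# a real setup would pin an exact commit hash and its sha256.
let
  pkgs = import (fetchTarball {
    # Pinning the package collection to a fixed snapshot is what makes
    # the environment reconstructible later, and by others.
    url = "https://github.com/NixOS/nixpkgs/archive/nixos-23.11.tar.gz";
  }) {};
in
pkgs.mkShell {
  # Declaring the tools needed captures the environment of development,
  # not just of execution: the compiler and build tools are part of it.
  buildInputs = [ pkgs.gcc pkgs.gnumake pkgs.python3 ];
}
```

    Because the definition is a plain file, it can be shared alongside the experiment, and varying it (for instance, swapping the compiler version) yields a new environment that is just as reconstructible as the original.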

    Practicing a Science of Security: A Philosophy of Science Perspective

    Our goal is to refocus the question about cybersecurity research from 'is this process scientific?' to 'why is this scientific process producing unsatisfactory results?'. We focus on five common complaints that claim cybersecurity is not or cannot be scientific. Many of these complaints presume views associated with the philosophical school known as Logical Empiricism that more recent scholarship has largely modified or rejected. Modern philosophy of science, supported by mathematical modeling methods, provides constructive resources to mitigate all purported challenges to a science of security. Therefore, we argue the community currently practices a science of cybersecurity. A philosophy of science perspective suggests the following form of practice: structured observation to seek intelligible explanations of phenomena, evaluating explanations in many ways, with specialized fields (including engineering and forensics) constraining explanations within their own expertise, inter-translating where necessary. A natural question to pursue in future work is how collecting, evaluating, and analyzing evidence for such explanations differs in security compared to other sciences.
