21 research outputs found

    Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail

    The replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies of the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using the original code and data, and model reproducibility, or independent researchers' ability to recreate a model without the original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors failing to provide crucial information in scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only the relevant bits of code.
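    The distinction at the heart of this abstract is easy to make concrete. The sketch below is not from the paper; it is a minimal toy model (all names and numbers invented) contrasting the two notions: replicability is checked by rerunning the authors' own code and expecting bit-identical output, while reproducibility is checked by rebuilding the model from its textual description and comparing summary statistics.

```python
import numpy as np

def original_model(seed=42, n=10_000):
    """Stand-in for the authors' released code: a toy leaky
    integrator x_{t+1} = 0.9 * x_t + noise_t, returning x_n."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(n):
        x = 0.9 * x + rng.normal(0.0, 0.1)
    return x

def reimplementation(seed=42, n=10_000):
    """Independent rewrite from the paper's description alone:
    the same recurrence, unrolled into a closed-form weighted sum."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 0.1, size=n)
    weights = 0.9 ** np.arange(n - 1, -1, -1)
    return float(np.sum(weights * noise))

# Replicability: the original code with its original seed returns
# the exact same number on every run.
assert original_model(seed=42) == original_model(seed=42)

# Reproducibility: the rewrite is judged by whether its statistics
# match the reported ones, not by bit-identical output.
orig = [original_model(seed=s) for s in range(50)]
rebuilt = [reimplementation(seed=s) for s in range(50)]
print(np.std(orig), np.std(rebuilt))  # should agree approximately
```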

    Mapping and Describing Geospatial Data to Generalize Complex Models: The Case of LittoSIM-GEN

    For some scientific questions, empirical data are essential to develop reliable simulation models. These data usually come from different sources, in diverse and heterogeneous formats. The design of complex data-driven models is often shaped by the structure of the data available in research projects. Hence, applying such models to other case studies requires either obtaining similar data or transforming new data to fit the model's inputs. This is the case for agent-based models (ABMs) that use advanced data structures such as Geographic Information System (GIS) data. We faced this problem in the LittoSIM-GEN project when generalizing our participatory flooding model (LittoSIM) to new territories. From this experience, we provide a mapping approach to structure, describe, and automate the integration of geospatial data into ABMs.
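    As a rough illustration of what such a mapping approach can look like (the abstract does not give the paper's actual mapping format; every file, field, and function name below is hypothetical, and the sketch uses Python with GeoPandas even though LittoSIM-GEN itself is built on an ABM platform rather than in Python), a declarative mapping can translate one territory's GIS layers into the fixed schema the model expects:

```python
import geopandas as gpd  # standard Python GIS toolkit, used here only for illustration

# Hypothetical mapping: how one territory's layer translates into the
# model's expected inputs (path, field names, and units all invented).
DIKE_MAPPING = {
    "source_file": "territory_B/coastal_defenses.shp",
    "columns": {"HAUTEUR": "height_m", "ETAT": "state"},  # local name -> model schema
    "unit_scale": {"height_m": 0.01},  # e.g. the source stores centimetres
}

def load_model_layer(mapping: dict) -> gpd.GeoDataFrame:
    """Read a source layer and rewrite it into the model's schema,
    so the ABM never sees territory-specific names or units."""
    gdf = gpd.read_file(mapping["source_file"])
    gdf = gdf.rename(columns=mapping["columns"])
    for col, factor in mapping.get("unit_scale", {}).items():
        gdf[col] = gdf[col] * factor
    keep = list(mapping["columns"].values()) + ["geometry"]
    return gdf[keep]
```

    On this scheme, porting the model to a new territory amounts to writing a new mapping description rather than modifying the model's code.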

    Institutional Trust in Medicine in the Age of Artificial Intelligence

    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and it interacts with other psychological states in a variety of ways. The way the functional role of trust changes across contexts and objects is further complicated when communities and individuals mediate it through technologies, and even more so when that mediation involves artificial intelligence (AI) and machine learning (ML). In this chapter I look at the ways in which trust in institutions, and specifically in the medical profession, is affected by the use of AI and ML. There are two key elements of this analysis. The first is a disanalogy between institutional trust in medicine and institutional trust in science (Irzik and Kurtulmus 2019, 2021; Kitcher 2001). I note that as AI and ML become a more prominent part of medicine, trust in a medical institution becomes more like trust in a scientific institution. This is problematic for institutional trust in medicine and for the practice of medicine, since institutional trust in science has been undermined by, among other things, the spread of misinformation online and the replication crisis (Romero 2019). The second is a strong analogy between the psychological state of a person who trusts a scientific report or testimony and that of a patient who trusts the individual recommendations of a medical professional in a clinical setting. In both cases, institutional trust makes it less likely that a mistake or malfeasance will result in reactive attitudes, such as blame or anger, directed at other individual members of that institution; however, it also leaves people vulnerable enough to blame the institution itself. Over time, this can erode trust in the institution, and it naturally leads to policy recommendations that aim to preserve institutional trust. I survey two ways in which that can be done for institutional trust in medicine in the age of AI and ML.

    Double Trouble? The Communication Dimension of the Reproducibility Crisis in Experimental Psychology and Neuroscience

    Most discussions of the reproducibility crisis focus on its epistemic aspect: the fact that the scientific community fails to follow certain norms of scientific investigation, which leads to high rates of irreproducibility via a high rate of false-positive findings. The purpose of this paper is to argue that there is a heretofore underappreciated and understudied dimension of the reproducibility crisis in experimental psychology and neuroscience that may prove to be at least as important as the epistemic dimension: the communication dimension. The link between communication and reproducibility is immediate: independent investigators cannot recreate an experiment whose design or implementation was inadequately described. I draw on evidence of a replicability and reproducibility crisis in computational science, as well as research into the quality of reporting, to support the claim that a widespread failure to adhere to reporting standards, especially the norm of descriptive completeness, is an important contributing factor in the current reproducibility crisis in experimental psychology and neuroscience.

    Cognitive Artifacts and Their Virtues in Scientific Practice

    This paper proposes a novel way to understand various kinds of scientific representations in terms of cognitive artifacts. It introduces a functional taxonomy of the cognitive artifacts prevalent in scientific practice, which covers the huge diversity of their formats, vehicles, and functions. It is argued that toolboxes, conceptual frameworks, theories, models, and individual hypotheses can all be understood as supporting our cognitive performance in scientific practice. While all of these entities are external representations, their function is best understood through the conceptual lens of wide cognition. The functional approach suggests that the assessment of knowledge representation in science should be based on the functions that cognitive artifacts help us perform. By providing a conceptual link between the functionality of artifacts and their virtues, this approach also recommends an empirical approach to the study of virtues. This implies that the cognitive approach to the study of science can offer some guidance in recent philosophical debates about the nature of scientific theories and models.