
    Constructing quantum games from non-factorizable joint probabilities

    A probabilistic framework is developed that gives a unifying perspective on both classical and quantum games. We suggest exploiting the peculiar probabilities involved in Einstein-Podolsky-Rosen (EPR) experiments to construct quantum games. In our framework a game attains its classical interpretation when the joint probabilities are factorizable, and it becomes a quantum game when these probabilities cannot be factorized. We analyze how non-factorizability changes Nash equilibria in two-player games, considering Prisoner's Dilemma, Stag Hunt, and Chicken. We find that for Prisoner's Dilemma even non-factorizable EPR joint probabilities cannot help players escape the classical outcome of the game. For a particular version of the Chicken game, however, we find that the two non-factorizable sets of joint probabilities that maximally violate the Clauser-Horne-Shimony-Holt (CHSH) sum of correlations do result in new Nash equilibria.
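To make the factorizability criterion concrete, here is a minimal Python sketch (not from the paper; all function names and the singlet-statistics example are illustrative assumptions). It checks whether a 2x2 table of joint outcome probabilities factorizes into its marginals and evaluates the CHSH sum of correlations; any factorizable set satisfies |S| <= 2, while the maximally entangled statistics used below reach |S| = 2*sqrt(2).

```python
import numpy as np

def correlation(p):
    """E = p(++) + p(--) - p(+-) - p(-+) for one 2x2 joint-probability table."""
    return p[0][0] + p[1][1] - p[0][1] - p[1][0]

def is_factorizable(p, tol=1e-9):
    """True if the joint table equals the product of its marginals."""
    pa = [p[0][0] + p[0][1], p[1][0] + p[1][1]]  # marginals for player A
    pb = [p[0][0] + p[1][0], p[0][1] + p[1][1]]  # marginals for player B
    return all(abs(p[a][b] - pa[a] * pb[b]) < tol
               for a in range(2) for b in range(2))

def chsh(tables):
    """CHSH sum S = E00 + E01 + E10 - E11 over the four setting pairs."""
    E = {k: correlation(v) for k, v in tables.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def singlet_table(angle_a, angle_b):
    """Joint probabilities for singlet statistics: E = -cos(angle_a - angle_b)."""
    E = -np.cos(angle_a - angle_b)
    same, diff = (1 + E) / 4, (1 - E) / 4
    return [[same, diff], [diff, same]]

# Standard CHSH measurement angles that maximize the violation.
angles_a = [0, np.pi / 2]
angles_b = [np.pi / 4, -np.pi / 4]
tables = {(i, j): singlet_table(angles_a[i], angles_b[j])
          for i in range(2) for j in range(2)}

print(is_factorizable(tables[(0, 0)]))  # False: probabilities do not factorize
print(chsh(tables))                     # ~ -2.828, i.e. |S| = 2*sqrt(2) > 2
```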

    Robot life: simulation and participation in the study of evolution and social behavior.

    This paper explores the use of robots to simulate evolution, in particular Hamilton's Law. The use of robots raises several questions that this paper seeks to address. The first concerns the role of the robots in biological research: do they simulate something (life, evolution, sociality) or do they participate in something? The second concerns the physicality of the robots: what difference does embodiment make to the role of the robot in these experiments? Thirdly, how do life, embodiment and social behavior relate in contemporary biology, and why is it possible for robots to illuminate this relation? These questions are provoked by a strange similarity that has not been noted before: between the problem of simulation in the philosophy of science and Deleuze's reading of Plato on the relationship of ideas, copies and simulacra.

    On malfunctioning software

    Artefacts do not always do what they are supposed to, for a variety of reasons, including manufacturing problems, poor maintenance, and normal wear-and-tear. Since software is an artefact, it should be subject to malfunctioning in the same sense in which other artefacts can malfunction. Yet whether software is on a par with other artefacts when it comes to malfunctioning crucially depends on the abstraction used in the analysis. We distinguish between "negative" and "positive" notions of malfunction. A negative malfunction, or dysfunction, occurs when an artefact token either does not (sometimes) or cannot (ever) do what it is supposed to. A positive malfunction, or misfunction, occurs when an artefact token may do what it is supposed to but, at least occasionally, also yields some unintended and undesirable effects. We argue that software, understood as a type, may misfunction in some limited sense, but cannot dysfunction. Accordingly, one should distinguish software from other technical artefacts, in view of a design that makes dysfunction impossible for the former while possible for the latter.

    The cognitive integration of scientific instruments: Information, situated cognition, and scientific practice

    Researchers in the biological and biomedical sciences, particularly those working in laboratories, use a variety of artifacts to help them perform their cognitive tasks. This paper analyses the relationship between researchers and cognitive artifacts in terms of integration. It first distinguishes different categories of cognitive artifacts used in biological practice on the basis of their informational properties. This results in a novel classification of scientific instruments, conducive to an analysis of the cognitive interactions between researchers and artifacts. It then uses a multidimensional framework, in line with complementarity-based extended and distributed cognition theory, to conceptualize how deeply instruments in different informational categories are integrated into the cognitive systems of their users. The paper concludes that the degree of integration depends on various factors, including the amount of informational malleability, the intensity and kind of information flow between agent and artifact, the trustworthiness of the information, the procedural and informational transparency, and the degree of individualisation.

    Trends in parameterization, economics and host behaviour in influenza pandemic modelling: a review and reporting protocol.

    BACKGROUND: The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. METHODS: We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. RESULTS: We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated against observed data, and that they are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. CONCLUSIONS: We propose a protocol covering model Characteristics, Construction, Parameterization and Validation aspects (the CCPV protocol) to contribute to the systematisation of model reporting, with an emphasis on the incorporation of economic aspects and host behaviour. Standardised model reporting, as already exists in many other fields of modelling, would increase confidence in model results and transparency in their assessment and comparison.
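As a hedged illustration of the parameterization issue the review raises, the following minimal deterministic SIR sketch (not taken from any reviewed study; R0, the infectious period, and the population size are placeholder values) shows how a handful of borrowed parameters fully determine a pandemic model's output, which is why the provenance and validation of such parameters matter for reporting.

```python
import numpy as np

def simulate_sir(r0=1.5, infectious_days=3.0, n=1_000_000, i0=10, days=200):
    """Daily Euler steps of a closed-population SIR model; returns infecteds."""
    gamma = 1.0 / infectious_days   # recovery rate
    beta = r0 * gamma               # transmission rate implied by R0
    s, i, r = n - i0, i0, 0
    history = []
    for _ in range(days):
        new_inf = beta * s * i / n  # new infections this day
        new_rec = gamma * i         # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append(i)
    return history

# The epidemic curve, including peak timing, is entirely driven by the
# parameter choices above, illustrating why their sources need reporting.
peak_day = int(np.argmax(simulate_sir()))
print(f"Illustrative epidemic peaks around day {peak_day}")
```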