15 research outputs found

    Lightning Talk:"I solemnly pledge" A Manifesto for Personal Responsibility in the Engineering of Academic Software

    Software is fundamental to academic research work, both as part of the method and as the result of research. In June 2016, 25 people gathered at Schloss Dagstuhl for a week-long Perspectives Workshop and began to develop a manifesto that places emphasis on the scholarly value of academic software and on personal responsibility. Twenty pledges cover the recognition of academic software, the academic software process, and the intellectual content of academic software. This is still a work in progress. Through this lightning talk, we aim to get feedback and hone the pledges further, as well as to inspire the WSSSPE audience to think about actions they can take themselves rather than actions they want others to take. We aim to publish a more fully developed Dagstuhl Manifesto by December 2016.

    Inference for stochastic chemical kinetics using moment equations and system size expansion

    Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative, the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task, as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider the moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization and uncertainty analysis methods for MA and SSE. The efficiency and reliability of the methods are assessed using simulation examples as well as an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RREs. Furthermore, the simulation examples revealed that the resulting estimates are more reliable in an intermediate volume regime, in which the estimation error is reduced; we propose methods to determine the boundaries of this regime. These results illustrate that inference using MA and SSE is feasible and possesses high sensitivity.
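    To make the approach concrete, here is a minimal sketch (not the authors' code) of moment-based parameter inference for a linear birth-death process, whose moment equations close exactly; for nonlinear kinetics an MA or SSE truncation would supply the analogous ODEs, and the paper's gradient-based methods would replace the finite-difference gradients scipy falls back on below.

```python
# Hypothetical illustration: fit the rates of a birth-death process
#   0 -> X (rate k),   X -> 0 (rate g per molecule)
# by matching the mean and variance predicted by its moment equations
# to noisy population-average data. Not code from the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def moment_odes(t, y, k, g):
    m, v = y                       # mean and variance of the copy number
    dm = k - g * m                 # d<X>/dt
    dv = k + g * m - 2 * g * v     # dVar(X)/dt (exact for this linear system)
    return [dm, dv]

def predict(theta, t_obs):
    k, g = np.exp(theta)           # optimize in log-space to keep rates positive
    sol = solve_ivp(moment_odes, (0.0, t_obs[-1]), [0.0, 0.0],
                    args=(k, g), t_eval=t_obs, rtol=1e-8)
    return sol.y                   # rows: mean, variance at observation times

def loss(theta, t_obs, mean_obs, var_obs):
    m, v = predict(theta, t_obs)
    return np.sum((m - mean_obs) ** 2) + np.sum((v - var_obs) ** 2)

# Synthetic "population-average" data from the true parameters k = 10, g = 1.
t_obs = np.linspace(0.5, 5.0, 10)
m_true, v_true = predict(np.log([10.0, 1.0]), t_obs)
rng = np.random.default_rng(0)
mean_obs = m_true + rng.normal(0.0, 0.1, m_true.shape)
var_obs = v_true + rng.normal(0.0, 0.1, v_true.shape)

fit = minimize(loss, x0=np.log([1.0, 0.5]),
               args=(t_obs, mean_obs, var_obs), method="L-BFGS-B")
print("estimated (k, g):", np.exp(fit.x))   # should recover roughly (10, 1)
```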

    The Transcendental Deduction of Integrated Information Theory: Connecting the Axioms, Postulates, and Identity Through Categories

    This paper deals with a foundational aspect of Integrated Information Theory (IIT) of consciousness: the nature of the relation between the axioms of phenomenology and the postulates of cause-effect power. There has been a lack of clarity in the literature regarding this crucial issue, over which IIT has received much criticism of its axiomatic method and basic tenets. The present contribution elucidates the problem by means of a categorial analysis of the theory’s foundations. Its main results are that: (i) IIT has a set of nine fundamental concepts of reason, called categories, which constitute its categorial lexicon and through which it formulates a system of principles incorporating the axioms, the postulates, and the central identity; and (ii) the connection between the axioms and postulates is grounded in their common root in this categorial lexicon, the categories of which find their justification by means of a phenomenological and transcendental deduction. Some further results are the unique origin of axioms and postulates in the categories; the distinction between conceptual and formalized postulates; a clarification of the uniqueness problem of categorial lexica in general; and an IIT account of objectivity that explicates how the physical is (re)defined by means of categories. All of this is put to use against various criticisms targeting IIT’s theoretical core. If successful, the proposed interpretation illuminates a central issue in the contemporary study of consciousness and contributes to an environment of mutual understanding between defenders and critics of the theory.

    Facing up to the Hard Problem as an Integrated Information Theorist

    In this paper we provide a philosophical analysis of the Hard Problem of consciousness and the implications of conceivability scenarios for current neuroscientific research. In particular, we focus on one of the most prominent neuroscientific theories of consciousness, Integrated Information Theory (IIT). After a brief introduction to IIT, we present Chalmers’ original formulation and propose our own Layered View of the Hard Problem, showing how two separate issues can be distinguished. More specifically, we argue that it is possible to disentangle a Core Problem of Consciousness from a Layered Hard Problem, the latter being essentially connected to Chalmers’ conceivability argument. We then assess the relation between the Hard Problem and IIT, showing how the theory resists conceivability scenarios and how it is equipped to face up to the Hard Problem in its broadest sense.

    Consciousness and Complexity: Neurobiological Naturalism and Integrated Information Theory

    In this paper we take a meta-theoretical stance to compare two frameworks that endeavor to explain phenomenal experience. In particular, we compare Feinberg & Mallatt’s Neurobiological Naturalism (NN) and Tononi and colleagues’ Integrated Information Theory (IIT), given that the former have pointed out some similarities between the two theories (Feinberg & Mallatt 2016c-d). To probe how similar they are, we first give a general introduction to both frameworks. Next, we provide a ground plan for carrying out our analysis. We move on to articulate a philosophical profile of NN and IIT, addressing their ontological commitments and epistemological foundations. Finally, we compare the two point-by-point, also discussing how they stand on the issue of artificial consciousness. We find the two theories to be constitutionally different. IIT treats consciousness as a fundamental feature of the world (its ontology) and investigates its structure from the mathematical standpoint of integrated information (its epistemology). NN, by contrast, treats consciousness as an emergent feature confined to living organisms with complex brains (its ontology) and investigates it with the tools of neurobiology (its epistemology).
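    As a concrete, if toy, illustration of IIT’s mathematical side, the sketch below computes a system-level integrated-information value (Φ) for the classic three-node example network (an OR, an AND, and an XOR gate, each reading the other two nodes) using the open-source pyphi package; the network and the choice of state are standard small examples from the IIT literature, assumed here for illustration and not drawn from this paper.

```python
# Illustrative only: big Phi for the standard three-node IIT example
# (A = OR, B = AND, C = XOR), computed with the open-source pyphi package.
import numpy as np
import pyphi

def next_state(a, b, c):
    """Deterministic update: each gate reads the other two nodes."""
    return (int(b or c), int(a and c), int(a != b))

# State-by-node transition probability matrix in pyphi's little-endian
# convention: node A varies fastest across the 8 rows.
tpm = np.array([next_state(a, b, c)
                for c in (0, 1) for b in (0, 1) for a in (0, 1)])

# Connectivity matrix: every node reads the other two, no self-loops.
cm = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

network = pyphi.Network(tpm, cm=cm, node_labels=("A", "B", "C"))
state = (1, 0, 0)                            # A on, B and C off
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))          # integrated information of the system
```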

    Wrong, but useful: negotiating uncertainty in infectious disease modelling

    For dynamical models of infectious disease to inform containment policy, the models must be able to predict; however, it is well recognised that such prediction will never be perfect. Nevertheless, the consensus is that although models are uncertain, some may yet inform effective action. This assumes that the quality of a model can be ascertained in order to sufficiently evaluate model uncertainties, and to decide whether or not, in what ways, or under what conditions the model should be ‘used’. We examined the uncertainty in modelling, utilising a range of data: interviews with scientists, policy-makers and advisers, and analysis of policy documents, scientific publications and reports of major inquiries into key livestock epidemics. We show that the discourse of uncertainty in infectious disease models is multi-layered, flexible, contingent and embedded in context, and that it plays a critical role in negotiating model credibility. We argue that the usability and stability of a model are an outcome of the negotiation that occurs within the networks and discourses surrounding it. This negotiation employs a range of discursive devices that render uncertainty in infectious disease modelling a plastic quality amenable to ‘interpretive flexibility’. The utility of models in the face of uncertainty is a function of this flexibility, of the negotiation it allows, and of the contexts in which model outputs are framed and interpreted in the decision-making process. We contend that, rather than being based predominantly on beliefs about quality, the usefulness and authority of a model may at times be based primarily on its functional status within the broad social and political environment in which it acts.