
    A Tale of Two Animats: What does it take to have goals?

    What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
    Comment: This article is a contribution to the FQXi 2016-2017 essay contest "Wandering Towards a Goal".
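
    The feedforward/integrated distinction above is structural: it can be read off a brain's connectivity. Below is a minimal illustrative sketch (not from the essay; the two toy networks and node layout are assumptions chosen for illustration). A brain whose connectivity graph contains a directed cycle has elements that causally constrain each other, while a cycle-free graph is purely feedforward.

```python
import numpy as np

# Two toy 4-node "brains" (illustrative only).
# cm[i, j] = 1 means element i feeds element j.
# Nodes 0-1 stand in for sensors, 2 for a hidden element, 3 for a motor.
feedforward = np.array([
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
integrated = np.array([
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],   # the motor feeds back into the hidden element
])

def has_feedback(cm):
    """True iff the connectivity graph contains a directed cycle,
    i.e. some power of the adjacency matrix has a nonzero diagonal."""
    n = cm.shape[0]
    power = np.eye(n, dtype=int)
    for _ in range(n):
        power = power @ cm
        if np.trace(power) > 0:
            return True
    return False

print(has_feedback(feedforward))  # False: purely feedforward
print(has_feedback(integrated))   # True: elements causally constrain each other
```

    In the essay's terms, only the recurrent architecture is a candidate for forming a causally autonomous entity; the feedforward one is fully driven by its inputs, however well it performs.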

    When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents

    An agent's actions can be influenced by external factors, through the inputs it receives from the environment, as well as by internal factors, such as memories or intrinsic preferences. The extent to which an agent's actions are "caused from within", as opposed to being externally driven, should depend on its sensor capacity as well as on environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents ("animats") equipped with small adaptive Markov Brains (MB) that evolve to solve a perceptual-categorization task under conditions that vary with regard to the agents' sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences ("what caused what?") in complex networks, we evaluate the direct causes of the animats' actions. In addition, we extend this framework to trace the causal chain ("causes of causes") leading to an animat's actions back in time, and we compare the obtained spatio-temporal causal histories across task conditions. We found that measures quantifying the extent to which an animat's actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with the defining aspects of the task conditions the animats evolved to thrive in.
    Comment: Submitted to and accepted at the ALIFE 2019 conference. Revised version: edits include adding more references to relevant work and clarifying minor points in response to reviewer comments.
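
    As a rough sketch of what quantifying "caused from within" can look like: the toy measure below is an assumption in the spirit of the paper's framework, not its actual-causation formalism. The OR mechanism, the clamping scheme, and the log-ratio measure are all illustrative choices.

```python
import numpy as np

# Toy mechanism: motor M_t = OR(S_{t-1}, H_{t-1}), where S is a sensor
# (external input) and H an internal memory element. How strongly does
# each parent's state bear on the action M_t = 1?
# Assumed measure: log2 p(m | parent clamped) / p(m), with the unclamped
# parents intervened on uniformly at random.

def p_motor_on(s=None, h=None):
    """P(M = 1) with the named parents clamped and the rest uniform."""
    states = [(si, hi) for si in (0, 1) for hi in (0, 1)
              if (s is None or si == s) and (h is None or hi == h)]
    return np.mean([float(si or hi) for si, hi in states])

baseline = p_motor_on()                            # 0.75
from_sensor = np.log2(p_motor_on(s=1) / baseline)  # sensor was on
from_within = np.log2(p_motor_on(h=1) / baseline)  # memory was on
print(from_sensor, from_within)  # both log2(4/3) ~ 0.415: equal strength
```

    For an asymmetric mechanism (say, one where the sensor alone determines the motor), the two strengths diverge; contrasts of this kind, traced across task conditions and back in time, are what the paper measures.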

    The Role of Conditional Independence in the Evolution of Intelligent Systems

    Systems are typically made from simple components, regardless of their overall complexity. While the function of each part is easily understood, higher-order functions are emergent properties and are notoriously difficult to explain. In networked systems, both digital and biological, each component receives inputs, performs a simple computation, and creates an output. When these components have multiple outputs, we intuitively assume that the outputs are causally dependent on the inputs but are themselves independent of each other given the state of their shared input. However, this intuition can be violated for components with probabilistic logic, as these typically cannot be decomposed into separate logic gates with one output each. This violation of conditional independence on the past system state is equivalent to instantaneous interaction: some of the information shared between the outputs does not come from the inputs and thus must have been created instantaneously. Here we compare evolved artificial neural systems with and without instantaneous interaction across several task environments. We show that systems without instantaneous interactions evolve faster, reach higher final levels of performance, and require fewer logic components to create a densely connected cognitive machinery.
    Comment: Original abstract submitted to the GECCO conference 2017, Berlin.
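
    The factorization test described above is easy to state concretely. The following is a minimal sketch (an illustration, not code from the paper): a two-output probabilistic component is decomposable into two independent one-output gates exactly when its joint output distribution factorizes, conditional on each input state. The two example gates are assumptions chosen to show one case of each.

```python
import numpy as np

# Joint distribution P(y1, y2 | x) for a component with one shared input x.
# Rows: input state x; columns: output pair (y1, y2) in order 00, 01, 10, 11.
# Gate A factorizes: each output is an independent noisy copy of x.
# Gate B does not: given x = 0 the outputs are perfectly correlated, so some
# of their shared information is created "instantaneously".
gate_a = np.array([
    [0.81, 0.09, 0.09, 0.01],   # x = 0: P(y=1) = 0.1 for each output
    [0.01, 0.09, 0.09, 0.81],   # x = 1: P(y=1) = 0.9 for each output
])
gate_b = np.array([
    [0.5, 0.0, 0.0, 0.5],       # x = 0: outputs equal but random
    [0.0, 0.5, 0.5, 0.0],       # x = 1: outputs opposite but random
])

def is_conditionally_independent(joint, tol=1e-9):
    """Check P(y1, y2 | x) == P(y1 | x) * P(y2 | x) for every input x."""
    for row in joint:
        p = row.reshape(2, 2)    # p[y1, y2]
        p_y1 = p.sum(axis=1)     # marginal over y2
        p_y2 = p.sum(axis=0)     # marginal over y1
        if not np.allclose(p, np.outer(p_y1, p_y2), atol=tol):
            return False
    return True

print(is_conditionally_independent(gate_a))  # True: two separate gates
print(is_conditionally_independent(gate_b))  # False: instantaneous interaction
```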

    PyPhi: A toolbox for integrated information theory

    Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi's functionality in the course of analyzing an example system, and then describe details of the algorithm's design and implementation. PyPhi can be installed with Python's package manager via the command 'pip install pyphi' on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi . Comprehensive and continually updated documentation is available at https://pyphi.readthedocs.io/ . The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users . A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html .
    Comment: 22 pages, 4 figures, 6 pages of appendices. Supporting information "S1 Calculating Phi" can be found in the ancillary files.
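
    A minimal usage sketch, following the example system from the package's documentation (API details and the reported value may differ across PyPhi versions):

```python
# pip install pyphi
import pyphi

# A small built-in example system of three binary elements.
network = pyphi.examples.basic_network()

# Fix the current state of the system (one binary value per node).
state = (1, 0, 0)

# Analyze the subsystem consisting of all three nodes.
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))

# Unfold the cause-effect structure and compute integrated information.
print(pyphi.compute.phi(subsystem))  # 2.3125 per the documentation
```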

    Consciousness and Complexity: Neurobiological Naturalism and Integrated Information Theory

    In this paper, we take a meta-theoretical stance and aim to compare and assess two conceptual frameworks that endeavor to explain phenomenal experience. In particular, we compare Feinberg & Mallatt’s Neurobiological Naturalism (NN) and Tononi and colleagues’ Integrated Information Theory (IIT), given that the former authors have pointed out some similarities between the two theories (Feinberg & Mallatt 2016c-d). To probe their similarity, we first give a general introduction to both frameworks. Next, we lay out a ground plan for carrying out our analysis. We then articulate a philosophical profile of NN and IIT, addressing their ontological commitments and epistemological foundations. Finally, we compare the two point-by-point, also discussing how they stand on the issue of artificial consciousness.

    Intelligent systems: towards a new synthetic agenda
