    Explaining Emergence

    Emergence is a salient notion in many fields. It refers to a phenomenon that appears surprisingly and that seems, at first sight, impossible to predict. That is why emergence has often been described as a subjective property, relative to the observer. Yet some mathematical systems with very simple, deterministic rules nevertheless show emergent behavior. Studying these systems sheds new light on the subject and makes it possible to define a new concept, computational irreducibility, which concerns behaviors that, although totally deterministic, cannot be predicted without simulating them. Computational irreducibility is thus a key to understanding emergent phenomena from an objective point of view that requires no mention of an observer.
    Comment: 13 pages, 15 figures, to appear in the forthcoming proceedings of the UM6P Science Week 2023 Complexity Summit
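    As a quick illustration of the kind of system the abstract describes (my sketch, not the paper's code), an elementary cellular automaton such as Wolfram's Rule 30 follows a trivially simple deterministic rule, yet its long-run pattern is, as far as is known, only obtainable by running the simulation step by step:

```python
# A minimal sketch (not from the paper): Wolfram's Rule 30, a simple
# deterministic rule commonly cited as showing emergent, seemingly
# unpredictable behavior.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells, with zero boundaries."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def simulate(n_steps, width=63):
    row = [0] * width
    row[width // 2] = 1          # single live cell in the middle
    for _ in range(n_steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    simulate(30)
```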

    Autopoiesis of the artificial: from systems to cognition

    In their seminal work on autopoiesis, Varela, Maturana, and Uribe begin by addressing the confusion between history-dependent and history-independent processes in the biological world. The former are particularly linked to evolution and ontogenesis, while the latter pertain to the organizational features of biological individuals. Opposing history-dependent and history-independent processes raises methodological challenges for explaining phenomena related to living systems and cognition; Varela, Maturana, and Uribe therefore reject this framework and propose their original theory of autopoietic organization, which emphasizes the strong complementarity of temporal and non-temporal phenomena and places the distinction between structure and organization at the core of the unity of living systems. I argue, however, that the same issue resurfaces, in different ways, in recent developments in the science of artificial intelligence (AI), giving rise to related concerns. Highly capable AI systems exist that can perform cognitive tasks, yet their internal workings, and the specific contributions of their components to the behavior of the system understood as a unified whole, remain largely uninterpretable. This article explores the connection between biological systems, cognition, and recent AI systems that could potentially be linked to autopoiesis and related concepts such as autonomy and organization. The aim is to assess the advantages and disadvantages of employing autopoiesis in the synthetic (artificial) explanation of biological cognitive systems and to determine whether, and how, the notion of autopoiesis can still be fruitful in this perspective.

    Computational Irreducibility and Computational Analogy

    In a previous paper [1], we provided a formal definition of the concept of computational irreducibility (CIR): the fact that, for a function f from N to N, it is impossible to compute f(n) without following approximately the same path as computing successively all the values f(i) from i = 1 to n. Our definition is based on the concept of enumerating Turing machines (E-Turing machines) and on the concept of approximation of E-Turing machines, for which we also gave a formal definition. Here, we make these definitions more precise through some modifications intended to improve the robustness of the concept. We then introduce a new concept, computational analogy, and prove some properties of the functions involved. Computational analogy is an equivalence relation that partitions the set of computable functions into classes whose members share the same properties regarding their CIR and their computational complexity.
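    A loose way to picture the enumerating machines the abstract relies on (my illustration, not the paper's formalism): if f is exposed only as the stream f(1), f(2), ..., then reading f(n) forces the computation through all earlier values:

```python
# A loose illustration (not the paper's formal E-Turing machines): an
# "enumerating" computation exposes f only as the stream of its successive
# values, so the only way to obtain f(n) is to consume the first n outputs.
from itertools import islice

def enumerator(step, start):
    """Yield start, step(start), step(step(start)), ... in order."""
    value = start
    while True:
        yield value
        value = step(value)

def nth_value(step, start, n):
    """Read f(n) by driving the enumerator through all earlier outputs."""
    return next(islice(enumerator(step, start), n, n + 1))

# Example: an iterated map with no obvious closed form for its nth iterate.
print(nth_value(lambda x: (x * x + 1) % 2_147_483_647, 2, 100))
```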

    Explanatory artificial intelligence (YAI): human-centered explanations of explainable AI and complex data

    In this paper we introduce a new class of software tools for delivering successful explanations of complex processes on top of basic Explainable AI (XAI) software systems. These tools, which we collectively call Explanatory AI (YAI) systems, enhance the quality of the basic output of an XAI by adopting a user-centred approach to explanation that can cater to the individual needs of the explainees, with measurable improvements in usability. Our approach is based on Achinstein's theory of explanations, where explaining is an illocutionary (i.e., broad yet pertinent and deliberate) act of pragmatically answering a question. Accordingly, user-centrality enters the equation through the observation that the overall amount of information generated by answering all questions can rapidly become overwhelming, and that individual users may need to explore only a few of them. We give the theoretical foundations of YAI, formally defining a user-centred explanatory tool and the space of all possible explanations it generates, the explanatory space. To this end, we frame the explanatory space as a hypergraph of knowledge and identify a set of heuristics and properties that can help approximate its decomposition into a tree-like representation for efficient, user-centred explanation retrieval. Finally, we provide some old and new empirical results to support our theory, showing that explanations are more than textual or visual presentations of the sole information provided by an XAI.
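    A toy sketch of the explanatory-space idea (the data and function names here are hypothetical, not from the paper): model the space as a graph linking aspects of an XAI output, then extract a tree rooted at the user's question so explanations can be explored a few at a time:

```python
# A toy sketch (names and structure are my own, hypothetical rendering of
# the paper's idea): decompose a graph of explanatory content into a tree
# rooted at the user's question, for incremental, user-centred retrieval.
from collections import deque

# Hypothetical explanatory space: each aspect points to related aspects.
explanatory_space = {
    "why was my loan denied?": ["income feature", "credit-history feature"],
    "income feature": ["feature importance", "counterfactual"],
    "credit-history feature": ["feature importance"],
    "feature importance": [],
    "counterfactual": [],
}

def explanation_tree(space, root):
    """BFS decomposition of the space into a tree for step-by-step retrieval."""
    tree, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        children = [c for c in space.get(node, []) if c not in seen]
        tree[node] = children
        seen.update(children)
        queue.extend(children)
    return tree

print(explanation_tree(explanatory_space, "why was my loan denied?"))
```

    The breadth-first cut is only one of many possible heuristics; the point is that a tree rooted at the user's question lets the explainee open branches on demand instead of facing the whole space at once.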

    Setting the demons loose: computational irreducibility does not guarantee unpredictability or emergence

    A phenomenon resulting from a computationally irreducible (or computationally incompressible) process is supposedly unpredictable except via simulation. This notion of unpredictability has been deployed to formulate recent accounts of computational emergence. Via a technical analysis, I show that computational irreducibility can establish the impossibility of prediction only with respect to maximum standards of precision. By articulating the graded nature of prediction, I show that unpredictability to maximum standards is not equivalent to being unpredictable in general. I conclude that computational irreducibility fails to fulfill its assigned philosophical roles in theories of computational emergence.
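    The graded-prediction point can be pictured as follows (my illustration, not the author's analysis): even when the exact microstate of a process is obtainable only by simulation, a coarse property of it may be predictable cheaply, to a lower standard of precision:

```python
# An illustrative sketch of graded prediction (my example, not the paper's):
# the exact bit pattern of a Rule 30 row may require full simulation, while
# a coarse property such as the density of ones can be estimated well by a
# cheap constant predictor, i.e., the process is predictable to a lower
# standard of precision without simulating anything.
import random

def rule30_step(cells):
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(501)]
for _ in range(200):
    row = rule30_step(row)

density = sum(row) / len(row)
print(f"exact row needs simulation; density after 200 steps: {density:.3f}")
print(f"cheap 'always 0.5' coarse prediction errs by {abs(density - 0.5):.3f}")
```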

    Unpredictability and computational irreducibility

    We explore several concepts for analyzing the intuitive notion of computational irreducibility and propose a robust formal definition, first in the field of cellular automata and then for any computable function f from N to N. We prove that, with a robust definition of what it means "to be unable to compute the nth step without following the same path as simulating the automaton or the function", computational irreducibility genuinely implies, as intuitively expected, that if the behavior of an object is computationally irreducible, no computation of its nth state can be faster than the simulation itself.
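    The claim has a familiar concrete analogue (my example, not the paper's construction): for an iterated hash chain f(n) = H(f(n-1)), no known method yields the nth link faster than applying H sequentially n times:

```python
# A concrete sketch of the intuition (my example, not the paper's proof):
# for an iterated hash chain, computing the nth state is believed to require
# walking the chain link by link, i.e., running the simulation itself.
import hashlib

def hash_chain(seed: bytes, n: int) -> bytes:
    """Compute the nth link by applying SHA-256 n times in sequence."""
    state = seed
    for _ in range(n):
        state = hashlib.sha256(state).digest()
    return state

print(hash_chain(b"seed", 1000).hex())
```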