
    Intrinsic Motivation Systems for Autonomous Mental Development

    Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot’s activities increases autonomously, and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
    Key words: active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values
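
    As a concrete illustration of the learning-progress drive described in this abstract, here is a minimal sketch, assuming a toy agent choosing among a handful of abstract "situations"; the class names, sliding-window smoothing, and skill-update rule are illustrative choices, not the paper's actual Intelligent Adaptive Curiosity architecture.

```python
import random
from collections import deque

# Minimal sketch of learning-progress-driven exploration (toy setup): the
# agent repeatedly picks the situation whose prediction error has been
# decreasing fastest, i.e., where learning progress is currently maximal.

class Situation:
    def __init__(self, ceiling, rate, noise):
        self.skill = 0.0        # how well this situation has been learned so far
        self.ceiling = ceiling  # best achievable skill (0 => nothing to learn)
        self.rate = rate        # how quickly practice improves skill
        self.noise = noise      # irreducible observation noise

    def practice(self):
        """Practice once and return the prediction error observed."""
        self.skill += self.rate * (self.ceiling - self.skill)
        return (self.ceiling - self.skill) + abs(random.gauss(0.0, self.noise))

def learning_progress(errors):
    """Decrease in mean prediction error between the older and newer halves
    of a sliding window; positive while a situation is being learned."""
    if len(errors) < 4:
        return float("inf")  # optimistic init: sample everything a few times first
    half = len(errors) // 2
    older = sum(list(errors)[:half]) / half
    newer = sum(list(errors)[half:]) / (len(errors) - half)
    return older - newer

situations = {
    "easy": Situation(ceiling=0.9, rate=0.10, noise=0.01),
    "hard": Situation(ceiling=0.9, rate=0.02, noise=0.01),
    "unlearnable": Situation(ceiling=0.0, rate=0.0, noise=0.20),  # pure noise
}
history = {name: deque(maxlen=20) for name in situations}

for step in range(300):
    # Greedily pick the situation with maximal current learning progress.
    name = max(situations, key=lambda n: learning_progress(history[n]))
    history[name].append(situations[name].practice())
```

    With these illustrative parameters the greedy choice should concentrate first on "easy", then migrate to "hard", while largely avoiding "unlearnable", mirroring the developmental trajectory the abstract describes.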

    Empirical Encounters with Computational Irreducibility and Unpredictability

    There are several forms of irreducibility in computing systems, ranging from undecidability to intractability to nonlinearity. This paper is an exploration of the conceptual issues that have arisen in the course of investigating speed-up and slowdown phenomena in small Turing machines. We present the results of a test that may spur experimental approaches to the notion of computational irreducibility. The test involves a systematic attempt to outrun the computation of a large number of small Turing machines (all 3- and 4-state, 2-symbol) by means of integer sequence prediction using a specialized function-finder program. This massive experiment prompts an investigation into rates of convergence of decision procedures and the decidability of sets, in addition to a discussion of the (un)predictability of deterministic computing systems in practice. We think this investigation constitutes a novel approach to the discussion of an epistemological question in the context of a computer simulation, and thus represents an interesting exploration at the boundary between philosophical concerns and computational experiments.
    Comment: 18 pages, 4 figures
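
    To make the idea of "outrunning" a computation concrete, here is a minimal sketch. It assumes a deliberately trivial 2-state, 2-symbol machine and a hand-rolled least-squares line fit standing in for the paper's specialized function finder; the transition-table encoding and the choice of halting time versus input length as the target sequence are illustrative.

```python
# Sketch: simulate a small 2-symbol Turing machine, record its halting time
# as a function of input length, then shortcut (outrun) that sequence with a
# least-squares line fit instead of running the machine further.

def run_tm(delta, tape, max_steps=10_000):
    """delta: {(state, symbol): (write, move, next_state)}; 'H' halts.
    Returns the number of steps taken, or None if max_steps is exceeded."""
    cells = dict(enumerate(tape))
    state, pos = 0, 0
    for step in range(max_steps):
        if state == "H":
            return step
        write, move, state = delta[(state, cells.get(pos, 0))]
        cells[pos] = write
        pos += move
    return None

delta = {
    (0, 1): (1, +1, 0),    # scan right across a block of ones
    (0, 0): (0, +1, "H"),  # fall off the end and halt
}

def fit_line(seq):
    """Least-squares line through (1, seq[0]), (2, seq[1]), ...: the
    'shortcut' that predicts later runtimes without simulating them."""
    xs = range(1, len(seq) + 1)
    n, sx, sy = len(seq), sum(xs), sum(seq)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, seq))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return lambda x: round(a * x + (sy - a * sx) / n)

seq = [run_tm(delta, [1] * n) for n in range(1, 8)]  # [2, 3, 4, 5, 6, 7, 8]
predict = fit_line(seq)
print([predict(x) for x in range(8, 11)])            # [9, 10, 11]
```

    For machines in the genuinely irreducible regime, no such closed-form shortcut extrapolates correctly; probing where that failure sets in is exactly what the paper's test does at scale.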

    Phase Transition and Strong Predictability

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former work [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008], where we introduced the notion of thermodynamic quantities into AIT. These quantities are real functions of temperature T > 0. The values of all the thermodynamic quantities diverge when T exceeds 1. This phenomenon corresponds to phase transition in statistical mechanics. In this paper we introduce the notion of strong predictability for an infinite binary sequence and then apply it to the partition function Z(T), which is one of the thermodynamic quantities in AIT. We then reveal a new computational aspect of the phase transition in AIT by showing the critical difference in the behavior of Z(T) between T = 1 and T < 1 in terms of the strong predictability of the base-two expansion of Z(T).
    Comment: 5 pages, LaTeX2e, no figures
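
    For orientation, here is the quantity in question under Tadaki's standard definition (a sketch; U denotes an optimal prefix-free machine and the sum ranges over its halting programs):

```latex
% Tadaki's partition function at temperature T:
Z(T) = \sum_{p \,\in\, \operatorname{dom} U} 2^{-|p|/T}
```

    At T = 1 this sum is Chaitin's halting probability Omega; for 0 < T < 1 each term is bounded by 2^{-|p|}, so Kraft's inequality for the prefix-free domain gives convergence, while for T > 1 the sum diverges, which is the phase transition whose computational character the paper analyzes.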

    Coarse-graining of cellular automata, emergence, and the predictability of complex systems

    We study the predictability of emergent phenomena in complex systems. Using nearest-neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing device, so questions about their long-term behavior are undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally, we argue that the large-scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large-scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large-scale simplicity as a pattern-formation mechanism in which large-scale patterns are forced upon the system by the simplicity of the rules that govern the large-scale dynamics.
    Comment: 18 pages, 9 figures
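
    A minimal sketch of the commutation test that underlies such coarse-grainings follows, assuming supercells of size N = 2 and a brute-force search over projections and candidate coarse rules. The rule number, the sampling check, and all names are illustrative; spot-checking random configurations is only a necessary condition, not the exhaustive verification a real construction requires.

```python
import itertools
import random

# A block projection P (pairs of fine cells -> one coarse cell) coarse-grains
# fine rule f to coarse rule g when projecting after N = 2 fine steps equals
# one coarse step after projecting: P(f^2(x)) == g(P(x)) for all x.

def step(config, rule):
    """One update of a nearest-neighbor binary CA (Wolfram numbering) on a ring."""
    n = len(config)
    return [(rule >> (4 * config[i - 1] + 2 * config[i] + config[(i + 1) % n])) & 1
            for i in range(n)]

def project(config, proj):
    """Map each adjacent pair of fine cells to one coarse cell via proj[(a, b)]."""
    return [proj[(config[i], config[i + 1])] for i in range(0, len(config), 2)]

def commutes(fine_rule, coarse_rule, proj, trials=100, width=16):
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(width)]
        fine = step(step(x, fine_rule), fine_rule)  # two fine steps
        if project(fine, proj) != step(project(x, proj), coarse_rule):
            return False
    return True

fine_rule = 146  # an arbitrary fine rule to try to coarse-grain
projections = [dict(zip(itertools.product((0, 1), repeat=2), bits))
               for bits in itertools.product((0, 1), repeat=4)]
found = [(g, tuple(p.values())) for g in range(256) for p in projections
         if len(set(p.values())) == 2 and commutes(fine_rule, g, p)]
print(found)
```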

    From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing

    In recent studies [1][13][12], Recurrent Neural Networks were used for generative processes, and their surprising performance can be explained by their ability to make good predictions. Data compression is likewise based on prediction. The question thus becomes whether a data compressor could perform as well as recurrent neural networks on natural language processing tasks; if so, the problem reduces to determining whether a compression algorithm is even more intelligent than a neural network on specific tasks related to human language. Along the way, we identify what we think is the fundamental difference between a data compression algorithm and a recurrent neural network.
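
    The premise that compression and prediction are two faces of the same quantity can be illustrated directly. Here is a minimal sketch, assuming a generic dictionary compressor (zlib) as the stand-in model rather than the specific compressor and RNN compared in the paper: score candidate continuations by how many extra compressed bytes they cost, and predict the cheapest one.

```python
import zlib

# Compression as prediction: the continuation that compresses best given the
# context is the one the compressor implicitly predicts.

def cost(text: bytes) -> int:
    return len(zlib.compress(text, 9))

def predict(context: str, candidates) -> str:
    ctx = context.encode()
    return min(candidates, key=lambda cand: cost(ctx + cand.encode()) - cost(ctx))

corpus = "the quick brown fox jumps over the lazy dog. " * 50
# "fox" extends a long repeated match, so it should come out cheapest.
print(predict(corpus + "the quick brown ", ["fox", "dog", "cat", "box"]))
```

    A serious comparison of the kind the paper runs would swap zlib for a stronger context-mixing compressor and pit it against a character-level RNN on the same corpus.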

    Informational and Causal Architecture of Discrete-Time Renewal Processes

    Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We use these formulae to analyze the output of the parametrized Simple Nonunifilar Source, generated by a simple two-state hidden Markov model, but with an infinite-state epsilon-machine presentation. All in all, the results lay the groundwork for analyzing processes with infinite statistical complexity and infinite excess entropy.
    Comment: 18 pages, 9 figures, 1 table; http://csc.ucdavis.edu/~cmg/compmech/pubs/dtrp.ht
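
    Here is a numerical sketch of two of these quantities, assuming a bounded interevent distribution F(n), n >= 1, and using the standard stationary "age" distribution pi(k) = Pr(T > k) / E[T]. Identifying ages with causal states is correct only when all ages are predictively distinct (for a memoryless, i.e., geometric, F they all collapse to a single state), so the entropy computed here is in general an upper bound on the statistical complexity.

```python
from math import log2

# Ages k = counts since the last event; pi(k) is their stationary
# distribution, and H[pi] is the candidate statistical complexity C_mu.

def renewal_complexity(F):
    """F: dict {n: Pr(interevent time = n)}, n >= 1, probabilities summing
    to 1. Returns (mean interevent time, age distribution, H[ages] in bits)."""
    mean = sum(n * p for n, p in F.items())
    survival = [sum(p for n, p in F.items() if n > k) for k in range(max(F))]
    pi = [w / mean for w in survival]                 # stationary age distribution
    entropy = -sum(p * log2(p) for p in pi if p > 0)  # bits of stored history
    return mean, pi, entropy

# Example: interevent times uniform on {1, 2, 3}.
mean, pi, H = renewal_complexity({1: 1/3, 2: 1/3, 3: 1/3})
print(mean, pi, H)   # 2.0, [0.5, 0.333..., 0.166...], about 1.46 bits
```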

    P3b reflects periodicity in linguistic sequences

    Temporal predictability is thought to affect stimulus processing by facilitating the allocation of attentional resources. Recent studies have shown that periodicity of a tonal sequence results in a decreased peak latency and a larger amplitude of the P3b compared with temporally random (i.e., aperiodic) sequences. We investigated whether this also applies to sequences of linguistic stimuli (syllables), although speech is usually aperiodic. We compared aperiodic syllable sequences with two temporally regular conditions: in one condition, the interval between syllable onsets was fixed, whereas in the second the interval between the syllables’ perceptual centers (p-centers) was kept constant. Event-related potentials were assessed in 30 adults who were instructed to detect irregularities in the stimulus sequences. We found larger P3b amplitudes for both temporally predictable conditions compared with the aperiodic condition, and a shorter P3b latency in the p-center condition than in both other conditions. These findings demonstrate that even in acoustically more complex sequences such as syllable streams, temporal predictability facilitates the processing of deviant stimuli. Furthermore, we provide the first electrophysiological evidence for the relevance of the p-center concept in linguistic stimulus processing.

    An iterative algorithm for parametrization of shortest length shift registers over finite rings

    The construction of shortest feedback shift registers for a finite sequence S_1,...,S_N is considered over the finite ring Z_{p^r}. A novel algorithm is presented that yields a parametrization of all shortest feedback shift registers for the sequence of numbers S_1,...,S_N, thus solving an open problem in the literature. The algorithm iteratively processes each number, starting with S_1, and constructs at each step a particular type of minimal Gröbner basis. The construction involves a simple update rule at each step, which leads to computational efficiency. It is shown that the algorithm simultaneously computes a similar parametrization for the reciprocal sequence S_N,...,S_1.
    Comment: Submitted
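
    For readers unfamiliar with the problem, the field case fixes intuitions: over GF(p) (i.e., r = 1), the classical Berlekamp-Massey algorithm iteratively finds one shortest linear feedback shift register for a sequence. The sketch below is that baseline only; parametrizing all shortest registers, and doing so over the ring Z_{p^r}, is precisely what Berlekamp-Massey does not provide and what the paper's Gröbner-basis construction adds.

```python
def berlekamp_massey(s, p):
    """Shortest LFSR for sequence s over GF(p), p prime. Returns (c, L) with
    c[0] = 1 and s[n] = -(c[1]*s[n-1] + ... + c[L]*s[n-L]) mod p for n >= L."""
    c, b = [1], [1]      # current / previous connection polynomials
    L, m, bb = 0, 1, 1   # register length, shift since last update, last discrepancy
    for n in range(len(s)):
        # Discrepancy: how far the current register's prediction misses s[n].
        d = (s[n] + sum(c[i] * s[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        t = c[:]
        coef = d * pow(bb, p - 2, p) % p      # d / bb in GF(p)
        c += [0] * (len(b) + m - len(c))      # make room for x^m * b(x)
        for i, bi in enumerate(b):
            c[i + m] = (c[i + m] - coef * bi) % p
        if 2 * L <= n:                        # register too short: lengthen it
            L, b, bb, m = n + 1 - L, t, d, 1
        else:
            m += 1
    return c[:L + 1], L

# Example over GF(5): a sequence satisfying s[n] = s[n-1] + 2*s[n-2] (mod 5).
seq = [1, 1]
for _ in range(10):
    seq.append((seq[-1] + 2 * seq[-2]) % 5)
print(berlekamp_massey(seq, 5))   # ([1, 4, 3], 2): c(x) = 1 + 4x + 3x^2
```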