1,371 research outputs found

    Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes

    I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.
    Comment: 35 pages, 3 figures, based on KES 2008 keynote and ALT 2007 / DS 2007 joint invited lecture
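
    The compression-progress idea can be sketched in a few lines. The sketch below is illustrative only, not Schmidhuber's implementation: it uses `zlib` as a crude stand-in for the observer's computationally limited compressor, and measures "progress" as the bytes saved by encoding a new observation jointly with the history rather than separately. The names `description_length` and `compression_progress` are hypothetical.

    ```python
    import zlib

    def description_length(data: bytes) -> int:
        # Compressed size in bytes: a crude stand-in for the observer's
        # subjective, computationally limited compressor.
        return len(zlib.compress(data, 9))

    def compression_progress(history: bytes, new_obs: bytes) -> int:
        # Intrinsic reward proxy: bytes saved by encoding the history and
        # the new observation together instead of separately, i.e. how
        # much regularity the new data shares with what was already seen.
        separate = description_length(history) + description_length(new_obs)
        joint = description_length(history + new_obs)
        return separate - joint

    # A regular stream whose pattern the compressor can exploit yields
    # positive progress and is, in this sense, "interesting".
    print(compression_progress(b"abab" * 100, b"abab" * 10))
    ```

    On data with no shared regularity the saving shrinks toward zero, matching the claim that only not-yet-known regularity, not noise, is interesting.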

    The Illusion of Internal Joy

    J. Schmidhuber proposes a "theory of fun & intrinsic motivation & creativity" that he has developed over the last two decades. This theory is precise enough to allow the programming of artificial agents exhibiting the requested behaviors. Schmidhuber's theory relies on an explicit 'internal joy drive' implemented by an 'information compression indicator'. In this paper, we show that this indicator is unnecessary once the 'brain' implementation involves associative memories, i.e., hierarchical cortical maps. The 'compression factor' is replaced by the 'smallest common activation pattern' in our framework, with the advantage of an immediate and plausible neural implementation. We conclude that 'internal joy' is an illusion. This reminds us of the eliminative materialist position, which claims that 'free will' is likewise an illusion.

    Between order and chaos: The quest for meaningful information


    VIME: Variational Information Maximizing Exploration

    Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks, which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance than heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
    Comment: Published in Advances in Neural Information Processing Systems 29 (NIPS), pages 1109-111
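
    The core of VIME's reward shaping, adding an information-gain bonus to the extrinsic reward, can be illustrated without a Bayesian neural network. The toy below is a sketch under simplifying assumptions: the agent's belief is a discrete posterior over a single unknown transition probability (a stand-in for the paper's variational posterior over dynamics-model weights), and the bonus is the exact KL divergence between the belief after and before observing a transition. All names (`vime_reward`, `eta`, the grid) are illustrative, not the paper's API.

    ```python
    import math

    # Discrete belief over an unknown transition probability p.
    GRID = [i / 20 for i in range(1, 20)]

    def bayes_update(posterior, outcome):
        # outcome = 1 if the stochastic transition "succeeded", else 0.
        liks = [p if outcome else (1.0 - p) for p in GRID]
        unnorm = [w * l for w, l in zip(posterior, liks)]
        z = sum(unnorm)
        return [u / z for u in unnorm]

    def kl(post, prior):
        # Information gain: KL(belief after || belief before).
        return sum(q * math.log(q / p) for q, p in zip(post, prior) if q > 0)

    def vime_reward(extrinsic, posterior, outcome, eta=0.1):
        # Shaped reward r' = r + eta * information gain, as in VIME's
        # modification of the MDP reward function.
        new_post = bayes_update(posterior, outcome)
        return extrinsic + eta * kl(new_post, posterior), new_post

    belief = [1.0 / len(GRID)] * len(GRID)  # uniform prior
    r, belief = vime_reward(0.0, belief, outcome=1)
    print(r)  # positive bonus even with zero extrinsic reward
    ```

    As the belief concentrates, further transitions yield smaller KL bonuses, so the agent is pushed toward dynamics it has not yet pinned down, which is exactly what makes the bonus useful under sparse extrinsic rewards.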