
    All Else Being Equal Be Empowered

    The original publication is available at www.springerlink.com. Copyright Springer. DOI: 10.1007/11553090_75
    The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case-by-case basis. Inspired by examples from the animal kingdom, the social sciences and games, we propose empowerment, a rather universal function, defined as the information-theoretic capacity of an agent’s actuation channel. The concept applies to any sensorimotoric apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable due to the coupling of sensors and actuators via the environment.
    Peer reviewed
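    Since empowerment is defined as the capacity of the actuation channel, it can be computed for any discrete channel with the standard Blahut–Arimoto iteration. The sketch below is an illustration of that definition, not code from the paper; the function name and the matrix convention (rows index actions, columns index resulting sensor states) are my own assumptions.

    ```python
    import numpy as np

    def empowerment(p_s_given_a, iters=200):
        """Empowerment in bits: capacity of the actuation channel p(s'|a),
        estimated with the Blahut-Arimoto iteration. Rows index actions,
        columns index resulting sensor states; each row must sum to 1."""
        def kl_bits(p, q):
            mask = p > 0
            return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

        n_actions = p_s_given_a.shape[0]
        p_a = np.full(n_actions, 1.0 / n_actions)  # start from a uniform action policy
        for _ in range(iters):
            p_s = p_a @ p_s_given_a                # sensor-state marginal induced by p_a
            d = np.array([kl_bits(row, p_s) for row in p_s_given_a])
            p_a *= 2.0 ** d                        # reweight actions toward capacity
            p_a /= p_a.sum()
        p_s = p_a @ p_s_given_a
        return float(sum(p_a[a] * kl_bits(p_s_given_a[a], p_s)
                         for a in range(n_actions)))
    ```

    For a noiseless actuation channel with four actions leading to four distinct sensor states (`np.eye(4)`), this returns 2 bits; noise in the sensorimotor loop reduces the value, matching the intuition that empowerment measures how much an agent's actions can still be distinguished through its sensors.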

    A framework for the local information dynamics of distributed computation in complex systems

    The nature of distributed computation has often been described in terms of the component operations of universal computation: information storage, transfer and modification. We review the first complete framework that quantifies each of these individual information dynamics on a local scale within a system, and describes the manner in which they interact to create non-trivial computation where "the whole is greater than the sum of the parts". We describe the application of the framework to cellular automata, a simple yet powerful model of distributed computation. This is an important application, because the framework is the first to provide quantitative evidence for several important conjectures about distributed computation in cellular automata: that blinkers embody information storage, particles are information transfer agents, and particle collisions are information modification events. The framework is also shown to contrast the computations conducted by several well-known cellular automata, highlighting the importance of information coherence in complex computation. The results reviewed here provide important quantitative insights into the fundamental nature of distributed computation and the dynamics of complex systems, as well as impetus for the framework to be applied to the analysis and design of other systems.
    Comment: 44 pages, 8 figures
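    One of the three local measures, local active information storage, can be sketched with plug-in probability estimates on an elementary cellular automaton. This is a minimal illustration of the idea, not the paper's reference implementation (that role is played by toolkits such as JIDT); the history length `k=4` and the pooling of counts over all cells are simplifying assumptions.

    ```python
    import numpy as np
    from collections import Counter

    def ca_run(rule, init, steps):
        """Evolve an elementary CA with periodic boundary; returns a (steps+1, N) array."""
        table = [(rule >> i) & 1 for i in range(8)]
        rows = [np.asarray(init)]
        for _ in range(steps):
            s = rows[-1]
            idx = 4 * np.roll(s, 1) + 2 * s + np.roll(s, -1)
            rows.append(np.array([table[i] for i in idx]))
        return np.array(rows)

    def local_active_storage(ca, k=4):
        """Local AIS a(i, t) = log2 [ p(past, next) / (p(past) p(next)) ] with a
        length-k past, estimated by plug-in counts pooled over all cells."""
        T, N = ca.shape
        joint, past, nxt = Counter(), Counter(), Counter()
        samples = []
        for t in range(k, T):
            for i in range(N):
                p, x = tuple(ca[t - k:t, i]), ca[t, i]
                joint[(p, x)] += 1; past[p] += 1; nxt[x] += 1
                samples.append((p, x))
        n = len(samples)
        return np.array([np.log2(joint[(p, x)] * n / (past[p] * nxt[x]))
                         for p, x in samples]).reshape(T - k, N)
    ```

    On a quiescent (all-zero) configuration every local value is zero, while on a rule-110 run from a random initial state the measure is positive in regular background regions and can go negative where moving structures disturb a cell's predictable history, which is the qualitative signature the framework uses to identify storage.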

    Complementarity in classical dynamical systems

    The concept of complementarity, originally defined for non-commuting observables of quantum systems with states of non-vanishing dispersion, is extended to classical dynamical systems with a partitioned phase space. Interpreting partitions in terms of ensembles of epistemic states (symbols) with corresponding classical observables, it is shown that such observables are complementary to each other with respect to particular partitions unless those partitions are generating. This explains why symbolic descriptions based on an ad hoc partition of an underlying phase space description should generally be expected to be incompatible. Related approaches with different background and different objectives are discussed.
    Comment: 18 pages, no figures
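    To make the special role of a generating partition concrete, here is a small illustration of my own (not from the paper): for the doubling map x → 2x mod 1, the binary partition at 1/2 is generating, so the symbol sequence of an orbit determines the initial state — the symbols are exactly its binary digits, and refining the partition along the orbit loses nothing.

    ```python
    def doubling(x):
        """The doubling (shift) map x -> 2x mod 1 on the unit interval."""
        return (2.0 * x) % 1.0

    def symbolic_orbit(x, n):
        """First n symbols under the binary partition {[0, 1/2), [1/2, 1)}."""
        s = []
        for _ in range(n):
            s.append(0 if x < 0.5 else 1)
            x = doubling(x)
        return s

    def refine(symbols):
        """Intersecting partition cells along the orbit recovers the initial
        state: the symbols are its binary expansion, so sum b_i * 2^-(i+1)."""
        return sum(b / 2.0 ** (i + 1) for i, b in enumerate(symbols))
    ```

    Twenty symbols pin down the initial condition to within 2^-20; with an ad hoc partition whose boundaries are not dynamically adapted, distinct states can share every symbol, which is the incompatibility of symbolic descriptions the abstract refers to.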

    Unsupervised model-free representation learning

    Numerous control and learning problems face the situation where sequences of high-dimensional, highly dependent data are available, but little or no feedback is provided to the learner. In such situations it may be useful to find a concise representation of the input signal that preserves as much as possible of the relevant information. In this work we are interested in problems where the relevant information lies in the time-series dependence. Thus, the problem can be formalized as follows. Given a series of observations X0, ..., Xn coming from a large (high-dimensional) space X, find a representation function f mapping X to a finite space Y such that the series f(X0), ..., f(Xn) preserves as much information as possible about the original time-series dependence in X0, ..., Xn. For stationary time series, the function f can be selected as the one maximizing the time-series information I∞(f) = h0(f(X)) − h∞(f(X)), where h0(f(X)) is the Shannon entropy of f(X0) and h∞(f(X)) is the entropy rate of the time series f(X0), ..., f(Xn), .... In this paper we study the functional I∞(f) from the learning-theoretic point of view. Specifically, we provide some uniform approximation results, and study the behaviour of I∞(f) in the problem of optimal control.
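    The objective I∞(f) = h0 − h∞ can be estimated for an already-encoded finite-alphabet series with plain plug-in counts, approximating the entropy rate by the conditional block entropy H_{k+1} − H_k. This is a minimal sketch of the quantity being maximized, not the paper's estimator; the block length `k=3` is an arbitrary choice.

    ```python
    import math
    from collections import Counter

    def block_entropy(seq, k):
        """Shannon entropy (bits) of the empirical distribution of length-k blocks."""
        counts = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def ts_information(seq, k=3):
        """Plug-in estimate of I_inf = h0 - h_inf for an encoded series
        f(X0), ..., f(Xn), with h_inf approximated by H_{k+1} - H_k."""
        h0 = block_entropy(seq, 1)
        h_rate = block_entropy(seq, k + 1) - block_entropy(seq, k)
        return h0 - h_rate
    ```

    A perfectly alternating series has h0 = 1 bit and entropy rate 0, so I∞ ≈ 1: every bit of marginal entropy is time-series dependence. An i.i.d. fair-coin series has h0 ≈ h∞, so the estimate is near 0 (up to plug-in bias), reflecting that no representation can extract temporal structure that is not there.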