
    An optimization principle for deriving nonequilibrium statistical models of Hamiltonian dynamics

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. As in standard projection operator methods, a set of resolved variables is selected to capture the slow, macroscopic behavior of the system, and the family of quasi-equilibrium probability densities on phase space corresponding to these resolved variables is employed as a statistical model. The macroscopic dynamics of the mean resolved variables is determined by optimizing over paths of these probability densities. Specifically, a cost function is introduced that quantifies the lack-of-fit of such paths to the underlying microscopic dynamics; it is an ensemble-averaged, squared norm of the residual that results from submitting a path of trial densities to the Liouville equation. The evolution of the macrostate is estimated by minimizing the time integral of the cost function. The value function for this optimization satisfies the associated Hamilton-Jacobi equation, and it determines the optimal relation between the statistical parameters and the irreversible fluxes of the resolved variables, thereby closing the reduced dynamics. The resulting equations for the macroscopic variables have the generic form of governing equations for nonequilibrium thermodynamics, and they furnish a rational extension of the classical equations of linear irreversible thermodynamics beyond the near-equilibrium regime. In particular, the value function is a thermodynamic potential that extends the classical dissipation function and supplies the nonlinear relation between thermodynamic forces and fluxes.
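
The variational construction described in this abstract can be sketched in formulas. The following is a schematic rendering only: the abstract does not specify the weighted norm or the parametrization of the quasi-equilibrium densities, so both are left abstract here.

```latex
% Liouville equation: \partial_t \rho = \{H, \rho\}.
% Residual of a trial path \rho(t), zero iff the path solves Liouville exactly:
\[
  R[\rho] \;=\; \partial_t \rho \;+\; \{\rho, H\}.
\]
% Lack-of-fit cost: ensemble-averaged squared norm of the residual;
% the estimated macrostate evolution minimizes its time integral:
\[
  \sigma[\rho] \;=\; \big\| R[\rho] \big\|^{2}_{\rho}, \qquad
  \hat{\rho}(\cdot) \;=\; \operatorname*{arg\,min}_{\rho(\cdot)}
      \int_{t_0}^{t_1} \sigma[\rho(t)]\,\mathrm{d}t .
\]
% The value function of this optimization obeys the associated
% Hamilton--Jacobi equation and closes the reduced dynamics.
```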

    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
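
As a concrete instance of estimating information-theoretic quantities from (discretized) neural data, here is a minimal plug-in estimator of mutual information in bits. The function names are illustrative, not from the paper, and the plug-in estimator is known to be biased for small sample sizes.

```python
import numpy as np

def entropy_bits(counts):
    """Plug-in Shannon entropy in bits from a vector of counts."""
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)     # joint histogram of (x, y) pairs
    return (entropy_bits(joint.sum(axis=1))   # H(X)
            + entropy_bits(joint.sum(axis=0)) # H(Y)
            - entropy_bits(joint.ravel()))    # H(X, Y)
```

For perfectly correlated binary samples this returns 1 bit; for independent ones it returns 0.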

    Information driven self-organization of complex robotic behaviors

    Information theory is a powerful tool for expressing principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality, which prevents learning systems from scaling well.
    Comment: 29 pages, 12 figures
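
Predictive information is the mutual information between a process's past and future. As a minimal illustration (a sketch, not the paper's TiPI machinery), the one-step predictive information of a stationary Gaussian AR(1) process has the closed form -½ ln(1 - a²) nats, which a sample estimate based on the lag-1 correlation recovers:

```python
import numpy as np

def gaussian_mi_from_corr(r):
    """I(X;Y) in nats for jointly Gaussian X, Y with correlation r."""
    return -0.5 * np.log(1.0 - r * r)

# Simulate the AR(1) process x_{t+1} = a * x_t + eps_t.
rng = np.random.default_rng(0)
a, n = 0.8, 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + rng.standard_normal()

r_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
pi_est = gaussian_mi_from_corr(r_hat)   # sample estimate
pi_true = gaussian_mi_from_corr(a)      # closed form, about 0.511 nats
```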

    Optimal Estimation via Nonanticipative Rate Distortion Function and Applications to Time-Varying Gauss-Markov Processes

    In this paper, we develop finite-time horizon causal filters using nonanticipative rate distortion theory. We apply the developed theory to design optimal filters for time-varying multidimensional Gauss-Markov processes, subject to a mean square error fidelity constraint. We show that such filters are equivalent to the design of an optimal {encoder, channel, decoder}, which ensures that the error satisfies a fidelity constraint. Moreover, we derive a universal lower bound on the mean square error of any estimator of time-varying multidimensional Gauss-Markov processes in terms of conditional mutual information. Unlike classical Kalman filters, the developed filter is characterized by a reverse-waterfilling algorithm, which ensures that the fidelity constraint is satisfied. The theoretical results are demonstrated via illustrative examples.
    Comment: 35 pages, 6 figures, submitted for publication in SIAM Journal on Control and Optimization (SICON)
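
For context on the "waterfilling" idea: the classical (block-coding, anticipative) reverse water-filling for independent Gaussian sources assigns each component a distortion d_i = min(θ, σ_i²), with the water level θ chosen so the distortions sum to the budget D. This is only a sketch of that textbook procedure; the paper's nonanticipative algorithm is a different, causal variant.

```python
import numpy as np

def reverse_waterfill(variances, D, tol=1e-12):
    """Classical reverse water-filling for independent Gaussian sources.

    Returns (rate_in_bits, per-component distortions d) where
    d_i = min(theta, sigma_i^2) and theta is found by bisection so that
    sum(d_i) == D, with rate = 1/2 * sum(log2(sigma_i^2 / d_i)).
    """
    v = np.asarray(variances, dtype=float)
    lo, hi = 0.0, v.max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, v).sum() > D:
            hi = theta          # water level too high: total distortion > D
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    d = np.minimum(theta, v)
    rate = 0.5 * np.sum(np.log2(v / d))
    return rate, d
```

For variances [4, 1] and distortion budget D = 2 the water level is θ = 1, giving distortions [1, 1] and a rate of 1 bit.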

    Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes

    Exploiting the theory of state-space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology first to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by mainly redundant or synergistic information transfer persisting across multiple time scales, or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone.
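
The paper's multiscale state-space computation is involved, but the redundancy/synergy bookkeeping it builds on can be illustrated at a single scale: for jointly Gaussian variables, the interaction information II(X;Y;Z) = I(X;Y) - I(X;Y|Z) is available in closed form from covariance sub-determinants. The function names below are illustrative, not the paper's API.

```python
import numpy as np

def _det(cov, idx):
    # Determinant of the covariance sub-block for the given variable indices.
    return np.linalg.det(cov[np.ix_(idx, idx)])

def gaussian_mi(cov, ix, iy):
    """I(X;Y) in nats for jointly Gaussian variables with covariance `cov`."""
    return 0.5 * np.log(_det(cov, ix) * _det(cov, iy) / _det(cov, ix + iy))

def interaction_information(cov, ix, iy, iz):
    """II(X;Y;Z) = I(X;Y) - I(X;Y|Z); negative values indicate net synergy."""
    cmi = 0.5 * np.log(_det(cov, ix + iz) * _det(cov, iy + iz)
                       / (_det(cov, iz) * _det(cov, ix + iy + iz)))
    return gaussian_mi(cov, ix, iy) - cmi

# Example: Z = X + Y + noise with X independent of Y -> purely synergistic.
cov = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0],
                [1.0, 1.0, 2.5]])
ii = interaction_information(cov, [0], [1], [2])   # negative: synergy
```

Here I(X;Y) = 0 yet Z reveals their sum, so the interaction information is negative, the signature of synergistic source interaction discussed in the abstract.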