
    The approach towards equilibrium in a reversible Ising dynamics model -- an information-theoretic analysis based on an exact solution

    We study the approach towards equilibrium in a dynamic Ising model, the Q2R cellular automaton, with microscopic reversibility and conserved energy for an infinite one-dimensional system. Starting from a low-entropy state with positive magnetisation, we investigate how the system approaches the equilibrium characteristics given by statistical mechanics. We show that the magnetisation converges to zero exponentially. The reversibility of the dynamics implies that the entropy density of the microstates is conserved in the time evolution. Still, it appears as if equilibrium, with a higher entropy density, is approached. In order to understand this process, we solve the dynamics by formally proving how the information-theoretic characteristics of the microstates develop over time. With this approach we can show that an estimate of the entropy density based on finite-length statistics within microstates converges to the equilibrium entropy density. The process behind this apparent entropy increase is a dissipation of correlation information over increasing distances. It is shown that the average information-theoretic correlation length increases linearly in time, equivalent to a corresponding increase in excess entropy.
    Comment: 15 pages, 2 figures
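
    The abstract describes the mechanism in words only. As a rough, non-authoritative sketch of the kind of dynamics involved, the Python snippet below implements a standard 1D Q2R-style update (alternating sublattices, energy-neutral flips) and the finite-length block-entropy estimate h_k = H_k - H_{k-1}; the update scheme, lattice size, initial bias and block length are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from collections import Counter

def q2r_step(s):
    """One step of a 1D Q2R-style rule on a ring of +/-1 spins: update the
    even sublattice, then the odd one. A spin flips only if its two
    neighbours disagree, so each flip leaves E = -sum_i s[i]*s[i+1]
    unchanged, and each sub-step is its own inverse (the rule is reversible)."""
    for parity in (0, 1):
        left, right = np.roll(s, 1), np.roll(s, -1)
        on_sublattice = (np.arange(len(s)) % 2) == parity
        flip = on_sublattice & (left != right)        # energy-neutral flips only
        s = np.where(flip, -s, s)
    return s

def entropy_density_estimate(s, k):
    """Finite-length estimate of the entropy density (bits per site):
    h_k = H_k - H_{k-1}, where H_m is the block entropy of length-m blocks
    read from the single microstate s (with periodic wrap-around)."""
    def H(m):
        if m == 0:
            return 0.0
        ext = np.concatenate([s, s[:m - 1]])
        counts = Counter(tuple(ext[i:i + m]) for i in range(len(s)))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return float(-np.sum(p * np.log2(p)))
    return H(k) - H(k - 1)

rng = np.random.default_rng(0)
N = 4096
s = np.where(rng.random(N) < 0.9, 1, -1)              # low-entropy start, m > 0

for t in range(201):
    if t % 50 == 0:
        print(f"t={t:3d}  m={s.mean():+.3f}  "
              f"h_5 ~ {entropy_density_estimate(s, 5):.3f} bits/site")
    s = q2r_step(s)
```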

    OPTIMAL USE OF COMMUNICATION RESOURCES

    We study a repeated game with asymmetric information about a dynamic state of nature. In the course of the game, the better-informed player can communicate some or all of his information to the other. Our model covers costly and/or bounded communication. We characterize the set of equilibrium payoffs and contrast these with the communication equilibrium payoffs, which by definition entail no communication costs.
    Keywords: repeated games, communication, entropy
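
    The abstract states results rather than derivations, but the role of entropy among the keywords can be made concrete: for a Markovian state of nature, the entropy rate of the state process measures how many bits per period the better-informed player would need to transmit to keep the other player fully informed, which is what costly or bounded communication constrains. The sketch below computes that quantity for a hypothetical two-state chain; the transition matrix and this interpretation are illustrative assumptions, not results from the paper.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])   # eigenvector for eigenvalue 1
    return pi / pi.sum()

def entropy_rate(P):
    """Entropy rate of the Markov state process, in bits per period:
    H = -sum_i pi_i sum_j P[i, j] * log2(P[i, j])."""
    pi = stationary(P)
    logP = np.zeros_like(P)
    mask = P > 0
    logP[mask] = np.log2(P[mask])
    return float(-np.sum(pi[:, None] * P * logP))

# Hypothetical dynamic state of nature: two states, each persistent with prob. 0.9.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(f"entropy rate ~ {entropy_rate(P):.3f} bits per period")
```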

    Information Theory and Knowledge-Gathering

    It is assumed that human knowledge-building depends on a discrete sequential decision-making process subjected to a stochastic, information-transmitting environment. This environment randomly transmits Shannon-type information packets to the decision-maker, who examines each of them for relevancy and then determines his optimal choices. Using this set of relevant information packets, the decision-maker adapts, over time, to the stochastic nature of his environment and optimizes the subjective expected rate of growth of knowledge. The decision-maker's optimal actions lead to a decision function that involves his view of the subjective entropy of the environmental process and other important parameters at each stage of the process. Using this model of human behavior, one could create psychometric experiments, with computer simulation and real decision-makers playing programmed games, to measure the resulting human performance.
    Keywords: decision-making; dynamic programming; entropy; epistemology; information theory; knowledge; sequential processes; subjective probability
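
    The abstract describes the model verbally; as a non-authoritative sketch of one plausible reading, the snippet below has an agent screen incoming packets for relevancy, Bayes-update a subjective distribution over hidden environment states, and track the subjective entropy of that belief as it falls. The number of states, the emission probabilities and the relevancy screen are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the environment sits in one of K hidden states and emits
# noisy "information packets" (symbols). The decision-maker keeps a subjective
# distribution over states, updates it by Bayes' rule on each relevant packet,
# and tracks the subjective entropy of that belief.
K = 3                                    # hidden environment states (assumed)
emission = np.array([[0.7, 0.2, 0.1],    # P(symbol | state), one row per state
                     [0.2, 0.6, 0.2],
                     [0.1, 0.2, 0.7]])
true_state = 0
belief = np.full(K, 1.0 / K)             # uniform prior: maximal subjective entropy

def subjective_entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

for t in range(1, 21):
    symbol = rng.choice(3, p=emission[true_state])   # packet from the environment
    relevant = rng.random() < 0.8                    # crude relevancy screen (assumed)
    if relevant:
        belief = belief * emission[:, symbol]        # Bayes update on the packet
        belief /= belief.sum()
    if t % 5 == 0:
        print(f"t={t:2d}  belief={np.round(belief, 3)}  "
              f"H ~ {subjective_entropy(belief):.3f} bits")
```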