17,379 research outputs found

    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions, yielding a Gibbs measure under some restriction that gives the probability that a system will be in a certain state relative to the other elements of the distribution. Because classical entropy-based Maxent collapses distinct degrees of randomness and pseudo-randomness into a single case, here we take into consideration the generative mechanism of the systems in the ensemble, separating objects that may comply with the principle under some restriction and whose entropy is maximal, yet can be generated recursively, from those that are genuinely algorithmically random, thereby offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this, we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides insight into how the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further study of the origin and consequences of these asymmetries, of reprogrammability, and of computation. (Comment: 30 pages)
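
    As a rough, hedged illustration of the MARPA idea rather than the authors' implementation, the sketch below grows a graph by always adding the edge that most increases a compression-based proxy for algorithmic randomness; the zlib estimator, the adjacency-matrix encoding and all parameters are assumptions made here for brevity (the paper relies on algorithmic-probability-based measures).

        # Illustrative sketch only: greedy "maximal algorithmic randomness"
        # edge attachment, with compressed size as a crude stand-in for
        # algorithmic complexity (an assumption; the paper uses
        # algorithmic-probability-based estimators, not zlib).
        import itertools
        import zlib

        def complexity_estimate(adj):
            # Flatten the adjacency matrix to bytes and compress it; a longer
            # compressed output counts as "more random" under this proxy.
            flat = bytes(bit for row in adj for bit in row)
            return len(zlib.compress(flat, 9))

        def marpa_like_growth(n_nodes, n_edges):
            adj = [[0] * n_nodes for _ in range(n_nodes)]
            for _ in range(n_edges):
                candidates = [(i, j)
                              for i, j in itertools.combinations(range(n_nodes), 2)
                              if not adj[i][j]]

                def score(edge):
                    i, j = edge
                    adj[i][j] = adj[j][i] = 1          # tentatively add the edge
                    c = complexity_estimate(adj)
                    adj[i][j] = adj[j][i] = 0          # undo
                    return c

                i, j = max(candidates, key=score)      # keep the most "random" edge
                adj[i][j] = adj[j][i] = 1
            return adj

        if __name__ == "__main__":
            g = marpa_like_growth(12, 20)
            print(sum(map(sum, g)) // 2, "edges placed")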

    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process from an initial instance to the final acceptance is motivated by the observation that many natural processes complete only in super-polynomial time, which is associated here with the class NP. An irreversible process with three or more degrees of freedom is found to be intractable because, in physical terms, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions affect the sets of decisions subsequently available. The state space of a non-deterministic finite automaton evolves due to the computation itself; hence it cannot be efficiently contracted by a deterministic finite automaton, which would arrive at a solution only in super-polynomial time. The solution of an NP problem is itself verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history, so it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. It is thus concluded that the class P set of states is inherently smaller than the class NP set. Since the computational time to contract a given set is proportional to dissipation, the computational complexity class P is a subset of NP. (Comment: 16 pages, 7 figures)
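
    The remark about contracting a non-deterministic automaton into a deterministic one can be made concrete with the textbook subset construction, whose state count may grow exponentially; the toy automaton below (a standard example, not taken from the paper) accepts strings whose k-th symbol from the end is 'a' and needs roughly 2**k deterministic states.

        # Standard subset construction on a toy NFA (k+1 states) whose minimal
        # DFA needs about 2**k states -- a concrete case of a history-dependent
        # state space that cannot be contracted without exponential cost.
        from itertools import chain

        def nfa_kth_from_end(k):
            # State 0 loops on both symbols and "guesses" the k-th-from-last 'a';
            # states 1..k then count down; state k is accepting.
            delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
            for s in range(1, k):
                delta[(s, 'a')] = {s + 1}
                delta[(s, 'b')] = {s + 1}
            return delta, {0}, {k}

        def determinise(delta, start, alphabet=('a', 'b')):
            # Collect all reachable subsets; their count is the DFA state count.
            start_set = frozenset(start)
            seen, todo = {start_set}, [start_set]
            while todo:
                current = todo.pop()
                for symbol in alphabet:
                    nxt = frozenset(chain.from_iterable(
                        delta.get((s, symbol), ()) for s in current))
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return seen

        if __name__ == "__main__":
            for k in range(2, 9):
                delta, start, _accepting = nfa_kth_from_end(k)
                print(k, "->", len(determinise(delta, start)))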

    Reaction kinetics in open reactors and serial transfers between closed reactors

    Kinetic theory and thermodynamics of reaction networks are extended to the out-of-equilibrium dynamics of continuous-flow stirred tank reactors (CSTR) and serial transfers. On the basis of their stoichiometric matrix, the conservation laws and the cycles of the network are determined for both dynamics. It is shown that the CSTR and serial-transfer dynamics are equivalent in the limit where the time interval between transfers tends to zero proportionally to the ratio of the fractions of fresh to transferred solution. These results are illustrated with a finite cross-catalytic reaction network and an infinite reaction network describing mass exchange between polymers. Serial-transfer dynamics is typically used in molecular evolution experiments in the context of research on the origins of life. The present study sheds new light on the role played by the serial-transfer parameters in these experiments. (Comment: 11 pages, 7 figures)
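
    A purely numerical illustration of that limit (not the paper's reaction networks) is sketched below for a single autocatalytic reaction A + X -> 2X: a CSTR with dilution rate k0 is compared with serial transfers in which a fresh fraction f = k0*interval replaces the mixture after every interval; the rate constant, concentrations and Euler integration are assumptions chosen only for this sketch.

        # Toy comparison: CSTR versus serial transfers for A + X -> 2X.
        # As the transfer interval shrinks (with fresh fraction f = k0*interval),
        # the serial-transfer trajectory approaches the CSTR steady state
        # (here a = k0/k = 0.2, x = 0.8). All numbers are illustrative.
        k, k0, A_in = 1.0, 0.2, 1.0
        dt = 1e-3  # Euler step

        def step_closed(a, x, tau):
            # Closed-reactor kinetics (no in/out flow) for a time tau.
            t = 0.0
            while t < tau:
                r = k * a * x
                a, x = a - r * dt, x + r * dt
                t += dt
            return a, x

        def cstr(a, x, t_end):
            t = 0.0
            while t < t_end:
                r = k * a * x
                a += (k0 * (A_in - a) - r) * dt
                x += (r - k0 * x) * dt
                t += dt
            return a, x

        def serial_transfer(a, x, t_end, interval):
            f = k0 * interval  # fresh fraction matched to the CSTR dilution rate
            t = 0.0
            while t < t_end:
                a, x = step_closed(a, x, interval)
                a = f * A_in + (1 - f) * a  # transfer: dilute with fresh medium
                x = (1 - f) * x
                t += interval
            return a, x

        if __name__ == "__main__":
            print("CSTR steady state:", cstr(1.0, 0.05, 200.0))
            for interval in (2.0, 0.5, 0.1):
                print("serial transfers, interval =", interval,
                      serial_transfer(1.0, 0.05, 200.0, interval))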

    Thermodynamics of Neutral Protein Evolution

    Naturally evolving proteins gradually accumulate mutations while continuing to fold to thermodynamically stable native structures. This process of neutral protein evolution is an important mode of genetic change and forms the basis for the molecular clock. Here we present a mathematical theory that predicts the number of accumulated mutations, the index of dispersion, and the distribution of stabilities in an evolving protein population from knowledge of the stability effects (ΔΔG values) of single mutations. Our theory quantitatively describes how neutral evolution leads to marginally stable proteins, and provides formulae for calculating how fluctuations in stability cause an overdispersion of the molecular clock. It also shows that the structural influences on the rate of sequence evolution observed in earlier simulations can be calculated using only the single-mutation ΔΔG values. We consider both the case where the product of the population size and mutation rate is small and the case where this product is large, and show that in the latter case proteins evolve excess mutational robustness that is manifested by extra stability and increases the rate of sequence evolution. Our basic method is to treat protein evolution as a Markov process constrained by a minimal requirement for stable folding, enabling an evolutionary description of the proteins solely in terms of the experimentally measurable ΔΔG values. All of our theoretical predictions are confirmed by simulations with model lattice proteins. Our work provides a mathematical foundation for understanding how protein biophysics helps shape the process of evolution.
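
    A minimal sketch of such a constrained Markov process, using a made-up Gaussian ΔΔG distribution and stability threshold in place of the paper's lattice-protein values, might look like this; it reproduces qualitatively the drift toward marginal stability.

        # Minimal sketch of neutral evolution as a Markov chain on stability:
        # each proposed mutation shifts the folding free energy dG by a random
        # ddG and is accepted only if the protein remains below a stability
        # threshold (more negative dG = more stable). The Gaussian ddG
        # distribution and all numerical values are illustrative assumptions.
        import random

        def evolve(generations, dg0=-5.0, dg_threshold=-1.0,
                   ddg_mean=1.0, ddg_sd=1.7, seed=0):
            rng = random.Random(seed)
            dg, accepted, trajectory = dg0, 0, []
            for _ in range(generations):
                ddg = rng.gauss(ddg_mean, ddg_sd)   # most mutations destabilise
                if dg + ddg <= dg_threshold:        # neutral constraint: still folds
                    dg += ddg
                    accepted += 1
                trajectory.append(dg)
            return accepted, trajectory

        if __name__ == "__main__":
            accepted, trajectory = evolve(10_000)
            print("substitutions accepted:", accepted)
            print("late-time mean stability:", sum(trajectory[-1000:]) / 1000)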

    Thermodynamic forces, flows, and Onsager coefficients in complex networks

    We present the Onsager formalism applied to random networks with arbitrary degree distribution. Using the well-known methods of non-equilibrium thermodynamics, we identify the thermodynamic forces and their conjugate flows induced in networks as a result of a single-node degree perturbation. The forces and flows can be understood as the response of the system to events such as random removal of nodes or intentional attacks on them. Finally, we show that cross effects (such as thermodiffusion or thermoelectric phenomena), in which one force may give rise not only to its own conjugate flow but to many other flows, can also be observed in complex networks. (Comment: 4 pages, 2 figures)
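
    For reference, the linear-response structure invoked here is the standard Onsager form (the network-specific definitions of the forces and flows are given in the paper; the generic relations below are textbook material): each flow is a linear combination of all forces, the coefficient matrix is symmetric, and the off-diagonal coefficients are what produce cross effects such as thermodiffusion.

        J_i = \sum_j L_{ij} X_j , \qquad L_{ij} = L_{ji} , \qquad
        \sigma = \sum_i J_i X_i \ge 0 ,

    where the J_i are the flows conjugate to the forces X_i, the L_{ij} are the Onsager coefficients, and \sigma is the entropy production.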

    Andrzej Pekalski networks of scientific interests with internal degrees of freedom through self-citation analysis

    Old and recent theoretical works by Andrzej Pekalski (APE) are recalled as possible sources of interest for describing network formation and clustering in complex (scientific) communities through self-organisation and percolation processes. Emphasis is placed on APE's self-citation network over four decades. The method is the one used for detecting scientists' field mobility by focusing on an author's self-citation, co-authorship and article-topic networks, as in [1,2]. It is shown that APE's self-citation patterns reveal important information about APE's interest in research topics over time, as well as APE's engagement with different scientific topics and different networks of collaboration. The network's interesting complexity results from "degrees of freedom" and external fields, leading to a so-called internal shock resistance. It is found that APE's network of scientific interests splits into independent clusters and evolves through rare or drastic events, as in irreversible "preferential attachment" processes, similar to those found in the usual phase transitions of mechanics and thermodynamics. (Comment: 7 pages, 1 table, 44 references, submitted to Int J Mod Phys)