    A Toom rule that increases the thickness of sets

    Toom's north-east-self voting cellular automaton rule R is known to suppress small minorities. A variant, which we call R^+, is also known to turn an arbitrary initial configuration into a homogeneous one (without changing the ones that were homogeneous to start with). Here we show that R^+ always increases a certain property of sets called thickness. This result is intended as a step towards a proof of the fast convergence towards consensus under R^+. The latter is observable experimentally, even in the presence of some noise. Comment: 16 pages, 8 figures
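
    As a rough illustration of the dynamics described above, the following sketch (assumptions: Python with NumPy, our own function name and grid orientation) implements the plain north-east-centre majority vote, i.e. the rule R rather than the variant R^+, and shows it erasing a small minority island.

        import numpy as np

        def toom_nec_step(grid):
            # One synchronous step of the north-east-centre majority vote on a
            # 0/1 grid with periodic boundaries: each cell adopts the majority
            # value among itself, the cell above it and the cell to its right.
            north = np.roll(grid, 1, axis=0)    # cell directly above (row i-1)
            east = np.roll(grid, -1, axis=1)    # cell directly to the right (column j+1)
            return ((grid + north + east) >= 2).astype(grid.dtype)

        # A small island of 1s in a sea of 0s is eroded within a few steps.
        g = np.zeros((16, 16), dtype=int)
        g[6:9, 6:9] = 1
        for _ in range(10):
            g = toom_nec_step(g)
        print(g.sum())   # prints 0: the minority has been suppressed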

    Algorithmic statistics: forty years later

    Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. After this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory, a notion of complexity of a finite object (i.e., the amount of information in the object) is introduced; it assigns to every object a number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object, a curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information (Kolmogorov complexity) theory. Comment: Missing proofs added
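
    For orientation, one concrete curve of the kind mentioned above is Kolmogorov's structure function; the following is a standard formulation (C denotes plain Kolmogorov complexity, and a finite set A containing x plays the role of a model):

        h_x(\alpha) \;=\; \min\bigl\{\, \log_2 |A| \;:\; x \in A,\ A \text{ a finite set},\ C(A) \le \alpha \,\bigr\}

    Roughly, A is a good model for x when the corresponding two-part description is nearly optimal, i.e. C(A) + \log_2 |A| \approx C(x).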

    MDL Convergence Speed for Bernoulli Sequences

    The Minimum Description Length (MDL) principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is the case even if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. We discuss the application to machine learning tasks such as classification and hypothesis testing, and the generalization to countable classes of i.i.d. models. Comment: 28 pages
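
    For concreteness, the total expected square loss mentioned above can be written as follows (notation ours: \mu is the true distribution in the class and \hat\rho the MDL predictor):

        \sum_{t=1}^{\infty} \mathbf{E}\Bigl[ \bigl( \mu(x_t = 1 \mid x_{<t}) - \hat\rho(x_t = 1 \mid x_{<t}) \bigr)^{2} \Bigr]

    Roughly, for a Bayes mixture this sum is bounded by \ln w_\mu^{-1}, where w_\mu is the prior weight of \mu, whereas the general MDL bound is only of order w_\mu^{-1}; this is the exponential gap referred to above.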

    Cellular automata for the self-stabilisation of colourings and tilings

    We examine the problem of self-stabilisation, as introduced by Dijkstra in the 1970s, in the context of cellular automata stabilising on k-colourings, that is, on infinite grids which are coloured with k distinct colours in such a way that adjacent cells have different colours. Suppose that for whatever reason (e.g., noise, previous usage, tampering by an adversary), the colours of a finite number of cells in a valid k-colouring are modified, thus introducing errors. Is it possible to reset the system into a valid k-colouring with only the help of a local rule? In other words, is there a cellular automaton which, starting from any finite perturbation of a valid k-colouring, would always reach a valid k-colouring in finitely many steps? We discuss the different cases depending on the number of colours, and propose some deterministic and probabilistic rules which solve the problem for k ≠ 3. We also explain why the case k = 3 is more delicate. Finally, we propose some insights on the more general setting of this problem, passing from k-colourings to other tilings (subshifts of finite type).
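
    To make the setting concrete, the toy rule below (a sketch of ours, not one of the rules proposed in the paper) shows what a probabilistic local correction can look like: any cell of a periodic grid that conflicts with one of its four neighbours resamples its colour uniformly at random.

        import random

        def stabilise_step(grid, k):
            # One synchronous step of a naive probabilistic correction rule on a
            # toroidal grid of colours 0..k-1: conflicting cells pick a fresh colour.
            h, w = len(grid), len(grid[0])
            new = [row[:] for row in grid]
            for i in range(h):
                for j in range(w):
                    neigh = [grid[(i-1) % h][j], grid[(i+1) % h][j],
                             grid[i][(j-1) % w], grid[i][(j+1) % w]]
                    if grid[i][j] in neigh:              # conflict with a neighbour
                        new[i][j] = random.randrange(k)  # resample uniformly
            return new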

    On the Communication Complexity of Secure Computation

    Information theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information (i.e., the gap between mutual information and Gács-Körner common information), a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply in the general case. Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions. Comment: 37 pages
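
    For reference, residual information as described above can be written as follows (notation ours), where the maximum defining the Gács-Körner common information runs over deterministic functions f, g that agree almost surely:

        RI(X;Y) \;=\; I(X;Y) - C_{\mathrm{GK}}(X;Y), \qquad C_{\mathrm{GK}}(X;Y) \;=\; \max_{f(X) = g(Y)\ \text{a.s.}} H\bigl(f(X)\bigr)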

    Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory

    Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for an unknown distribution. We unify both theories and give strong arguments that the resulting universal AIXI model behaves optimally in any computable environment. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI^tl, which is still superior to any other time t and space l bounded agent. The computation time of AIXI^tl is of the order t x 2^l. Comment: 8 two-column pages, latex2e, 1 figure, submitted to ijcai
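
    For orientation, the AIXI action selection is commonly written as the following expectimax expression over programs q for a universal Turing machine (notation follows Hutter's later expositions and may differ in detail from this paper; m is the horizon and \ell(q) the length of q):

        a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl( r_k + \cdots + r_m \bigr) \sum_{q \,:\, q(a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}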

    Distributions attaining secret key at a rate of the conditional mutual information

    © International Association for Cryptologic Research 2015. In this paper we consider the problem of extracting secret key from an eavesdropped source p_XYZ at a rate given by the conditional mutual information. We investigate this question under three different scenarios: (i) Alice (X) and Bob (Y) are unable to communicate but share common randomness with the eavesdropper Eve (Z), (ii) Alice and Bob are allowed one-way public communication, and (iii) Alice and Bob are allowed two-way public communication. Distributions having a key rate of the conditional mutual information are precisely those in which a "helping" Eve offers Alice and Bob no greater advantage for obtaining secret key than a fully adversarial one. For each of the above scenarios, strong necessary conditions are derived on the structure of distributions attaining a secret key rate of I(X:Y|Z). In obtaining our results, we completely solve the problem of secret key distillation under scenario (i) and identify H(S|Z) as the optimal key rate using shared randomness, where S is the Gács-Körner common information. We thus provide an operational interpretation of the conditional Gács-Körner common information. Additionally, we introduce simple example distributions in which the rate I(X:Y|Z) is achievable if and only if two-way communication is allowed.
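
    To summarise the quantities involved (notation ours): writing S for the Gács-Körner common part of X and Y, the results described above can be read as

        K_{\text{(i)}}(X;Y \| Z) \;=\; H(S \mid Z), \qquad K \;\le\; I(X;Y \mid Z) \ \text{in all three scenarios},

    and the paper characterises the distributions for which the upper bound I(X:Y|Z) is actually attained.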

    Universal fluctuations in subdiffusive transport

    Subdiffusive transport in tilted washboard potentials is studied within the fractional Fokker-Planck equation approach, using the associated continuous time random walk (CTRW) framework. The scaled subvelocity is shown to obey a universal law, assuming the form of a stationary Lévy-stable distribution. The latter is defined by the index of subdiffusion alpha and the mean subvelocity only, but interestingly depends neither on the bias strength nor on the specific form of the potential. These scaled, universal subvelocity fluctuations emerge due to the weak ergodicity breaking and vanish in the limit of normal diffusion. The results of the analytical heuristic theory are corroborated by Monte Carlo simulations of the underlying CTRW.
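
    As a generic illustration of the CTRW mechanism behind these results (a sketch of ours, not the authors' simulation), the code below samples a biased walk with Pareto-distributed waiting times of index alpha < 1; the ensemble-averaged displacement then grows sublinearly, roughly like t^alpha.

        import random

        def ctrw_position(t_max, alpha=0.6, bias=0.1):
            # Position of a biased continuous-time random walk at time t_max.
            # Waiting times are Pareto(alpha) with alpha < 1 (infinite mean),
            # the standard mechanism behind subdiffusion.
            t, x = 0.0, 0
            while True:
                wait = random.random() ** (-1.0 / alpha)   # Pareto waiting time, >= 1
                if t + wait > t_max:
                    return x
                t += wait
                x += 1 if random.random() < 0.5 + bias else -1

        # The mean displacement over an ensemble grows roughly like t_max**alpha.
        samples = [ctrw_position(10_000) for _ in range(2_000)]
        print(sum(samples) / len(samples))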

    Generic two-phase coexistence in nonequilibrium systems

    Gibbs' phase rule states that two-phase coexistence of a single-component system, characterized by an n-dimensional parameter space, may occur in an (n-1)-dimensional region. For example, the two equilibrium phases of the Ising model coexist on a line in the temperature-magnetic-field phase diagram. Nonequilibrium systems may violate this rule, and several models where phase coexistence occurs over a finite (n-dimensional) region of the parameter space have been reported. The first example of this behaviour was found in Toom's model [Toom,Geoff,GG], which exhibits generic bistability, i.e. two-phase coexistence over a finite region of its two-dimensional parameter space (see Section 1). In addition to its interest as a genuine nonequilibrium property, generic multistability, defined as a generalization of bistability, is of both practical and theoretical relevance. In particular, it has been used recently to argue that some complex structures appearing in nature could be truly stable rather than metastable (with important applications in theoretical biology), and as the theoretical basis for an error-correction method in computer science (see [GG,Gacs] for an illuminating and pedagogical discussion of these ideas). Comment: 7 pages, 6 figures, to appear in Eur. Phys. J. B, svjour.cls and svepj.clo needed

    Entropy and Quantum Kolmogorov Complexity: A Quantum Brudno's Theorem

    In classical information theory, entropy rate and Kolmogorov complexity per symbol are related by a theorem of Brudno. In this paper, we prove a quantum version of this theorem, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity, both based on the shortest qubit descriptions of qubit strings that, run by a universal quantum Turing machine, reproduce them as outputs. Comment: 26 pages, no figures. Reference to publication added: published in Communications in Mathematical Physics (http://www.springerlink.com/content/1432-0916/)
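
    For context, the classical statement being quantised here is Brudno's theorem: for an ergodic source \mu with entropy rate h(\mu),

        \lim_{n \to \infty} \tfrac{1}{n} K(x_1 \ldots x_n) \;=\; h(\mu) \qquad \text{for } \mu\text{-almost every sequence } x,

    and the quantum version proved in the paper replaces h(\mu) by the von Neumann entropy rate and K by a quantum Kolmogorov complexity of qubit strings.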