
    Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence

    An infinite binary sequence has randomness rate at least $\sigma$ if, for almost every $n$, the Kolmogorov complexity of its prefix of length $n$ is at least $\sigma n$. It is known that for every rational $\sigma \in (0,1)$, on one hand, there exist sequences with randomness rate $\sigma$ that cannot be effectively transformed into a sequence with randomness rate higher than $\sigma$ and, on the other hand, any two independent sequences with randomness rate $\sigma$ can be transformed into a sequence with randomness rate higher than $\sigma$. We show that the latter result holds even if the two input sequences have linear dependency (which, informally speaking, means that all prefixes of length $n$ of the two sequences have a constant fraction of their information in common). A similar problem is studied for finite strings: it is shown that from any two strings with sufficiently large Kolmogorov complexity and sufficiently small dependency, one can effectively construct a string that is random even conditioned on either of the input strings.
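
    Kolmogorov complexity is uncomputable, so the randomness-rate condition above cannot be tested directly. Purely as an illustration (our sketch, not part of the paper; all names are ours), the Python snippet below upper-bounds the complexity of a prefix by its zlib-compressed length and contrasts a periodic sequence with pseudorandom bits:

    ```python
    import random
    import zlib

    def compressed_len_bits(bits: str) -> int:
        # Upper-bound K(prefix) by the zlib-compressed size, in bits.
        # Kolmogorov complexity itself is uncomputable; zlib is a heuristic.
        data = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
        return 8 * len(zlib.compress(data, 9))

    def empirical_rate(seq: str, n: int) -> float:
        # Heuristic analogue of the condition K(seq[:n]) >= sigma * n.
        return compressed_len_bits(seq[:n]) / n

    regular = "01" * 5000                                       # periodic: rate near 0
    noisy = "".join(random.choice("01") for _ in range(10000))  # rate near 1
    for n in (1000, 10000):
        print(n, round(empirical_rate(regular, n), 3),
              round(empirical_rate(noisy, n), 3))
    ```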

    Constructive Dimension and Turing Degrees

    This paper examines the constructive Hausdorff and packing dimensions of Turing degrees. The main result is that every infinite sequence $S$ with constructive Hausdorff dimension $\dim_H(S)$ and constructive packing dimension $\dim_P(S)$ is Turing equivalent to a sequence $R$ with $\dim_H(R) \geq \dim_H(S)/\dim_P(S) - \epsilon$, for arbitrary $\epsilon > 0$. Furthermore, if $\dim_P(S) > 0$, then $\dim_P(R) \geq 1 - \epsilon$. The reduction thus serves as a *randomness extractor* that increases the algorithmic randomness of $S$, as measured by constructive dimension. A number of applications of this result shed new light on the constructive dimensions of Turing degrees. A lower bound of $\dim_H(S)/\dim_P(S)$ is shown to hold for the constructive Hausdorff dimension of the Turing degree of any sequence $S$. A new proof is given of a previously known zero-one law for the constructive packing dimension of Turing degrees. It is also shown that, for any regular sequence $S$ (that is, one with $\dim_H(S) = \dim_P(S)$) such that $\dim_H(S) > 0$, the Turing degree of $S$ has constructive Hausdorff and packing dimension equal to 1. Finally, it is shown that no single Turing reduction can be a universal constructive Hausdorff dimension extractor, and that bounded Turing reductions cannot extract constructive Hausdorff dimension. We also exhibit sequences on which weak truth-table and bounded Turing reductions differ in their ability to extract dimension.
    Comment: The version of this paper appearing in Theory of Computing Systems, 45(4):740-755, 2009, had an error in the proof of Theorem 2.4, due to insufficient care with the choice of delta. This version modifies that proof to fix the error.
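
    For orientation (our addition, in standard notation where $S \upharpoonright n$ is the length-$n$ prefix of $S$ and $K$ is prefix-free Kolmogorov complexity), the constructive dimensions have well-known complexity characterizations that make the extraction statement concrete:

    ```latex
    % Complexity characterizations of constructive dimension (Mayordomo;
    % Athreya-Hitchcock-Lutz-Mayordomo), together with the main result above:
    \dim_H(S) = \liminf_{n\to\infty} \frac{K(S \upharpoonright n)}{n},
    \qquad
    \dim_P(S) = \limsup_{n\to\infty} \frac{K(S \upharpoonright n)}{n},
    \qquad\text{and}\qquad
    \dim_H(R) \;\geq\; \frac{\dim_H(S)}{\dim_P(S)} - \epsilon
    \quad\text{for some } R \equiv_T S .
    ```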

    Impossibility of independence amplification in Kolmogorov complexity theory

    The paper studies randomness extraction from sources with bounded independence and the issue of independence amplification of sources, using the framework of Kolmogorov complexity. The dependency of strings $x$ and $y$ is ${\rm dep}(x,y) = \max\{C(x) - C(x \mid y),\, C(y) - C(y \mid x)\}$, where $C(\cdot)$ denotes the Kolmogorov complexity. It is shown that there exists a computable Kolmogorov extractor $f$ such that, for any two $n$-bit strings with complexity $s(n)$ and dependency $\alpha(n)$, it outputs a string of length $s(n)$ with complexity $s(n) - \alpha(n)$ conditioned on either of the input strings. It is proven that these are the optimal parameters a Kolmogorov extractor can achieve. It is shown that independence amplification cannot be effectively realized. Specifically, if (after excluding a trivial case) there exist computable functions $f_1$ and $f_2$ such that ${\rm dep}(f_1(x,y), f_2(x,y)) \leq \beta(n)$ for all $n$-bit strings $x$ and $y$ with ${\rm dep}(x,y) \leq \alpha(n)$, then $\beta(n) \geq \alpha(n) - O(\log n)$.
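
    Since $C(\cdot)$ is uncomputable, ${\rm dep}$ cannot be computed exactly. As an informal illustration only (our code, using zlib-compressed length as a stand-in for $C$ and the chain-rule heuristic $C(x \mid y) \approx C(yx) - C(y)$), one can estimate the dependency of concrete strings:

    ```python
    import os
    import zlib

    def C(x: bytes) -> int:
        # Heuristic stand-in for Kolmogorov complexity: zlib length in bits.
        return 8 * len(zlib.compress(x, 9))

    def C_cond(x: bytes, y: bytes) -> int:
        # Rough proxy for C(x | y) via the chain rule: C(x|y) ~ C(yx) - C(y).
        return max(C(y + x) - C(y), 0)

    def dep(x: bytes, y: bytes) -> int:
        # Compression-based estimate of dep(x, y) from the definition above.
        return max(C(x) - C_cond(x, y), C(y) - C_cond(y, x))

    a = bytes(range(256)) * 4   # a structured string
    b = a[::-1]                 # fully determined by a: dependency is high
    c = os.urandom(1024)        # fresh random bytes: dependency is low
    print(dep(a, b), dep(a, c))
    ```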

    Thermodynamic Depth of Causal States: When Paddling around in Occam's Pool Shallowness Is a Virtue

    Thermodynamic depth is an appealing but flawed structural complexity measure. It depends on a set of macroscopic states for a system, but neither its original introduction by Lloyd and Pagels nor any follow-up work has considered how to select these states. Depth, therefore, is at root arbitrary. Computational mechanics, an alternative approach to structural complexity, provides a definition for a system's minimal, necessary causal states and a procedure for finding them. We show that the rate of increase in thermodynamic depth, or {\it dive}, is the system's reverse-time Shannon entropy rate, and so depth only measures degrees of macroscopic randomness, not structure. To fix this we redefine the depth in terms of the causal-state representation---$\epsilon$-machines---and show that this representation gives the minimum dive consistent with accurate prediction. Thus, $\epsilon$-machines are optimally shallow.
    Comment: 11 pages, 9 figures, RevTeX
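
    For readers who want the quantity pinned down, a standard way to write the reverse-time Shannon entropy rate of a stationary process $(X_t)$ is the following (our reconstruction in conventional notation; the paper may use a different convention):

    ```latex
    % Reverse-time Shannon entropy rate: the uncertainty of the present
    % symbol given arbitrarily long knowledge of the future.
    h^{-}_{\mu} \;=\; \lim_{L \to \infty}
        H\!\left[ X_0 \mid X_1, X_2, \dots, X_L \right]
    ```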

    Entropy and Quantum Kolmogorov Complexity: A Quantum Brudno's Theorem

    In classical information theory, entropy rate and Kolmogorov complexity per symbol are related by a theorem of Brudno. In this paper, we prove a quantum version of this theorem, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity, both based on the shortest qubit descriptions of qubit strings that, run by a universal quantum Turing machine, reproduce them as outputs.
    Comment: 26 pages, no figures. Reference to publication added: published in Communications in Mathematical Physics (http://www.springerlink.com/content/1432-0916/)
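
    As context for the quantum generalization, the classical statement being generalized can be written as follows (our paraphrase of Brudno's theorem, with $\omega_{1:n}$ denoting the first $n$ symbols of $\omega$):

    ```latex
    % Brudno's theorem (classical): for an ergodic source \mu with Shannon
    % entropy rate h(\mu), complexity per symbol converges almost surely.
    \lim_{n \to \infty} \frac{K(\omega_{1:n})}{n} \;=\; h(\mu)
    \qquad \text{for } \mu\text{-almost every } \omega .
    ```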

    Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation

    We study the compressed sensing (CS) signal estimation problem, where an input signal is measured via linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility of the input signal during recovery, the signal structure that can be leveraged is often not known a priori. In this paper, we consider universal CS recovery, where the statistics of a stationary ergodic signal source are estimated simultaneously with the signal itself. Inspired by Kolmogorov complexity and minimum description length, we focus on a maximum a posteriori (MAP) estimation framework that leverages universal priors to match the complexity of the source. Our framework can also be applied to general linear inverse problems where more measurements than in CS might be needed. We provide theoretical results that support the algorithmic feasibility of universal MAP estimation using a Markov chain Monte Carlo implementation, which is computationally challenging. We incorporate techniques that accelerate the algorithm while providing reconstruction quality comparable to, and in many cases better than, that of existing algorithms. Experimental results show the promise of universality in CS, particularly for low-complexity sources that do not exhibit standard sparsity or compressibility.
    Comment: 29 pages, 8 figures
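
    The paper's actual implementation is considerably more sophisticated; purely as a sketch of the idea (our code, with all names and parameter choices our own assumptions), the snippet below replaces the universal prior with a zlib description-length penalty on a quantized candidate signal and runs a plain Metropolis search for a MAP estimate:

    ```python
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)

    def complexity_bits(x_q: np.ndarray) -> int:
        # Stand-in for a universal prior: description length of the quantized
        # signal under zlib, in bits (a crude upper bound, not a true prior).
        return 8 * len(zlib.compress(x_q.astype(np.int8).tobytes(), 9))

    def energy(x_q, y, A, sigma2):
        # Negative log-posterior up to constants:
        # Gaussian data misfit plus ln(2) * description length.
        r = y - A @ x_q
        return r @ r / (2 * sigma2) + np.log(2) * complexity_bits(x_q)

    def map_mcmc(y, A, sigma2, levels=(-1, 0, 1), iters=20000):
        # Metropolis search over quantized signals; returns best state seen.
        n = A.shape[1]
        x = np.zeros(n, dtype=int)
        e = energy(x, y, A, sigma2)
        best_x, best_e = x.copy(), e
        for _ in range(iters):
            prop = x.copy()
            prop[rng.integers(n)] = rng.choice(levels)  # single-site update
            e_prop = energy(prop, y, A, sigma2)
            if e_prop <= e or rng.random() < np.exp(e - e_prop):
                x, e = prop, e_prop
                if e < best_e:
                    best_x, best_e = x.copy(), e
        return best_x

    # Toy experiment: sparse ternary signal, Gaussian measurement matrix.
    n, m = 64, 32
    x_true = np.zeros(n, dtype=int)
    x_true[rng.choice(n, size=4, replace=False)] = 1
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = map_mcmc(y, A, sigma2=1e-4)
    print("entries wrong:", int(np.sum(x_hat != x_true)))
    ```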