Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence
An infinite binary sequence has randomness rate at least sigma if, for
almost every n, the Kolmogorov complexity of its prefix of length n is at
least sigma*n. It is known that for every rational sigma in (0,1), on
one hand, there exist sequences with randomness rate sigma that cannot be
effectively transformed into a sequence with randomness rate higher than
sigma and, on the other hand, any two independent sequences with randomness
rate sigma can be transformed into a sequence with randomness rate higher
than sigma. We show that the latter result holds even if the two input
sequences have linear dependency (which, informally speaking, means that all
prefixes of length n of the two sequences have in common a constant fraction
of their information). A similar problem is studied for finite strings. It is
shown that from any two strings with sufficiently large Kolmogorov complexity
and sufficiently small dependency, one can effectively construct a string that
is random even conditioned on any one of the input strings.
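Kolmogorov complexity itself is uncomputable, but the compressed length of a string is a computable upper bound on it. A minimal sketch of this proxy for the "randomness rate" of prefixes, using zlib as the stand-in compressor (the function name and test sequences are illustrative, not from the paper):

```python
import random
import zlib

def compression_rate(s: bytes) -> float:
    """Compressed length per symbol: a computable upper-bound proxy for K(s)/|s|."""
    if not s:
        return 0.0
    return len(zlib.compress(s, 9)) / len(s)

# A highly regular sequence has a low proxy rate ...
structured = b"01" * 4096

# ... while pseudo-random bytes are essentially incompressible.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(8192))

print(compression_rate(structured))  # well below 1
print(compression_rate(noisy))       # about 1 (zlib cannot shrink random data)
```

Because compression only upper-bounds Kolmogorov complexity, a low rate certifies structure, but a rate near 1 does not certify true randomness.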
Constructive Dimension and Turing Degrees
This paper examines the constructive Hausdorff and packing dimensions of
Turing degrees. The main result is that every infinite sequence S with
constructive Hausdorff dimension dim_H(S) and constructive packing dimension
dim_P(S) is Turing equivalent to a sequence R with dim_H(R) <= (dim_H(S) /
dim_P(S)) - epsilon, for arbitrary epsilon > 0. Furthermore, if dim_P(S) > 0,
then dim_P(R) >= 1 - epsilon. The reduction thus serves as a *randomness
extractor* that increases the algorithmic randomness of S, as measured by
constructive dimension.
A number of applications of this result shed new light on the constructive
dimensions of Turing degrees. A lower bound of dim_H(S) / dim_P(S) is shown to
hold for the Turing degree of any sequence S. A new proof is given of a
previously-known zero-one law for the constructive packing dimension of Turing
degrees. It is also shown that, for any regular sequence S (that is, dim_H(S) =
dim_P(S)) such that dim_H(S) > 0, the Turing degree of S has constructive
Hausdorff and packing dimension equal to 1.
Finally, it is shown that no single Turing reduction can be a universal
constructive Hausdorff dimension extractor, and that bounded Turing reductions
cannot extract constructive Hausdorff dimension. We also exhibit sequences on
which weak truth-table and bounded Turing reductions differ in their ability to
extract dimension.
Comment: The version of this paper appearing in Theory of Computing Systems,
45(4):740-755, 2009, had an error in the proof of Theorem 2.4, due to
insufficient care with the choice of delta. This version modifies that proof
to fix the error.
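Constructive Hausdorff and packing dimension are, respectively, the liminf and limsup of C(S[0..n])/n along prefixes. A toy compression-based proxy for the two, with zlib standing in for the uncomputable C and a deliberately "irregular" sequence (block sizes and step are our illustrative choices):

```python
import random
import zlib

def prefix_rates(s: bytes, step: int = 1024):
    """Compression ratio of each prefix: the minimum plays the role of a
    Hausdorff-dimension proxy (liminf of C(prefix)/n), the maximum of a
    packing-dimension proxy (limsup)."""
    rates = [len(zlib.compress(s[:n], 9)) / n for n in range(step, len(s) + 1, step)]
    return min(rates), max(rates)

# Alternate incompressible blocks with all-zero blocks, so the prefix
# compression ratio oscillates instead of converging.
random.seed(0)
blocks = [
    bytes(random.getrandbits(8) for _ in range(1024)) if k % 2 == 0 else b"\x00" * 1024
    for k in range(8)
]
s = b"".join(blocks)

lo, hi = prefix_rates(s)
print(lo < hi)  # the two proxies disagree, mirroring dim_H(S) < dim_P(S)
```

For a "regular" sequence in the paper's sense the two proxies would agree; the gap here is what makes such sequences interesting for dimension extraction.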
Impossibility of independence amplification in Kolmogorov complexity theory
The paper studies randomness extraction from sources with bounded
independence and the issue of independence amplification of sources, using the
framework of Kolmogorov complexity. The dependency of strings x and y is
dep(x, y) = max{C(x) - C(x|y), C(y) - C(y|x)}, where C denotes the Kolmogorov
complexity. It is shown that there exists a computable Kolmogorov extractor
such that, for any two n-bit strings with complexity s(n) and dependency
alpha(n), it outputs a string of length s(n) with complexity s(n) - alpha(n)
conditioned on any one of the input strings. It is proven that the above are
the optimal parameters a Kolmogorov extractor can achieve. It is shown that
independence amplification cannot be effectively realized. Specifically, if
(after excluding a trivial case) there exist computable functions f_1 and f_2
such that dep(f_1(x, y), f_2(x, y)) <= beta(n) for all n-bit strings x and y
with dep(x, y) <= alpha(n), then beta(n) >= alpha(n) - O(log n).
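The dependency measure dep(x, y) = max{C(x) - C(x|y), C(y) - C(y|x)} can be approximated with a real compressor: use zlib for C and the chain-rule approximation C(x|y) ~ C(yx) - C(y). A sketch under those assumptions (function names are ours, not the paper's):

```python
import random
import zlib

def C(s: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity C(s).
    return len(zlib.compress(s, 9))

def dep_proxy(x: bytes, y: bytes) -> int:
    # dep(x, y) = max{ C(x) - C(x|y), C(y) - C(y|x) }, approximating the
    # conditional complexity C(x|y) by C(y + x) - C(y).
    return max(C(x) - (C(y + x) - C(y)),
               C(y) - (C(x + y) - C(x)))

random.seed(1)
a = bytes(random.getrandbits(8) for _ in range(4096))
b = bytes(random.getrandbits(8) for _ in range(4096))

print(dep_proxy(a, b))  # small: independent random strings share little information
print(dep_proxy(a, a))  # large: a string fully determines itself
```

The second call works because zlib's 32 KB window lets it encode the repeated copy of `a` cheaply, so C(a + a) is barely larger than C(a).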
Thermodynamic Depth of Causal States: When Paddling around in Occam's Pool Shallowness Is a Virtue
Thermodynamic depth is an appealing but flawed structural complexity measure.
It depends on a set of macroscopic states for a system, but neither its
original introduction by Lloyd and Pagels nor any follow-up work has considered
how to select these states. Depth, therefore, is at root arbitrary.
Computational mechanics, an alternative approach to structural complexity,
provides a definition for a system's minimal, necessary causal states and a
procedure for finding them. We show that the rate of increase in thermodynamic
depth, or {\it dive}, is the system's reverse-time Shannon entropy rate, and so
depth only measures degrees of macroscopic randomness, not structure. To fix
this we redefine the depth in terms of the causal state
representation---ε-machines---and show that this representation gives
the minimum dive consistent with accurate prediction. Thus, ε-machines
are optimally shallow.
Comment: 11 pages, 9 figures, RevTeX
Entropy and Quantum Kolmogorov Complexity: A Quantum Brudno's Theorem
In classical information theory, entropy rate and Kolmogorov complexity per
symbol are related by a theorem of Brudno. In this paper, we prove a quantum
version of this theorem, connecting the von Neumann entropy rate and two
notions of quantum Kolmogorov complexity, both based on the shortest qubit
descriptions of qubit strings that, run by a universal quantum Turing machine,
reproduce them as outputs.
Comment: 26 pages, no figures. Reference to publication added: published in
the Communications in Mathematical Physics
(http://www.springerlink.com/content/1432-0916/)
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate some techniques to accelerate the algorithm while
providing comparable and in many cases better reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures
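The complexity-matching MAP idea can be caricatured in a few lines: a Metropolis sampler that trades the data misfit of y = A x + noise against compressed description length, with zlib standing in for a universal code. All sizes, the binary alphabet, and the penalty weight are our illustrative assumptions, not the paper's algorithm:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sparse binary signal observed through noisy linear measurements.
n, m, sigma = 64, 32, 0.05
x_true = np.zeros(n, dtype=np.uint8)
x_true[rng.choice(n, size=5, replace=False)] = 1
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + sigma * rng.standard_normal(m)

def description_length(x: np.ndarray) -> int:
    # Compressed length as a computable stand-in for a universal prior -log P(x).
    return len(zlib.compress(x.tobytes(), 9))

def energy(x: np.ndarray, lam: float = 2.0) -> float:
    # Negative log-posterior up to constants: data misfit plus complexity penalty.
    r = y - A @ x
    return float(r @ r) / (2 * sigma**2) + lam * description_length(x)

# Metropolis sampler over candidate signals, tracking the best (MAP-like) state.
x = np.zeros(n, dtype=np.uint8)
e = energy(x)
best_x, best_e = x.copy(), e
for _ in range(20000):
    prop = x.copy()
    prop[rng.integers(n)] ^= 1  # propose flipping one coordinate
    e_prop = energy(prop)
    if e_prop < e or rng.random() < np.exp(e - e_prop):
        x, e = prop, e_prop
        if e < best_e:
            best_x, best_e = x.copy(), e

print(best_e < energy(np.zeros(n, dtype=np.uint8)))  # the search beat the all-zero start
```

The compression penalty plays the role of the universal prior: among signals with similar misfit, the sampler prefers the one with the shorter description.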