Shannon Information and Kolmogorov Complexity
We compare the elementary theories of Shannon information and Kolmogorov
complexity, the extent to which they have a common purpose, and where they are
fundamentally different. We discuss and relate the basic notions of both
theories: Shannon entropy versus Kolmogorov complexity, the relation of both to
universal coding, Shannon mutual information versus Kolmogorov (`algorithmic')
mutual information, probabilistic sufficient statistic versus algorithmic
sufficient statistic (related to lossy compression in the Shannon theory versus
meaningful information in the Kolmogorov theory), and rate distortion theory
versus Kolmogorov's structure function. Part of the material has appeared in
print before, scattered through various publications, but this is the first
comprehensive systematic comparison. The last-mentioned relations are new.
Comment: Survey, LaTeX 54 pages, 3 figures, Submitted to IEEE Trans. Information Theory
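As a rough, self-contained illustration of the contrast this survey draws (not code from the paper), the sketch below compares the Shannon entropy of a string's empirical byte distribution with its zlib-compressed length, the latter serving only as a crude computable upper bound on Kolmogorov complexity, which is itself uncomputable. The choice of strings and of zlib as a stand-in compressor are assumptions of this note.

```python
# Entropy is a property of a distribution; Kolmogorov complexity is a
# property of an individual string. A periodic string attains maximal
# entropy for its two-symbol alphabet yet compresses to almost nothing,
# while a random string of the same order of length does not.
import math
import os
import zlib
from collections import Counter

def empirical_entropy_bits(s: bytes) -> float:
    """Shannon entropy (bits per symbol) of the empirical byte distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_length_bits(s: bytes) -> int:
    """zlib code length: an upper bound on K(s) up to an additive constant."""
    return 8 * len(zlib.compress(s, 9))

periodic = b"01" * 5000        # highly regular string over a 2-symbol alphabet
random_str = os.urandom(10000) # (almost surely) incompressible string

for name, s in (("periodic", periodic), ("random", random_str)):
    print(f"{name:8s} entropy/byte = {empirical_entropy_bits(s):5.2f}   "
          f"compressed = {compressed_length_bits(s):6d} bits")
```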
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
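As a loose, computable caricature of the two-part criterion described above (an illustration, not the paper's construction), the snippet below selects a Bernoulli parameter by minimising a model-description cost plus -log2 of the data likelihood. The parameter grid and its naive coding cost are assumptions standing in for the uncomputable universal prior.

```python
# Two-part (MDL-style) code length: L(H) + L(D | H), minimised over a
# small hypothesis class of Bernoulli parameters on a uniform grid.
import math

def data_bits(data, p):
    """Code length -log2 P(data | Bernoulli(p)) in bits, for 0 < p < 1."""
    ones = sum(data)
    zeros = len(data) - ones
    return -(ones * math.log2(p) + zeros * math.log2(1.0 - p))

def two_part_bits(data, p, grid_size):
    model_bits = math.log2(grid_size)  # naive stand-in for -log universal prior
    return model_bits + data_bits(data, p)

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 20     # a biased-looking binary sample
grid = [i / 16 for i in range(1, 16)]          # candidate Bernoulli parameters
best_p = min(grid, key=lambda p: two_part_bits(data, p, len(grid)))
print("MDL-selected parameter:", best_p)
print("total code length:", round(two_part_bits(data, best_p, len(grid)), 1), "bits")
```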
The statistical mechanics of turbo codes
The "turbo codes", recently proposed by Berrou et. al. are written as a
disordered spin Hamiltonian. It is shown that there is a threshold Theta such
that for signal to noise ratios v^2 / w^2 > Theta, the error probability per
bit vanishes in the thermodynamic limit, i.e. the limit of infinitly long
sequences. The value of the threshold has been computed for two particular
turbo codes. It is found that it depends on the code. These results are
compared with numerical simulations.Comment: 23 pages, 6 figures: Fig.2 has been replaced (in the preceding
version it was identical to Fig.1
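To make the quantities in the abstract concrete, the toy Monte Carlo below estimates the per-bit error probability as a function of the signal-to-noise ratio v^2 / w^2 for BPSK over a Gaussian channel with a simple repetition code. The repetition code, trial count, and SNR values are illustrative assumptions; this is not the turbo-code or spin-Hamiltonian analysis of the paper.

```python
# Per-bit error probability vs. v^2 / w^2 for a rate-1/3 repetition code
# over AWGN, decoded by the maximum-likelihood (sum-and-threshold) rule.
import random

def bit_error_rate(v, w, repeats=3, trials=50_000):
    errors = 0
    for _ in range(trials):
        symbol = random.choice((-1.0, 1.0))                    # BPSK-modulated bit
        received = [symbol * v + random.gauss(0.0, w) for _ in range(repeats)]
        decoded = 1.0 if sum(received) > 0 else -1.0           # ML decision
        errors += decoded != symbol
    return errors / trials

for snr in (0.25, 1.0, 4.0):                                   # snr = v^2 / w^2
    v, w = snr ** 0.5, 1.0
    print(f"v^2/w^2 = {snr:4.2f}   P(bit error) ~ {bit_error_rate(v, w):.4f}")
```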
Entanglement-assisted quantum turbo codes
An unexpected breakdown in the existing theory of quantum serial turbo coding
is that a quantum convolutional encoder cannot simultaneously be recursive and
non-catastrophic. These properties are essential for quantum turbo code
families to have a minimum distance growing with blocklength and for their
iterative decoding algorithm to converge, respectively. Here, we show that the
entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in
the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can
possess both of the aforementioned desirable properties. We give several
examples of EAQ convolutional encoders that are both recursive and
non-catastrophic and detail their relevant parameters. We then modify the
quantum turbo decoding algorithm of Poulin et al. so that the constituent
decoders pass along only "extrinsic information" to each other rather than
a posteriori probabilities; this modification leads to a significant
improvement in the performance of unassisted quantum turbo codes. Other
simulation results indicate that
entanglement-assisted turbo codes can operate reliably in a noise regime 4.73
dB beyond that of standard quantum turbo codes, when used on a memoryless
depolarizing channel. Furthermore, several of our quantum turbo codes are
within 1 dB or less of their hashing limits, so that the performance of quantum
turbo codes is now on par with that of classical turbo codes. Finally, we prove
that entanglement is the resource that enables a convolutional encoder to be
both non-catastrophic and recursive because an encoder acting on only
information qubits, classical bits, gauge qubits, and ancilla qubits cannot
simultaneously satisfy both properties.
Comment: 31 pages, software for simulating EA turbo codes is available at
http://code.google.com/p/ea-turbo/ and a presentation is available at
http://markwilde.com/publications/10-10-EA-Turbo.ppt ; v2, revisions based on
feedback from journal; v3, modification of the quantum turbo decoding
algorithm that leads to improved performance over results in v2 and the
results of Poulin et al. in arXiv:0712.288
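The sketch below is a purely classical analogue, not the quantum decoder of the paper, of the extrinsic-information exchange mentioned in the abstract: a constituent decoder subtracts the channel and a-priori contributions from its a posteriori log-likelihood ratio (LLR) before passing the remainder on, so the other decoder never double-counts information it already has. The one-bit "code", the noise level, and the observation names are illustrative assumptions.

```python
# One information bit observed through a systematic symbol and two parity
# symbols over BPSK/AWGN. Each "decoder" forms a posterior LLR and forwards
# only the extrinsic part: posterior - channel term - prior.
import random

SIGMA = 1.0                                   # channel noise std deviation

def channel_llr(y, sigma=SIGMA):
    """LLR of a BPSK symbol (+1 / -1) observed through AWGN."""
    return 2.0 * y / sigma ** 2

bit = +1                                      # transmitted bit as a BPSK symbol
y_sys, y_par1, y_par2 = (bit + random.gauss(0.0, SIGMA) for _ in range(3))

# Decoder 1: combines the systematic observation, its parity, and its prior.
prior_1 = 0.0
posterior_1 = channel_llr(y_sys) + channel_llr(y_par1) + prior_1
extrinsic_1 = posterior_1 - channel_llr(y_sys) - prior_1   # what it passes on

# Decoder 2 treats decoder 1's extrinsic LLR as its a-priori information.
prior_2 = extrinsic_1
posterior_2 = channel_llr(y_sys) + channel_llr(y_par2) + prior_2
print("extrinsic LLR from decoder 1:", round(extrinsic_1, 3))
print("decoded bit:", +1 if posterior_2 > 0 else -1)
```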
Coding theorems for turbo code ensembles
This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are "good" in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ_0 such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ_0, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as n^{-β}, where β is the "interleaver gain" exponent defined by Benedetto et al. in 1996.
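For concreteness, the snippet below computes the Bhattacharyya noise parameter γ for two standard binary-input memoryless channels and compares it against a threshold. The value GAMMA_0 is a placeholder assumption for illustration only; the paper derives γ_0 from the component codes and its value is not reproduced here.

```python
# Bhattacharyya parameter: gamma = sum_y sqrt(P(y|0) P(y|1)), which equals
# 2*sqrt(p(1-p)) for a BSC and exp(-1/(2*sigma^2)) for BPSK over AWGN.
import math

def bhattacharyya_bsc(p):
    """gamma for a binary symmetric channel with crossover probability p."""
    return 2.0 * math.sqrt(p * (1.0 - p))

def bhattacharyya_biawgn(sigma):
    """gamma for BPSK (+1/-1) over AWGN with noise std deviation sigma."""
    return math.exp(-1.0 / (2.0 * sigma ** 2))

GAMMA_0 = 0.5   # placeholder threshold, NOT a value derived in the paper

for label, gamma in [("BSC(p=0.05)", bhattacharyya_bsc(0.05)),
                     ("BSC(p=0.11)", bhattacharyya_bsc(0.11)),
                     ("BI-AWGN(sigma=0.8)", bhattacharyya_biawgn(0.8))]:
    verdict = "below" if gamma < GAMMA_0 else "above"
    print(f"{label}: gamma = {gamma:.3f} ({verdict} the placeholder gamma_0)")
```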
The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy
The principle of maximum entropy (Maxent) is often used to obtain prior
probability distributions: it yields a Gibbs measure, under some
restriction, giving the probability that a system will be in a certain
state relative to the rest of the elements in the distribution. Because
classical entropy-based Maxent collapses cases, confounding all distinct
degrees of randomness and pseudo-randomness, here we take into account the
generative mechanism of the systems considered in the ensemble. This
separates objects that may comply with the principle under some restriction
and whose entropy is maximal, yet can be generated recursively, from those
that are actually algorithmically random, offering a refinement of
classical Maxent. We take advantage of a causal algorithmic calculus to
derive a thermodynamic-like result based on how difficult it is to
reprogram a computer code. Using the distinction between computable and
algorithmic randomness, we quantify the cost in information loss associated
with reprogramming. To illustrate this, we apply the algorithmic refinement
of Maxent to graphs and introduce a Maximal Algorithmic Randomness
Preferential Attachment (MARPA) Algorithm, a generalisation of previous
approaches. We discuss the practical implications of evaluating network
randomness. Our analysis suggests that the reprogrammability asymmetry
appears to originate from a non-monotonic relationship to algorithmic
probability, and it motivates further study of the origin and consequences
of these asymmetries, of reprogrammability, and of computation.
Comment: 30 pages
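As a crude illustration of the separation the abstract describes (not the paper's method, which rests on algorithmic probability rather than a general-purpose compressor), the sketch below builds two graphs of nearly equal edge density, so the Shannon entropy of their adjacency-matrix bits is essentially the same, and shows that a compressed-length proxy for algorithmic complexity nevertheless distinguishes the regular graph from the random one. The graph sizes, the ring-lattice construction, and the use of zlib are assumptions of this note.

```python
# Same entropy, different "algorithmic" complexity proxy: a regular ring
# lattice vs. an Erdos-Renyi graph of matching edge density.
import math
import random
import zlib

N = 128

def ring_lattice(n, k=4):
    """Each vertex linked to its k nearest ring neighbours: regular, compressible."""
    return [[1 if 0 < min((i - j) % n, (j - i) % n) <= k // 2 else 0
             for j in range(n)] for i in range(n)]

def erdos_renyi(n, p):
    """Random graph with (expected) edge density p."""
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

def density(adj):
    n = len(adj)
    return sum(map(sum, adj)) / (n * (n - 1))

def bit_entropy(p):
    """Entropy (bits) of a Bernoulli(p) adjacency bit."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def compressed_bits(adj):
    flat = bytes(bit for row in adj for bit in row)
    return 8 * len(zlib.compress(flat, 9))

regular = ring_lattice(N)
rand_graph = erdos_renyi(N, density(regular))
for name, adj in (("ring lattice", regular), ("Erdos-Renyi", rand_graph)):
    print(f"{name:12s} density={density(adj):.3f}  "
          f"entropy/bit={bit_entropy(density(adj)):.3f}  "
          f"compressed={compressed_bits(adj)} bits")
```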