Parallel data compression
Data compression schemes remove redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported and areas for future research are suggested.
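One of the parallel methods the survey mentions, coding data concurrently to boost speed, can be sketched with standard library tools (the use of zlib, a thread pool, and a 64 KiB block size are illustrative choices here, not methods from the survey):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # block size: a speed/ratio trade-off; larger blocks compress better

def compress_blocks(data: bytes, workers: int = 4) -> list[bytes]:
    # Split the input into fixed-size blocks and compress them concurrently.
    # zlib releases the GIL while compressing, so threads give real parallelism.
    blocks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, blocks))

def decompress_blocks(blocks: list[bytes]) -> bytes:
    # Blocks are independent, so decompression could be parallelized the same way.
    return b"".join(zlib.decompress(b) for b in blocks)
```

The per-block boundaries cost a little compression ratio (each block starts with an empty dictionary), which is exactly the speed-versus-ratio trade-off the survey's comparisons address.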
Cryptographic error correction
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 67-71). By Christopher Jason Peikert.
It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code.
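The collusive attacks studied for fingerprinting codes rest on the marking assumption, which can be sketched in a few lines (a generic illustration, not the specific attack analyzed in the thesis): a coalition comparing its copies can detect, and freely fill, only the positions where its codewords differ.

```python
import random

def collude(codewords: list[str]) -> str:
    # Marking assumption: positions where all of the coalition's codewords
    # agree are undetectable and must be kept; at detectable positions the
    # coalition may output any symbol it has observed there.
    forged = []
    for symbols in zip(*codewords):
        if len(set(symbols)) == 1:
            forged.append(symbols[0])               # undetectable position
        else:
            forged.append(random.choice(symbols))   # detectable position
    return "".join(forged)
```

A collusion-secure code must trace at least one colluder from any word this procedure can emit, which is why lower bounds on code length follow from analyzing such attacks.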
Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability
Previously referred to as 'miraculous' in the scientific literature because
of its powerful properties and its wide application as an optimal solution to
the problem of induction/inference, (approximations to) Algorithmic Probability
(AP) and the associated Universal Distribution are (or should be) of the
greatest importance in science. Here we investigate the emergence, the rates of
emergence and convergence, and the Coding-theorem like behaviour of AP in
Turing-subuniversal models of computation. We investigate empirical
distributions of computing models in the Chomsky hierarchy. We introduce
measures of algorithmic probability and algorithmic complexity based upon
resource-bounded computation, in contrast to previously thoroughly investigated
distributions produced from the output distribution of Turing machines. This
approach allows for numerical approximations to algorithmic
(Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a
computational hierarchy. We demonstrate that all these estimations are
correlated in rank and that they converge both in rank and values as a function
of computational power, despite fundamental differences between computational
models. In the context of natural processes that operate below the Turing
universal level because of finite resources and physical degradation, the
investigation of natural biases stemming from algorithmic rules may shed light
on the distribution of outcomes. We show that up to 60% of the
simplicity/complexity bias in distributions produced even by the weakest of the
computational models can be accounted for by Algorithmic Probability in its
approximation to the Universal Distribution.
Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
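The methodology can be sketched in miniature (the four-instruction machine below is an invented stand-in for the paper's models, not one of the computing models it studies): enumerate every program of a resource-bounded machine, build the empirical output distribution D(s), and read off a Coding-theorem-style complexity estimate K(s) ≈ -log2 D(s).

```python
import math
from collections import Counter
from itertools import product

def run(program):
    # Toy resource-bounded machine: read the program's bits in pairs.
    # 00 appends '0', 01 appends '1', 10 duplicates the last output symbol,
    # 11 (or 10 with nothing to duplicate) halts.
    out = []
    for a, b in zip(program[::2], program[1::2]):
        if (a, b) == (0, 0): out.append("0")
        elif (a, b) == (0, 1): out.append("1")
        elif (a, b) == (1, 0) and out: out.append(out[-1])
        else: break
    return "".join(out)

def empirical_distribution(max_len=12):
    # Run every program up to max_len bits and tally the outputs.
    counts, total = Counter(), 0
    for n in range(0, max_len + 1, 2):
        for program in product((0, 1), repeat=n):
            counts[run(program)] += 1
            total += 1
    return {s: c / total for s, c in counts.items()}

D = empirical_distribution()
K = {s: -math.log2(p) for s, p in D.items()}  # Coding-theorem-style estimate
```

Even in this toy model the Coding-theorem-like behaviour appears: shorter, more regular strings receive higher empirical probability and hence lower complexity estimates.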
Computation in Multicast Networks: Function Alignment and Converse Theorems
The classical problem in network coding theory considers communication over
multicast networks. Multiple transmitters send independent messages to multiple
receivers which decode the same set of messages. In this work, computation over
multicast networks is considered: each receiver decodes an identical function
of the original messages. For a countably infinite class of two-transmitter
two-receiver single-hop linear deterministic networks, the computing capacity
is characterized for a linear function (modulo-2 sum) of Bernoulli sources.
Inspired by the geometric concept of interference alignment in networks, a new
achievable coding scheme called function alignment is introduced. A new
converse theorem is established that is tighter than cut-set based and
genie-aided bounds. Computation (vs. communication) over multicast networks
requires additional analysis to account for multiple receivers sharing a
network's computational resources. We also develop a network decomposition
theorem which identifies elementary parallel subnetworks that can constitute an
original network without loss of optimality. The decomposition theorem provides
a conceptually simpler algebraic proof of achievability that generalizes to
multi-transmitter, multi-receiver networks.
Comment: to appear in the IEEE Transactions on Information Theory
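The alignment idea can be illustrated with a toy linear deterministic example over GF(2) (the matrices and construction below are a hypothetical illustration, not the paper's scheme): when both transmitted signals reach a receiver through the same matrix, the receiver observes a coded version of the modulo-2 sum directly.

```python
def matvec_gf2(M, v):
    # Matrix-vector product over GF(2).
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

# Hypothetical channel matrix, invertible over GF(2) (and its own inverse mod 2).
G = [[1, 0], [1, 1]]

def observe(x1, x2):
    # Linear deterministic channel with the two signals "function aligned":
    # both transmitters are seen through the same matrix G, so the receiver
    # observes G @ (x1 xor x2), a coded version of the desired function.
    y1, y2 = matvec_gf2(G, x1), matvec_gf2(G, x2)
    return [(a + b) % 2 for a, b in zip(y1, y2)]

def decode_sum(y):
    # Applying G again (self-inverse over GF(2)) yields the modulo-2 sum;
    # the individual messages x1 and x2 are never recovered.
    return matvec_gf2(G, y)
```

For example, `decode_sum(observe([1, 0], [0, 1]))` returns `[1, 1]`, the modulo-2 sum of the two source vectors, even though neither source is individually decodable from the observation.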
Compute-and-Forward: Harnessing Interference through Structured Codes
Interference is usually viewed as an obstacle to communication in wireless
networks. This paper proposes a new strategy, compute-and-forward, that
exploits interference to obtain significantly higher rates between users in a
network. The key idea is that relays should decode linear functions of
transmitted messages according to their observed channel coefficients rather
than ignoring the interference as noise. After decoding these linear equations,
the relays simply send them towards the destinations, which, given enough
equations, can recover their desired messages. The underlying codes are based
on nested lattices whose algebraic structure ensures that integer combinations
of codewords can be decoded reliably. Encoders map messages from a finite field
to a lattice and decoders recover equations of lattice points which are then
mapped back to equations over the finite field. This scheme is applicable even
if the transmitters lack channel state information.
Comment: IEEE Trans. Info Theory, to appear. 23 pages, 13 figures
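The end-to-end recovery step can be sketched over a prime field (a toy illustration: the lattice decoding at the relays is omitted, and the equation coefficients are fixed by hand rather than chosen from channel gains): each relay forwards one linear equation over F_p, and a destination with enough independent equations inverts the coefficient matrix to recover the messages.

```python
p = 257  # prime field size, assumed for illustration

w1, w2 = 42, 199                 # transmitted messages in F_p
A = [[1, 1], [1, 2]]             # equation coefficients decoded by two relays
eqs = [(a * w1 + b * w2) % p for a, b in A]  # equations the relays forward

# Destination: invert A modulo p (2x2 case) and solve for the messages.
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
det_inv = pow(det, -1, p)        # modular inverse; requires gcd(det, p) == 1
r1 = (det_inv * (A[1][1] * eqs[0] - A[0][1] * eqs[1])) % p
r2 = (det_inv * (A[0][0] * eqs[1] - A[1][0] * eqs[0])) % p
assert (r1, r2) == (w1, w2)
```

The solvability condition is that the coefficient matrix be invertible modulo p; in the actual scheme the relays' decoded integer coefficients are determined by how well they approximate the channel gains.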