On Linear Complexity of Finite Sequences : Coding Theory and Applications to Cryptography
We define two metrics on vector spaces over a finite field using the linear complexity of finite sequences. We then develop coding theory notions for these metrics and study their properties. We give a Singleton-like bound as well as constructions of subspaces achieving this bound. We also provide an asymptotic Gilbert-Varshamov-like bound for random subspaces. We show how to reduce the problem of finding codewords with a given Hamming weight to the problem of finding a vector of a given linear complexity. This implies that our new metric can be used for cryptography in a similar way to what is currently done in the code-based setting.
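The linear complexity underlying these metrics, the length of the shortest LFSR that generates a finite sequence, can be computed with the Berlekamp-Massey algorithm. A minimal GF(2) sketch (an illustration of the quantity, not the paper's construction):

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the length of the
    shortest LFSR generating the finite bit sequence `bits`."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1         # current complexity, last update position
    for i in range(n):
        # discrepancy: does the current LFSR predict bits[i]?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

For example, the sequence 0001 has linear complexity 4 (a run of zeros followed by a one forces an LFSR as long as the sequence), while the all-ones sequence has complexity 1.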
Quickest Sequence Phase Detection
A phase detection sequence is a cyclic sequence such that the location of any sufficiently long contiguous subsequence can be determined from a noisy observation of that subsequence. In this paper, we derive bounds on the minimal possible sequence length in the asymptotic limit, and describe some sequence constructions. We further consider multiple phase detection sequences, where the location of a contiguous subsequence of each sequence can be determined simultaneously from a noisy mixture of those subsequences. We study the optimal trade-offs between the lengths of the sequences, and describe some sequence constructions. We compare these phase detection problems to their natural channel coding counterparts, and show a strict separation between the fundamental limits in the multiple sequence case. Both adversarial and probabilistic noise models are addressed.
Comment: To appear in the IEEE Transactions on Information Theory
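In the noiseless special case, a binary de Bruijn sequence is a natural phase detection sequence: every length-n window of the cyclic sequence is distinct, so a clean observation of any window pins down its location. A small sketch of this (noiseless) idea, using the standard Lyndon-word construction rather than any construction from the paper:

```python
def de_bruijn(n):
    """Binary de Bruijn sequence of order n (every length-n cyclic
    window occurs exactly once), via Lyndon-word concatenation."""
    a = [0] * (2 * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def locate(seq, window):
    """Recover the (cyclic) position of a length-n window; this is
    the noiseless analogue of phase detection."""
    n = len(window)
    ext = seq + seq[:n - 1]          # unroll the cycle
    for i in range(len(seq)):
        if ext[i:i + n] == window:
            return i
    return None
```

For order 3 this yields the cyclic sequence 00010111; each of its eight length-3 windows occurs once, so `locate` recovers every phase. The noisy and multiple-sequence settings studied in the paper require more structure than this.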
Maximum-order Complexity and Correlation Measures
We estimate the maximum-order complexity of a binary sequence in terms of its
correlation measures. Roughly speaking, we show that any sequence with small
correlation measure up to a sufficiently large order cannot have very small
maximum-order complexity.
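The maximum-order complexity of a finite sequence is the smallest order k such that some (possibly nonlinear) feedback shift register of order k generates it, equivalently, the smallest k for which no two identical length-k subwords are followed by different symbols. A brute-force sketch of the definition (not an estimation method from the paper):

```python
def max_order_complexity(bits):
    """Smallest k such that every length-k subword of `bits` is
    followed by a unique next symbol, i.e. the order of the shortest
    (possibly nonlinear) feedback shift register generating it."""
    n = len(bits)
    for k in range(n):
        succ = {}        # length-k subword -> symbol that follows it
        consistent = True
        for i in range(n - k):
            w = tuple(bits[i:i + k])
            if succ.setdefault(w, bits[i + k]) != bits[i + k]:
                consistent = False
                break
        if consistent:
            return k
    return n
```

For instance, 010101 has maximum-order complexity 1 (each symbol determines the next), while 0001 has complexity 3.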
Lossy compression of discrete sources via Viterbi algorithm
We present a new lossy compressor for discrete-valued sources. To code a source
sequence, the encoder starts by assigning a certain cost to each possible
reconstruction sequence. It then finds the one that minimizes this cost and
describes it losslessly to the decoder via a universal lossless compressor. The
cost of each candidate is a linear combination of its distance from the source
sequence and a linear function of its empirical distribution of a given order.
The structure of the cost function allows the encoder to employ the Viterbi
algorithm to recover the minimizer of the cost. We identify a choice of the
coefficients comprising the linear function of the empirical distribution used
in the cost function which ensures that the algorithm universally achieves the
optimum rate-distortion performance of any stationary ergodic source in the
limit of long source sequences, provided that the order of the empirical
distribution diverges with the sequence length. Iterative techniques for
approximating the coefficients, which alleviate the computational burden of
finding the optimal coefficients, are proposed and studied.
Comment: 26 pages, 6 figures, Submitted to IEEE Transactions on Information Theory
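Because the cost is per-symbol distortion plus a linear function of empirical transition counts, it decomposes additively along the sequence and a Viterbi recursion over short reconstruction contexts finds its exact minimizer. A minimal first-order binary sketch, with hypothetical coefficients `lam` (one per reconstruction transition) and Hamming distortion; the paper's scheme uses higher-order empirical distributions and specific coefficient choices:

```python
def viterbi_lossy(x, lam, alphabet=(0, 1)):
    """Find the reconstruction y minimizing
        sum_t [ d(x_t, y_t) + lam[y_{t-1}, y_t] ]
    by the Viterbi algorithm; the trellis state is the previous
    reconstruction symbol. `lam` maps (prev, cur) to a coefficient,
    so the second term is linear in y's first-order empirical counts."""
    n = len(x)
    # cost[s]: best cost of a reconstruction prefix ending in symbol s
    cost = {s: int(x[0] != s) for s in alphabet}   # no transition cost at t=0
    path = {s: [s] for s in alphabet}
    for t in range(1, n):
        new_cost, new_path = {}, {}
        for s in alphabet:
            p = min(alphabet, key=lambda q: cost[q] + lam[q, s])
            new_cost[s] = cost[p] + lam[p, s] + int(x[t] != s)
            new_path[s] = path[p] + [s]
        cost, path = new_cost, new_path
    best = min(alphabet, key=lambda s: cost[s])
    return path[best], cost[best]
```

With all coefficients zero the minimizer is the source sequence itself (zero distortion); penalizing symbol changes heavily pushes the reconstruction toward constant sequences, trading distortion for a more compressible output.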