On Unique Decodability
In this paper we revisit the topic of unique decodability and some fundamental
theorems of lossless coding. It is widely believed that, for any discrete
source X, every "uniquely decodable" block code satisfies
E[l(X_1 X_2 ... X_n)] >= H(X_1, X_2, ..., X_n), where X_1, X_2, ..., X_n are
the first n symbols of the source, E[l(X_1 X_2 ... X_n)] is the expected
length of the code for those symbols, and H(X_1, X_2, ..., X_n) is their joint
entropy. We show that, for certain sources with memory, the above inequality
only holds under a limiting definition of "uniquely decodable code". In
particular, the inequality is usually assumed to hold for any "practical code"
due to a debatable application of McMillan's theorem to sources with memory.
We thus propose a clarification of the topic, also providing an extended
version of McMillan's theorem for Markovian sources.
Comment: Accepted for publication, IEEE Transactions on Information Theory
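For the memoryless case, where the bound is uncontested, the inequality E[l] >= H can be checked numerically. The sketch below uses an assumed dyadic source distribution and a prefix-free binary code (both illustrative, not from the paper) and also verifies the Kraft inequality that McMillan's theorem guarantees for uniquely decodable codes:

```python
# Numerical illustration of E[l] >= H for a memoryless source.
# The distribution and code lengths below are assumed for illustration.
import math

probs = [0.5, 0.25, 0.125, 0.125]   # assumed source distribution
lengths = [1, 2, 3, 3]              # lengths of a prefix-free binary code

# McMillan's theorem: any uniquely decodable binary code satisfies
# the Kraft inequality sum_i 2^{-l_i} <= 1.
kraft_sum = sum(2.0 ** -l for l in lengths)
assert kraft_sum <= 1.0

expected_length = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)

print(f"Kraft sum = {kraft_sum}")        # -> 1.0
print(f"E[l]      = {expected_length}")  # -> 1.75
print(f"H(X)      = {entropy}")          # -> 1.75
assert expected_length >= entropy - 1e-12
```

Because the distribution is dyadic, the code above achieves E[l] = H exactly; the paper's point is that for sources with memory the analogous block-wise bound requires more care.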
Decodability Attack against the Fuzzy Commitment Scheme with Public Feature Transforms
The fuzzy commitment scheme is a cryptographic primitive that can be used to
protect biometric templates encoded as fixed-length feature vectors. If
multiple related records generated from the same biometric instance can be
intercepted, their correspondence can be determined using the decodability
attack. In 2011, Kelkboom et al. proposed passing the feature vectors through
a record-specific but public permutation process in order to prevent this
attack. In this paper, it is shown that this countermeasure enables another
attack, also analyzed by Simoens et al. in 2009, which can even allow an
adversary to fully break two related records. The attack may only be feasible
if the protected feature vectors have a reasonably small Hamming distance;
yet, implementations and security analyses must account for this risk. This
paper furthermore shows that, by means of a public transformation, the attack
cannot be prevented in a binary fuzzy commitment scheme based on linear codes.
Fortunately, such transformations can be generated for the non-binary case. In
order to still protect binary feature vectors, one may consider using the
improved fuzzy vault scheme of Dodis et al., which may be secured against
linkability attacks using observations made by Merkle and Tams.
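The core of the decodability attack is that, for a linear code, the XOR of two fuzzy commitments derived from the same biometric instance is itself a (near-)codeword, while the XOR of unrelated records is not. The toy sketch below illustrates this with a 3-fold repetition code and a noiseless template; all names and parameters are illustrative, not taken from the paper:

```python
# Toy sketch of the decodability attack on a binary fuzzy commitment
# built from a linear code (here a (3,1) repetition code, applied bitwise).
# Parameters and helper names are illustrative assumptions.
import random

def encode(bits):
    # (3,1) repetition code: each message bit becomes three channel bits
    return [b for b in bits for _ in range(3)]

def is_near_codeword(v, max_errs):
    # distance from v to the nearest repetition codeword, block by block
    errs = 0
    for i in range(0, len(v), 3):
        block = v[i:i + 3]
        errs += min(sum(block), 3 - sum(block))
    return errs <= max_errs

random.seed(0)
k = 8                                   # message bits per record
n = 3 * k                               # protected template length
feature = [random.randint(0, 1) for _ in range(n)]  # biometric template

# Two protected records from the SAME biometric instance, each masking
# the template with the encoding of an independent random key:
r1 = [f ^ c for f, c in zip(feature, encode([random.randint(0, 1) for _ in range(k)]))]
r2 = [f ^ c for f, c in zip(feature, encode([random.randint(0, 1) for _ in range(k)]))]

# Linearity: r1 XOR r2 = c1 XOR c2 is itself a codeword, so the
# difference decodes with zero errors and the records are linkable.
diff = [a ^ b for a, b in zip(r1, r2)]
print(is_near_codeword(diff, 0))        # -> True
```

With noisy templates the difference is only close to a codeword, which is why the attack's feasibility depends on the Hamming distance between the protected feature vectors, as the abstract notes.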
Constellation Mapping for Physical-Layer Network Coding with M-QAM Modulation
The denoise-and-forward (DNF) method of physical-layer network coding (PNC)
is a promising approach for wireless relaying networks. In this paper, we
consider DNF-based PNC with M-ary quadrature amplitude modulation (M-QAM) and
propose a mapping scheme that maps the superposed M-QAM signal to coded
symbols. The mapping scheme supports both square and non-square M-QAM
modulations, with various original constellation mappings (e.g. binary-coded or
Gray-coded). Subsequently, we evaluate the symbol error rate (SER) and bit
error rate (BER) of M-QAM-modulated PNC using the proposed mapping scheme.
Afterwards, as an application, a rate adaptation scheme for the DNF method of
PNC is proposed. Simulation results show that the rate-adaptive PNC is
advantageous in various scenarios.
Comment: Final version at IEEE GLOBECOM 201
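The idea behind denoise-and-forward mapping can be illustrated in the simplest case, 4-QAM (QPSK), where the I and Q axes behave as independent BPSK channels; this is only a sketch of the principle under a noiseless superposition, not the paper's general M-QAM scheme:

```python
# Minimal sketch of denoise-and-forward mapping for 4-QAM (QPSK),
# treating each axis as an independent BPSK channel. The noiseless
# superposition and function names are illustrative assumptions.
def bpsk(bit):
    # antipodal mapping: 0 -> +1, 1 -> -1
    return 1 - 2 * bit

def denoise_axis(s):
    # superposed BPSK amplitude s is in {-2, 0, +2}:
    # +-2 means the two bits agree (XOR 0); 0 means they differ (XOR 1)
    return 0 if abs(s) == 2 else 1

def relay_map(bits_a, bits_b):
    # each node sends one bit per axis; the relay observes the sum
    # and maps it directly to the network-coded (XOR) bit
    superposed = [bpsk(a) + bpsk(b) for a, b in zip(bits_a, bits_b)]
    return [denoise_axis(s) for s in superposed]

# The relay output equals the bitwise XOR of the two nodes' bits:
print(relay_map([0, 1], [1, 1]))   # -> [1, 0]
```

For larger M-QAM constellations the superposed constellation points are no longer separable per axis in this simple way, which is why a dedicated mapping scheme such as the one proposed in the paper is needed.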
It'll probably work out: improved list-decoding through random operations
In this work, we introduce a framework to study the effect of random
operations on the combinatorial list-decodability of a code. The operations we
consider correspond to row and column operations on the matrix obtained from
the code by stacking the codewords together as columns. This captures many
natural transformations on codes, such as puncturing, folding, and taking
subcodes; we show that many such operations can improve the list-decoding
properties of a code. There are two main points to this. First, our goal is to
advance our (combinatorial) understanding of list-decodability, by
understanding what structure (or lack thereof) is necessary to obtain it.
Second, we use our more general results to obtain a few interesting corollaries
for list decoding:
(1) We show the existence of binary codes that are combinatorially
list-decodable from fraction of errors with optimal rate
that can be encoded in linear time.
(2) We show that any code with relative distance, when randomly
folded, is combinatorially list-decodable fraction of errors with
high probability. This formalizes the intuition for why the folding operation
has been successful in obtaining codes with optimal list decoding parameters;
previously, all arguments used algebraic methods and worked only with specific
codes.
(3) We show that any code which is list-decodable with suboptimal list sizes
has many subcodes which have near-optimal list sizes, while retaining the error
correcting capabilities of the original code. This generalizes recent results
where subspace evasive sets have been used to reduce list sizes of codes that
achieve list decoding capacity.
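The row and column operations the framework studies are concrete manipulations of the matrix whose columns are the codewords. The sketch below builds a toy binary linear code and applies puncturing, folding, and taking a random subcode; the code and function names are illustrative, not the paper's notation:

```python
# Sketch of operations on the codeword matrix (codewords as columns):
# puncturing deletes coordinates, folding groups symbols into a larger
# alphabet, and a subcode keeps a subset of codewords. The toy code
# below is an illustrative assumption.
import random

random.seed(1)
n, k = 8, 3
# toy binary linear code: all F2-combinations of k random generator rows
G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
codewords = []
for m in range(2 ** k):
    msg = [(m >> i) & 1 for i in range(k)]
    codewords.append([sum(b * g for b, g in zip(msg, col)) % 2
                      for col in zip(*G)])

def puncture(code, positions):
    # delete a subset of coordinates (rows of the codeword matrix)
    keep = [i for i in range(len(code[0])) if i not in positions]
    return [[c[i] for i in keep] for c in code]

def fold(code, t):
    # group t consecutive symbols into one symbol over a larger alphabet
    return [[tuple(c[i:i + t]) for i in range(0, len(c), t)] for c in code]

def random_subcode(code, size):
    # keep a random subset of the codewords (columns of the matrix)
    return random.sample(code, size)

punctured = puncture(codewords, {0, 5})   # length drops from 8 to 6
folded = fold(codewords, 2)               # 4 symbols over alphabet {0,1}^2
sub = random_subcode(codewords, 4)        # 4 of the 8 codewords
print(len(punctured[0]), len(folded[0]), len(sub))   # -> 6 4 4
```

The paper's results concern how such random operations affect combinatorial list-decodability; this sketch only shows the operations themselves.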