On formulas for decoding binary cyclic codes
We address the problem of the algebraic decoding of any cyclic code up to the
true minimum distance. For this, we use the classical formulation of the
problem, which is to find the error locator polynomial in terms of the syndromes
of the received word. This is usually done with the Berlekamp-Massey algorithm
in the case of BCH codes and related codes, but for the general case, there is
no generic algorithm to decode cyclic codes. Even in the case of the quadratic
residue codes, which are good codes with a very strong algebraic structure,
there is no available general decoding algorithm. For this particular case of
quadratic residue codes, several authors have worked out, by hand, formulas for
the coefficients of the locator polynomial in terms of the syndromes, using the
Newton identities. This work has to be done for each particular quadratic
residue code, and is more and more difficult as the length is growing.
Furthermore, it is error-prone. We propose to automate these computations,
using elimination theory and Gröbner bases. We prove that, by computing
appropriate Gröbner bases, one automatically recovers formulas for the
coefficients of the locator polynomial in terms of the syndromes.
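The link between syndromes and locator coefficients that these formulas capture can be illustrated with the Newton identities over the integers. This is a toy sketch with hypothetical error locators, not a finite-field computation from the paper:

```python
# Toy check of the Newton identities: for two error locators x1, x2,
# the power sums p_k (the syndromes) determine the elementary symmetric
# functions e1, e2 (the coefficients of the locator polynomial).
# Values are illustrative, not taken from a real cyclic code.
x1, x2 = 3, 5

# Power sums (syndromes): p_k = x1^k + x2^k
p1 = x1 + x2
p2 = x1**2 + x2**2

# Newton's identities: p1 = e1 and p2 = e1*p1 - 2*e2
e1 = p1
e2 = (e1 * p1 - p2) // 2

# The locator polynomial is z^2 - e1*z + e2, whose roots are the locators.
assert e1 == x1 + x2 and e2 == x1 * x2
```

Gröbner-basis elimination automates exactly this kind of solving, but symbolically and over the relevant finite field.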
Worst case QC-MDPC decoder for McEliece cryptosystem
The QC-MDPC variant of the McEliece encryption scheme enjoys relatively small key sizes as well as
a security reduction to hard problems of coding theory. Furthermore, it remains
secure against a quantum adversary and is very well suited to low cost
implementations on embedded devices.
Decoding MDPC codes is achieved with the (iterative) bit flipping algorithm,
as for LDPC codes. Variable time decoders might leak some information on the
code structure (that is on the sparse parity check equations) and must be
avoided. A constant time decoder is easy to emulate, but its running time
depends on the worst case rather than on the average case. So far,
implementations have focused on minimizing the average cost. We show that
tuning the algorithm to reduce the maximal number of iterations differs from
tuning it to reduce the average cost. This provides some indications on
how to engineer the QC-MDPC-McEliece scheme to resist a timing side-channel
attack. Comment: 5 pages, conference ISIT 201
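The bit-flipping idea, and the worst-case-versus-average tension, can be sketched in a few lines. This is a toy single-bit-flip variant on a small parity-check matrix with a fixed iteration budget (real QC-MDPC decoders use much larger sparse matrices and threshold rules):

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=10):
    """Toy bit-flipping decoder: flip the bit involved in the most
    unsatisfied parity checks. The loop always runs max_iters times,
    emulating a decoder dimensioned for the worst case rather than
    exiting early on the average case. Parameters are illustrative."""
    x = y.copy()
    for _ in range(max_iters):           # fixed, worst-case iteration count
        syndrome = H @ x % 2
        if not syndrome.any():           # already a codeword; keep looping
            continue
        counts = H.T @ syndrome          # unsatisfied checks per bit
        x[np.argmax(counts)] ^= 1        # flip the most suspicious bit
    return x

# Hypothetical example: (7,4) Hamming parity checks, single error in bit 0.
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
y = np.array([1,0,0,0,0,0,0])
decoded = bit_flip_decode(H, y)          # recovers the all-zero codeword
```

The point of the paper is that `max_iters` must be chosen for the maximal number of iterations, which calls for a different tuning than minimizing the average.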
Countably Infinite Multilevel Source Polarization for Non-Stationary Erasure Distributions
Polar transforms are central operations in the study of polar codes. This
paper examines polar transforms for non-stationary memoryless sources on
possibly infinite source alphabets. This is the first attempt at source
polarization analysis over infinite alphabets. The source alphabet is defined
to be a Polish group, and we handle the Ar{\i}kan-style two-by-two polar
transform based on the group. Defining erasure distributions based on the
normal subgroup structure, we give recursive formulas of the polar transform
for our proposed erasure distributions. As a result, the recursive formulas
lead to concrete examples of multilevel source polarization with countably
infinite levels when the group is locally cyclic. We derive this result via
elementary techniques in lattice theory. Comment: 12 pages, 1 figure, a short version has been accepted by the 2019
IEEE International Symposium on Information Theory (ISIT2019
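The recursive erasure formulas the paper generalizes can be previewed in the classical binary case. For a binary erasure channel with erasure probability e, Arıkan's two-by-two transform produces a degraded channel with erasure probability 2e - e^2 and an upgraded one with e^2; this is only the binary-alphabet baseline, not the paper's Polish-group setting:

```python
def polarize(eps, n):
    """Erasure probabilities of the 2^n synthetic channels obtained by
    recursively applying Arikan's 2x2 transform to a BEC(eps).
    Classic binary recursion, shown for intuition only; the paper
    extends erasure distributions to groups with normal subgroups."""
    probs = [eps]
    for _ in range(n):
        # Each channel splits into a worse (2e - e^2) and a better (e^2) one.
        probs = [p for e in probs for p in (2*e - e*e, e*e)]
    return probs

channels = polarize(0.5, 3)
# After a few levels most channels drift toward 0 (noiseless) or 1 (useless),
# while the average erasure probability stays at 0.5.
```

The multilevel phenomenon in the paper replaces this two-point limit {0, 1} with countably many levels indexed by the subgroup structure.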
Codeword-Independent Performance of Nonbinary Linear Codes Under Linear-Programming and Sum-Product Decoding
A coded modulation system is considered in which nonbinary coded symbols are
mapped directly to nonbinary modulation signals. It is proved that if the
modulator-channel combination satisfies a particular symmetry condition, the
codeword error rate performance is independent of the transmitted codeword. It
is shown that this result holds for both linear-programming decoders and
sum-product decoders. In particular, this provides a natural modulation mapping
for nonbinary codes mapped to PSK constellations for transmission over
memoryless channels such as AWGN channels or flat fading channels with AWGN. Comment: 5 pages, Proceedings of the 2008 IEEE International Symposium on
Information Theory, Toronto, ON, Canada, July 6-11, 200
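The natural PSK mapping mentioned above can be sketched concretely. Here symbols in Z_q are sent to q-PSK points so that adding a constant to the symbol rotates the signal by a fixed phase; this rotational symmetry is the flavor of condition that makes performance codeword-independent (a toy illustration, not the paper's formal symmetry condition):

```python
import cmath

def psk_map(symbol, q):
    """Map a symbol in {0, ..., q-1} to a point on the unit circle (q-PSK).
    Adding c to the symbol mod q rotates the signal by 2*pi*c/q, so the
    symbol group acts on the constellation by rotations. Hypothetical
    sketch of the natural mapping, not the paper's exact construction."""
    return cmath.exp(2j * cmath.pi * symbol / q)

# Rotating by one symbol step equals multiplying by a fixed phase:
q = 8
for s in range(q):
    assert abs(psk_map((s + 1) % q, q) - psk_map(s, q) * psk_map(1, q)) < 1e-12
```

Because the channel (AWGN or flat fading plus AWGN) is invariant under these rotations, shifting the transmitted codeword by a constant does not change the error statistics.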
LEDAkem: a post-quantum key encapsulation mechanism based on QC-LDPC codes
This work presents a new code-based key encapsulation mechanism (KEM) called
LEDAkem. It is built on the Niederreiter cryptosystem and relies on
quasi-cyclic low-density parity-check codes as secret codes, providing high
decoding speeds and compact keypairs. LEDAkem uses ephemeral keys to foil known
statistical attacks, and takes advantage of a new decoding algorithm that
provides faster decoding than the classical bit-flipping decoder commonly
adopted in this kind of system. The main attacks against LEDAkem are
investigated, taking into account quantum speedups. Some instances of LEDAkem
are designed to achieve different security levels against classical and quantum
computers. Some performance figures obtained through an efficient C99
implementation of LEDAkem are provided. Comment: 21 pages, 3 table
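The compact keypairs come from the quasi-cyclic structure: each circulant block of the parity-check matrix is determined by its first row, and block multiplication reduces to polynomial multiplication modulo x^p - 1. A minimal sketch over GF(2), with toy parameters (real QC-LDPC/QC-MDPC schemes use a large prime p):

```python
def circ_mul(a, b, p):
    """Multiply two p x p binary circulant matrices, each stored as its
    first row, via polynomial multiplication mod (x^p - 1) over GF(2).
    Storing only first rows is what makes quasi-cyclic keys compact.
    Toy parameters for illustration, not the scheme's actual sizes."""
    c = [0] * p
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    c[(i + j) % p] ^= 1   # XOR: addition in GF(2)
    return c

# Multiplying by x (first row [0,1,0,0,0]) cyclically shifts the other factor.
shifted = circ_mul([0, 1, 0, 0, 0], [1, 1, 0, 0, 0], 5)
```

A p x p block thus costs p bits of key material instead of p^2, which is the source of the "compact keypairs" claim.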
Adaptively correcting quantum errors with entanglement
Contrary to the assumption made by most quantum error-correcting codes (QECCs),
phase errors are expected to be much more likely than bit errors in
physical devices. By employing the entanglement-assisted stabilizer formalism,
we develop a new kind of error-correcting protocol which can flexibly trade
error correction abilities between the two types of errors, such that high
error correction performance is achieved both in symmetric and in asymmetric
situations. The characteristics of the QECCs can be optimized in an adaptive
manner during information transmission. The proposed entanglement-assisted
QECCs require only one ebit regardless of the degree of asymmetry at a given
moment and can be decoded in polynomial time. Comment: 5 pages, final submission to ISIT 2011, Saint-Petersburg, Russi