Variations of the McEliece Cryptosystem
Two variations of the McEliece cryptosystem are presented. The first one is
based on a relaxation of the column permutation in the classical McEliece
scrambling process. This is done in such a way that the Hamming weight of the
error, added in the encryption process, can be controlled so that efficient
decryption remains possible. The second variation is based on the use of
spatially coupled moderate-density parity-check codes as secret codes. These
codes are known for their excellent error-correction performance and allow for
a relatively low key size in the cryptosystem. For both variants the security
with respect to known attacks is discussed.
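As background for both variants, the classical McEliece encryption step they modify can be sketched as follows. This is a minimal illustration, not the paper's construction: the toy [7, 4] generator matrix and the error weight t are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy systematic generator matrix of a [7, 4] Hamming code over GF(2).
# In the real scheme the public key is G_pub = S G P (row-scrambled and
# column-permuted); here we use G itself purely for illustration.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def encrypt(m, G_pub, t=1):
    """McEliece-style encryption: c = m G_pub + e with wt(e) = t.

    The error weight t is kept within the decoding radius of the
    secret code so the legitimate receiver can still decode; the
    variants in the abstract control exactly this weight.
    """
    e = np.zeros(G_pub.shape[1], dtype=np.uint8)
    e[rng.choice(G_pub.shape[1], size=t, replace=False)] = 1
    return (m @ G_pub + e) % 2

m = np.array([1, 0, 1, 1], dtype=np.uint8)
c = encrypt(m, G)
# c differs from the clean codeword m G in exactly t = 1 position.
```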
Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes
Motivated by distributed storage applications, we investigate the degree to
which capacity achieving encodings can be efficiently updated when a single
information bit changes, and the degree to which such encodings can be
efficiently (i.e., locally) repaired when a single encoded bit is lost.
Specifically, we first develop conditions under which optimum
error-correction and update-efficiency are possible, and establish that the
number of encoded bits that must change in response to a change in a single
information bit must scale logarithmically in the block-length of the code if
we are to achieve any nontrivial rate with vanishing probability of error over
the binary erasure or binary symmetric channels. Moreover, we show there exist
capacity-achieving codes with this scaling.
With respect to local repairability, we develop tight upper and lower bounds
on the number of remaining encoded bits that are needed to recover a single
lost bit of the encoding. In particular, we show that if the code-rate is ε less
than the capacity, then for optimal codes, the maximum number of codeword
symbols required to recover one lost symbol must scale as log(1/ε).
Several variations on---and extensions of---these results are also developed.
Comment: Accepted to appear in JSA
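The update-efficiency notion above can be made concrete for linear codes: by linearity, flipping information bit i adds row i of the generator matrix to the codeword, so the update cost of bit i is the Hamming weight of that row. A minimal sketch, assuming a toy systematic code (the parity part P is illustrative, not from the paper):

```python
import numpy as np

# Toy systematic generator matrix G = [I | P] over GF(2); the parity
# part P is an illustrative assumption, not a code from the paper.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
G = np.hstack([np.eye(4, dtype=np.uint8), P])

def update_cost(G, i):
    """Number of encoded bits that change when information bit i flips.

    Flipping bit i adds row i of G to the codeword (mod 2), so the
    cost is the Hamming weight of that row.
    """
    return int(G[i].sum())

costs = [update_cost(G, i) for i in range(G.shape[0])]
# The abstract's result concerns how the *maximum* such cost must
# grow (logarithmically in block length) for capacity-approaching codes.
```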
Novel Code-Construction for (3, k) Regular Low Density Parity Check Codes
Communication system links that do not have the ability to retransmit generally rely
on forward error correction (FEC) techniques that make use of error correcting codes
(ECC) to detect and correct errors caused by the noise in the channel. There are
several ECC’s in the literature that are used for the purpose. Among them, the low
density parity check (LDPC) codes have become quite popular owing to the fact that
they exhibit performance closest to the Shannon limit.
This thesis proposes a novel code-construction method for constructing not only (3, k)
regular but also irregular LDPC codes. The choice of (3, k) regular LDPC codes
is made because they have low decoding complexity and a minimum Hamming distance
of at least 4. In this work, the proposed code-construction consists of an
information sub-matrix (Hinf) and an almost lower triangular parity sub-matrix
(Hpar). The core design of the proposed code-construction utilizes expanded
deterministic base matrices in three stages. The deterministic base matrix of
the parity part starts with a triple-diagonal matrix, while the deterministic
base matrix of the information part is an all-ones matrix. The proposed matrix H
is designed to generate various code rates (R) by maintaining the number of rows
in H while only changing the number of columns in Hinf.
All the codes designed and presented in this thesis have no rank deficiency,
require no pre-processing step for encoding, have a non-singular parity part
(Hpar), contain no 4-cycles, and have low encoding complexity of order
(N + g^2), where g^2 << N. The proposed (3, k) regular codes are shown to
perform within 1.44 dB of the Shannon limit at a bit error rate (BER) of 10^-6
when the code rate exceeds R = 0.875. They have BER and block error rate (BLER)
performance comparable to that of other techniques, such as (3, k) regular
quasi-cyclic (QC) and (3, k) regular random LDPC codes, when the code rate is at
least R = 0.7. In addition, it is shown that the proposed (3, 42) regular LDPC
code performs as close as 0.97 dB to the Shannon limit at a BER of 10^-6, with
encoding complexity (1.0225 N), for R = 0.928 and N = 14364, a result that no
other published technique has reached.
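The "no 4-cycles" property claimed above has a simple matrix test: a 4-cycle in the Tanner graph exists iff two columns of H share ones in two or more rows, i.e. iff some off-diagonal entry of H^T H is at least 2. A minimal sketch, with small illustrative matrices that are not the thesis's construction:

```python
import numpy as np

def has_4_cycle(H):
    """True iff the Tanner graph of parity-check matrix H has a 4-cycle.

    Two columns sharing ones in >= 2 rows form a length-4 cycle, so we
    inspect the off-diagonal entries of H^T H over the integers.
    """
    overlap = H.T.astype(int) @ H.astype(int)
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

# Columns 0 and 1 share ones in rows 0 and 1: a 4-cycle.
H_bad = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [0, 0, 1]], dtype=np.uint8)
# Every pair of columns overlaps in at most one row: girth >= 6.
H_good = np.array([[1, 1, 0],
                   [1, 0, 1],
                   [0, 1, 1]], dtype=np.uint8)
```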
Homological Product Codes
Quantum codes with low-weight stabilizers known as LDPC codes have been
actively studied recently due to their simple syndrome readout circuits and
potential applications in fault-tolerant quantum computing. However, all
families of quantum LDPC codes known to date suffer from a poor distance
scaling, limited by the square root of the code length. This is in sharp
contrast with the classical case, where good families of LDPC codes are known
that combine constant encoding rate and linear distance. Here we propose the
first family of good quantum codes with low-weight stabilizers. The new codes
have a constant encoding rate, linear distance, and stabilizers acting on at
most √n qubits, where n is the code length. For comparison, all
previously known families of good quantum codes have stabilizers of linear
weight. Our proof combines two techniques: randomized constructions of good
quantum codes and the homological product operation from algebraic topology. We
conjecture that similar methods can produce good stabilizer codes with
stabilizer weight n^a for any a > 0. Finally, we apply the homological product
to construct new small codes with low-weight stabilizers.
Comment: 49 pages
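The homological product operation mentioned above can be illustrated at the level of boundary maps. In the single-sector picture, two boundary operators dA and dB satisfying dA^2 = dB^2 = 0 over GF(2) combine into d = dA ⊗ I + I ⊗ dB, which again squares to zero because the two cross terms dA ⊗ dB cancel mod 2. A minimal sketch with assumed toy boundary maps, not the codes from the paper:

```python
import numpy as np

def product_boundary(dA, dB):
    """Single-sector homological product over GF(2): d = dA⊗I + I⊗dB.

    If dA^2 = dB^2 = 0 (mod 2), then d^2 = dA^2⊗I + 2(dA⊗dB) + I⊗dB^2
    vanishes mod 2, so d is again a valid boundary operator.
    """
    IA = np.eye(dA.shape[0], dtype=np.uint8)
    IB = np.eye(dB.shape[0], dtype=np.uint8)
    return (np.kron(dA, IB) + np.kron(IA, dB)) % 2

# Toy boundary maps with dA^2 = 0 and dB^2 = 0 over GF(2).
dA = np.array([[0, 1],
               [0, 0]], dtype=np.uint8)
dB = np.array([[0, 0, 1],
               [0, 0, 0],
               [0, 0, 0]], dtype=np.uint8)
d = product_boundary(dA, dB)  # a 6x6 boundary map with d^2 = 0 mod 2
```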