An improvement of the Griesmer bound for some small minimum distances
In this paper we give some lower and upper bounds for the smallest length n(k, d) of a binary linear code with dimension k and minimum distance d. The lower bounds improve the known ones for small d. In the last section we summarize what we know about n(8, d).
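As a concrete reference point for bounds of this type, the classical Griesmer bound gives a lower bound on the length n of a binary [n, k, d] code that can be evaluated directly. A minimal Python sketch (the function name is ours, not from the paper):

```python
import math

def griesmer_bound(k: int, d: int) -> int:
    """Griesmer lower bound on the length n of a binary [n, k, d] linear
    code: n >= sum_{i=0}^{k-1} ceil(d / 2^i)."""
    return sum(math.ceil(d / 2**i) for i in range(k))

# The [15, 4, 8] simplex code meets the bound with equality:
print(griesmer_bound(4, 8))  # -> 15
```

Tabulating griesmer_bound(8, d) for successive d gives the baseline against which improved lower bounds on n(8, d), as in the paper's last section, can be compared.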
Primal-dual distance bounds of linear codes with application to cryptography
Let n(k, d, d⊥) denote the minimum length of a linear code C with dim C = k, d(C) = d and d(C⊥) = d⊥, where d(C) is the minimum Hamming distance of C and d(C⊥) is the minimum Hamming distance of the dual code C⊥. In this paper, we show a lower bound and an upper bound on n(k, d, d⊥). Further, for small values of d and d⊥, we determine n(k, d, d⊥) and give a generator matrix of the optimum linear code. This problem is directly related to the design method of cryptographic Boolean functions suggested by Kurosawa et al.

Comment: 6 pages, using IEEEtran.cls. To appear in IEEE Trans. Inform. Theory, Sept. 2006. Two authors were added in the revised version.
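For small parameters, both distances involved can be checked by brute force over all nonzero codewords. A hedged sketch (helper names are ours), using the [7, 4] Hamming code, whose dual is the [7, 3] simplex code:

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming distance of the binary linear code generated by
    the rows of G, by brute force over all 2^k - 1 nonzero codewords."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

# [7, 4] Hamming code in systematic form G = [I | A]; a generator matrix
# of the dual [7, 3] simplex code is H = [A^T | I].
A = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + A[i] for i in range(4)]
H = [[A[i][r] for i in range(4)] + [int(r == j) for j in range(3)] for r in range(3)]
print(min_distance(G), min_distance(H))  # -> 3 4
```

This exhaustive check is only feasible for small k, but it suffices to verify claimed values of d and d⊥ for the small-parameter optimum codes the paper tabulates.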
Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes
Motivated by distributed storage applications, we investigate the degree to
which capacity achieving encodings can be efficiently updated when a single
information bit changes, and the degree to which such encodings can be
efficiently (i.e., locally) repaired when a single encoded bit is lost.
Specifically, we first develop conditions under which optimum
error-correction and update-efficiency are possible, and establish that the
number of encoded bits that must change in response to a change in a single
information bit must scale logarithmically in the block-length of the code if
we are to achieve any nontrivial rate with vanishing probability of error over
the binary erasure or binary symmetric channels. Moreover, we show there exist
capacity-achieving codes with this scaling.
With respect to local repairability, we develop tight upper and lower bounds
on the number of remaining encoded bits that are needed to recover a single
lost bit of the encoding. In particular, we show that if the code-rate is ε less than the capacity, then for optimal codes, the maximum number of codeword symbols required to recover one lost symbol must scale as log(1/ε).
Several variations on---and extensions of---these results are also developed.

Comment: Accepted to appear in JSAC.
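The update-efficiency notion above has a simple linear-algebra reading: for a linear code x → xG over GF(2), flipping information bit i adds row i of G to the codeword, so the number of encoded bits that change is exactly the Hamming weight of that row. A small illustrative sketch (not from the paper):

```python
def update_costs(G):
    """Per-bit update cost of the linear encoding x -> xG over GF(2):
    flipping information bit i changes the codeword by row i of G,
    so the cost is that row's Hamming weight."""
    return [sum(row) for row in G]

# Systematic [7, 4] Hamming generator matrix (identity | parity part).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(update_costs(G))  # -> [3, 3, 3, 4]
```

The paper's lower bound says the maximum of this list cannot stay bounded for capacity-approaching codes: it must grow logarithmically in the block-length.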
Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs
A novel technique, based on the pseudo-random properties of certain graphs known as expanders, is used to obtain novel simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling, and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
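The Zyablov bound used as the benchmark above is the rate-distance trade-off of a concatenated scheme: an outer MDS code of rate 1 − δ/δ_in combined with an inner code of rate r on the Gilbert-Varshamov bound, optimized over r. It can be evaluated numerically; a sketch under that standard formulation (function names are ours):

```python
import math

def H2(x):
    """Binary entropy function."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def H2_inv(y, tol=1e-12):
    """Inverse of H2 restricted to [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def zyablov_rate(delta, steps=2000):
    """Zyablov bound: R_Z(delta) = max_r r * (1 - delta / H2_inv(1 - r)),
    maximized over inner rates r with H2_inv(1 - r) > delta."""
    best = 0.0
    for s in range(1, steps):
        r = s / steps
        d_in = H2_inv(1 - r)  # inner relative distance on the GV bound
        if d_in > delta:
            best = max(best, r * (1 - delta / d_in))
    return best
```

Since the outer factor 1 − δ/d_in is strictly below 1, the resulting rate sits strictly below the Gilbert-Varshamov rate 1 − H2(δ), which is why explicit constructions exceeding Zyablov, as in this paper's large-alphabet case, are notable.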