Error Correcting Coding for a Non-symmetric Ternary Channel
Ternary channels can be used to model the behavior of some memory devices,
where information is stored in three different levels. This paper considers
error-correcting coding for a ternary channel in which some of the error
transitions are not allowed. The resulting channel is non-symmetric, so
classical linear codes are not optimal for it. We define the
maximum-likelihood (ML) decoding rule for ternary codes over this channel and
show that it is complex to compute, since it depends on the channel error
probability. A simpler alternative decoding rule which depends only on code
properties, called \da-decoding, is then proposed. It is shown that
\da-decoding and ML decoding are equivalent, i.e., \da-decoding is optimal,
under certain conditions. Assuming \da-decoding, we characterize the error
correcting capabilities of ternary codes over the non-symmetric ternary
channel. We also derive an upper bound and a constructive lower bound on the
size of codes, given the code length and the minimum distance. The results
arising from the constructive lower bound are then compared, for short sizes,
to optimal codes (in terms of code size) found by a clique-based search. It is
shown that the proposed construction method gives good codes, and that in some
cases the codes are optimal.
Comment: Submitted to IEEE Transactions on Information Theory. Part of this
work was presented at the Information Theory and Applications Workshop 200
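The dependence of ML decoding on the channel error probability can be illustrated with a toy non-symmetric ternary channel. The transition matrix below is a hypothetical stand-in (the paper's actual channel model is not reproduced here); it simply forbids one outgoing transition per symbol, and the error probability p enters the likelihood directly:

```python
# Toy non-symmetric ternary channel: P[x][y] = Pr(receive y | send x).
# The forbidden transitions (probability 0) are illustrative assumptions,
# not the channel from the paper.
p = 0.1  # assumed channel error probability
P = [
    [1 - p, p,     0.0],   # transition 0 -> 2 not allowed
    [0.0,   1 - p, p],     # transition 1 -> 0 not allowed
    [p,     0.0,   1 - p], # transition 2 -> 1 not allowed
]

def ml_decode(y, code, P):
    """Maximum-likelihood decoding: return the codeword x maximizing P(y|x).

    Note that the likelihood depends on p through P, which is exactly why
    the ML rule is tied to the channel error probability."""
    def likelihood(x):
        prod = 1.0
        for xi, yi in zip(x, y):
            prod *= P[xi][yi]
        return prod
    return max(code, key=likelihood)

code = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]  # toy ternary repetition code
print(ml_decode((0, 1, 0), code, P))  # -> (0, 0, 0)
```

Because some transitions have probability zero, a received word can rule out entire codewords outright, which is the structural asymmetry the paper exploits.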
Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories
Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by ℓ. The constructions utilize known codes for symmetric errors, over small alphabets, to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet, whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
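The core construction idea, protecting large-alphabet symbols via a small-alphabet symmetric-error code applied to the symbols modulo (ℓ+1), can be sketched for an assumed toy parameter choice: maximum error magnitude ℓ = 1, at most t = 1 error, and a length-3 binary repetition code as the small-alphabet code. The names and parameters here are illustrative, not taken from the paper:

```python
# Sketch for l = 1 (errors add +1 at most) and t = 1 (at most one error),
# over the large alphabet Z_8, with the binary repetition code as the
# small-alphabet code correcting one symmetric error. Illustrative only.
Q = 8          # large alphabet size (assumed)
L = 1          # maximum asymmetric error magnitude (assumed)
INNER = [(0, 0, 0), (1, 1, 1)]  # binary repetition code

def is_codeword(x):
    """x over Z_Q is a codeword iff (x mod (L+1)) lies in the inner code."""
    return tuple(s % (L + 1) for s in x) in INNER

def decode(y):
    """Correct up to one asymmetric error of magnitude <= L."""
    psi = tuple(s % (L + 1) for s in y)
    # Decode psi in the inner code (majority vote for the repetition code).
    c_hat = (1, 1, 1) if sum(psi) >= 2 else (0, 0, 0)
    # Because 0 <= e <= L, the error equals its residue mod (L+1).
    eps = tuple((a - b) % (L + 1) for a, b in zip(psi, c_hat))
    return tuple(yi - ei for yi, ei in zip(y, eps))

x = (3, 5, 7)          # all odd, so x mod 2 = (1,1,1) is an inner codeword
assert is_codeword(x)
y = (3, 6, 7)          # one +1 error in the middle position
print(decode(y))       # -> (3, 5, 7)
```

The key property the sketch exercises is that a limited-magnitude error is fully determined by its residue modulo (ℓ+1), so decoding over the small alphabet recovers the error exactly; this is why the small-alphabet size depends only on ℓ.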
Symmetric Disjunctive List-Decoding Codes
A binary code is said to be a disjunctive list-decoding $s_L$-code (LD
$s_L$-code), $s \ge 1$, $L \ge 1$, if the code is identified by the incidence
matrix of a family of finite sets in which the union (or disjunctive sum) of
any $s$ sets can cover not more than $L-1$ other sets of the family. In this
paper, we consider a similar class of binary codes which are based on a {\em
symmetric disjunctive sum} (SDS) of binary symbols. By definition, the
symmetric disjunctive sum (SDS) takes values from the ternary alphabet
$\{0, 1, *\}$, where the symbol $*$ denotes "erasure". Namely: SDS is equal to $0$ ($1$)
if all its binary symbols are equal to $0$ ($1$), otherwise SDS is equal
to $*$. List-decoding codes for the symmetric disjunctive sum are said to be {\em
symmetric disjunctive list-decoding $s_L$-codes} (SLD $s_L$-codes). In the
present paper, we recall some applications of SLD $s_L$-codes which motivate the
concept of the symmetric disjunctive sum. We refine the known relations between
the parameters of LD $s_L$-codes and SLD $s_L$-codes. For the ensemble of binary
constant-weight codes we develop a random coding method to obtain lower bounds
on the rate of these codes. Our lower bounds improve the known random coding
bounds obtained up to now using the ensemble with independent symbols of
codewords.
Comment: 18 pages, 1 figure, 1 table, conference paper
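The symmetric disjunctive sum defined above is straightforward to state in code; the sketch below contrasts it with the ordinary (Boolean OR) disjunctive sum:

```python
def disjunctive_sum(bits):
    """Ordinary disjunctive (Boolean OR) sum of binary symbols."""
    return 1 if any(b == 1 for b in bits) else 0

def sds(bits):
    """Symmetric disjunctive sum: 0 if all inputs are 0, 1 if all are 1,
    otherwise the erasure symbol '*'."""
    if all(b == 0 for b in bits):
        return 0
    if all(b == 1 for b in bits):
        return 1
    return '*'

print(sds([0, 0, 0]))   # -> 0
print(sds([1, 1, 1]))   # -> 1
print(sds([0, 1, 0]))   # -> '*'
```

Unlike the ordinary disjunctive sum, the SDS treats 0 and 1 symmetrically: any mixture of the two symbols is reported as an erasure rather than collapsing to 1.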
Entropic bounds on coding for noisy quantum channels
In analogy with its classical counterpart, a noisy quantum channel is
characterized by a loss, a quantity that depends on the channel input and the
quantum operation performed by the channel. The loss reflects the transmission
quality: if the loss is zero, quantum information can be perfectly transmitted
at a rate measured by the quantum source entropy. By using block coding based
on sequences of n entangled symbols, the average loss (defined as the overall
loss of the joint n-symbol channel divided by n, when n tends to infinity) can
be made lower than the loss for a single use of the channel. In this context,
we examine several upper bounds on the rate at which quantum information can be
transmitted reliably via a noisy channel, that is, with an asymptotically
vanishing average loss while the one-symbol loss of the channel is non-zero.
These bounds on the channel capacity rely on the entropic Singleton bound on
quantum error-correcting codes [Phys. Rev. A 56, 1721 (1997)]. Finally, we
analyze the Singleton bounds when the noisy quantum channel is supplemented
with a classical auxiliary channel.
Comment: 20 pages RevTeX, 10 Postscript figures. Expanded Section II, added 1
figure, changed title. To appear in Phys. Rev. A (May 98)
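For orientation, the entropic Singleton bound cited above generalizes the standard quantum Singleton bound; the standard (non-entropic) form for an $[[n,k,d]]$ quantum code, stated here only as background, reads:

```latex
% Standard quantum Singleton bound for an [[n,k,d]] code
% (the entropic version in the cited paper generalizes this):
k \le n - 2(d - 1)
\quad\Longleftrightarrow\quad
\frac{k}{n} \le 1 - \frac{2(d-1)}{n} .
```

In capacity terms, this caps the rate $k/n$ at which quantum information can be protected against a given number of correctable errors, which is the role the entropic bound plays in the channel-capacity analysis of the abstract.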
New binary and ternary LCD codes
LCD codes are linear codes with important cryptographic applications.
Recently, a method has been presented to transform any linear code into an LCD
code with the same parameters when it is defined over a finite field with
cardinality larger than 3. Hence, the study of LCD codes is mainly open for
the binary and ternary fields. Subfield-subcodes of $J$-affine variety codes are a
generalization of BCH codes which have been successfully used for constructing
good quantum codes. We describe binary and ternary LCD codes constructed as
subfield-subcodes of $J$-affine variety codes and provide some new and good LCD
codes arising from this construction
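Whether a generator matrix yields an LCD code can be checked with Massey's classical criterion: a linear code with generator matrix $G$ over $GF(p)$ is LCD if and only if $G G^T$ is nonsingular over $GF(p)$. A minimal sketch (the example matrix is an arbitrary illustration, not a code from the paper):

```python
def det_mod_p(M, p):
    """Determinant of a square matrix modulo a prime p (Gaussian elimination)."""
    M = [row[:] for row in M]
    n = len(M)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det % p
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)  # Fermat inverse; valid since p is prime
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return det % p

def is_lcd(G, p):
    """Massey's criterion: the code generated by G over GF(p) is LCD
    iff G * G^T is nonsingular mod p. Works for p = 2 (binary) or 3 (ternary)."""
    GGt = [[sum(a * b for a, b in zip(u, v)) % p for v in G] for u in G]
    return det_mod_p(GGt, p) != 0

# The [3,2] binary even-weight code is LCD (its dual {000,111} meets it trivially);
# the [2,1] binary repetition code is self-dual, hence not LCD.
print(is_lcd([[1, 0, 1], [0, 1, 1]], 2))  # -> True
print(is_lcd([[1, 1]], 2))                # -> False
```

The same check over $GF(3)$ applies directly to the ternary codes the abstract constructs.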