On the Peak-to-Mean Envelope Power Ratio of Phase-Shifted Binary Codes
The peak-to-mean envelope power ratio (PMEPR) of a code employed in
orthogonal frequency-division multiplexing (OFDM) systems can be reduced by
permuting its coordinates and by rotating each coordinate by a fixed phase
shift. Motivated by some previous designs of phase shifts using suboptimal
methods, the following question is considered in this paper. For a given binary
code, how much PMEPR reduction can be achieved when the phase shifts are taken
from a 2^h-ary phase-shift keying (2^h-PSK) constellation? A lower bound on the
achievable PMEPR is established, which is related to the covering radius of the
binary code. Generally speaking, the achievable region of the PMEPR shrinks as
the covering radius of the binary code decreases. The bound is then applied to
some well-understood codes, including nonredundant BPSK signaling, BCH codes
and their duals, Reed-Muller codes, and convolutional codes. It is demonstrated
that most (presumably not optimal) phase-shift designs from the literature
attain or approach our bound.
Comment: minor revisions, accepted for IEEE Trans. Commun.
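As a rough illustration of the quantity being bounded, the PMEPR of a codeword's OFDM envelope can be estimated numerically with an oversampled DFT. The sketch below is illustrative only; the function name and the oversampling factor are my assumptions, not taken from the paper.

```python
import numpy as np

def pmepr(codeword, oversample=16):
    """Estimate the PMEPR of the OFDM envelope s(t) = sum_k c_k e^{2*pi*i*k*t}.

    For a unimodular codeword of length n, the mean envelope power is n,
    so PMEPR = max_t |s(t)|^2 / n; the maximum is approximated by sampling
    s(t) on a grid `oversample` times finer than the symbol grid.
    """
    c = np.asarray(codeword, dtype=complex)
    n = len(c)
    # Zero-padding the codeword before the DFT samples the envelope densely.
    padded = np.concatenate([c, np.zeros((oversample - 1) * n)])
    samples = np.fft.fft(padded)
    return np.max(np.abs(samples) ** 2) / n
```

For the all-ones BPSK codeword of length 4 this returns 4.0 (the worst case, a full peak), while the Golay sequence (+1, +1, +1, -1) stays at or below 2, illustrating how sign and phase patterns reduce PMEPR.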
A CRC usefulness assessment for adaptation layers in satellite systems
This paper assesses the real usefulness of CRCs in today's satellite network-to-link adaptation layers in light of enhanced error-control and framing techniques, focusing on the DVB-S and DVB-S2 standards. Indeed, the outer block codes of their FEC schemes (Reed-Solomon and BCH, respectively) can provide very accurate error-detection information to the receiver in addition to their correction capabilities, at virtually no cost. This handy feature could be used to manage on a frame-by-frame basis what CRCs do locally, on the frames' contents, saving the bandwidth and processing load associated with them, and paving the way for enhanced transport of IP over DVB-S2. Mathematical and experimental results clearly show that if FEC has been properly configured for combined error correction and detection, an error escaping FEC decoding undetected is extremely improbable. Under such conditions, it seems possible and attractive to optimize the way global error control is done over satellite links by reducing the role of CRCs, or even by removing them from the overall encapsulation process.
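For reference, the per-frame check whose cost the paper questions amounts to appending and verifying a checksum over the frame contents. A minimal sketch using Python's standard zlib.crc32; the 4-byte big-endian trailer layout is an assumption for illustration, not the DVB encapsulation format.

```python
import zlib

def append_crc(payload: bytes) -> bytes:
    """Append a CRC-32 trailer over the payload, as an adaptation layer might."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received
```

A single flipped bit anywhere in the frame makes check_crc return False; the paper's argument is that a well-configured outer FEC code already delivers an equivalent per-frame verdict, making this trailer redundant.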
On the Properties and Complexity of Multicovering Radii
People rely on the ability to transmit information over channels of communication that are subject to noise and interference. This makes the ability to detect and recover from errors extremely important. Coding theory addresses this need for reliability. A fundamental question of coding theory is whether and how we can correct the errors in a message that has been subjected to interference. One answer comes from structures known as error-correcting codes. A well-studied parameter associated with a code is its covering radius: the smallest radius such that every vector in the Hamming space of the code is contained in a ball of that radius centered around some codeword. The covering radius relates to an important decoding strategy known as nearest-neighbor decoding. The multicovering radius is a generalization of the covering radius that was proposed by Klapper [11] in the course of studying stream ciphers. In this work we develop techniques for finding the multicovering radius of specific codes. In particular, we study the even-weight code, the 2-error-correcting BCH code, and linear codes with covering radius one. We also study questions involving the complexity of finding the multicovering radius of codes. We show: lower bounding the m-covering radius of an arbitrary binary code is NP-complete when m is polynomial in the length of the code; lower bounding the m-covering radius of a linear code is Σ^p_2-complete when m is polynomial in the length of the code; if P is not equal to NP, then the m-covering radius of an arbitrary binary code cannot be approximated within a constant factor or within a factor n^ε, where n is the length of the code and ε < 1, in polynomial time (note that the case m = 1 was also previously unknown); if NP is not equal to Σ^p_2, then the m-covering radius of a linear code cannot be approximated within a constant factor or within a factor n^ε, where n is the length of the code and ε < 1, in polynomial time.
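For very small codes, the covering radius defined above can be computed directly from the definition. A brute-force sketch (exponential in the length n, so purely illustrative):

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two binary tuples."""
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code, n):
    """Smallest r such that every vector in {0,1}^n lies within distance r
    of some codeword: the maximum, over the whole space, of the distance
    to the nearest codeword."""
    return max(min(hamming(v, c) for c in code)
               for v in product((0, 1), repeat=n))
```

For the length-3 repetition code {000, 111} this gives 1: every vector of length 3 differs from 000 or from 111 in at most one position.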
Ensuring message embedding in wet paper steganography
Syndrome coding was proposed by Crandall in 1998 as a method to stealthily embed a message in a cover medium through the use of bounded decoding. In 2005, Fridrich et al. introduced wet paper codes to improve the undetectability of the embedding by enabling the sender to lock some components of the cover data, according to the nature of the cover medium and the message. Unfortunately, almost all existing methods solving the bounded decoding syndrome problem, with or without locked components, have a non-zero probability of failure. In this paper, we introduce a randomized syndrome coding which guarantees embedding success with probability one. We analyze the parameters of this new scheme in the case of perfect codes.
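To make the setting concrete: classical syndrome coding with the [7,4] Hamming code (a perfect code, the case analyzed in the paper) embeds 3 message bits into 7 cover bits by changing at most one of them. The sketch below illustrates only this baseline principle and omits the wet-paper locking of components; function names are mine.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j (1-indexed)
# is the binary representation of j, most significant bit in row 0.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover, message):
    """Flip at most one of the 7 cover bits so that the stego vector's
    syndrome H @ stego (mod 2) equals the 3-bit message."""
    cover, message = np.asarray(cover), np.asarray(message)
    syndrome = (H @ cover - message) % 2
    stego = cover.copy()
    if syndrome.any():
        # Read the syndrome as a binary number; it names the column to flip.
        stego[int("".join(map(str, syndrome)), 2) - 1] ^= 1
    return stego

def extract(stego):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return (H @ np.asarray(stego)) % 2
```

The embedding always succeeds for unlocked components; the failure probability the paper addresses arises precisely when locked (wet) positions forbid the required flip.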
Diameter, Covering Index, Covering Radius and Eigenvalues
Fan Chung has recently derived an upper bound on the diameter of a regular graph as a function of the second largest eigenvalue in absolute value. We generalize this bound to the case of bipartite biregular graphs, and regular directed graphs. We also observe the connection with the primitivity exponent of the adjacency matrix. This applies directly to the covering number of Finite Non-Abelian Simple Groups (FINASIG). We generalize this latter problem to primitive association schemes, such as the conjugacy scheme of Paige's simple loop. By noticing that the covering radius of a linear code is the diameter of a Cayley graph on the cosets, we derive an upper bound on the covering radius of a code as a function of the scattering of the weights of the dual code. When the code has even weights, we obtain a bound on the covering radius as a function of the dual distance d⊥ which is tighter, for d⊥ large enough, than the recent bounds of Tietäväinen.
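Chung's eigenvalue bound for a connected k-regular graph on n vertices, diam(G) ≤ ⌈log(n−1)/log(k/λ)⌉ where λ is the second largest eigenvalue in absolute value, can be sanity-checked numerically. The Petersen graph below is my own choice of test case, not an example from the paper.

```python
import numpy as np
from collections import deque

def diameter(adj):
    """Exact graph diameter via breadth-first search from every vertex."""
    n = len(adj)
    ecc = []
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc.append(max(dist.values()))
    return max(ecc)

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
n, k = 10, 3
A = np.zeros((n, n), dtype=int)
for i in range(5):
    for u, v in [(i, (i + 1) % 5), (i, i + 5), (i + 5, 5 + (i + 2) % 5)]:
        A[u, v] = A[v, u] = 1

evals = np.linalg.eigvalsh(A)        # ascending; evals[-1] == k for a connected regular graph
lam = np.max(np.abs(evals[:-1]))     # second largest eigenvalue in absolute value
bound = int(np.ceil(np.log(n - 1) / np.log(k / lam)))
```

For the Petersen graph the true diameter is 2 while the bound evaluates to 6: loose here, but valid, as it must be.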
Covering Radius 1985-1994
We survey important developments in the theory of covering radius during the period 1985-1994. We present lower bounds, constructions and upper bounds, the linear and nonlinear cases, density and asymptotic results, normality, specific classes of codes, covering radius and dual distance, tables, and open problems.
Two-batch liar games on a general bounded channel
We consider an extension of the 2-person R\'enyi-Ulam liar game in which lies
are governed by a channel , a set of allowable lie strings of maximum length
. Carole selects , and Paul makes -ary queries to uniquely
determine . In each of rounds, Paul weakly partitions and asks for such that . Carole responds with some
, and if , then accumulates a lie . Carole's string of
lies for must be in the channel . Paul wins if he determines within
rounds. We further restrict Paul to ask his questions in two off-line
batches. We show that for a range of sizes of the second batch, the maximum
size of the search space for which Paul can guarantee finding the
distinguished element is as ,
where is the number of lie strings in of maximum length . This
generalizes previous work of Dumitriu and Spencer, and of Ahlswede, Cicalese,
and Deppe. We extend Paul's strategy to solve also the pathological liar
variant, in a unified manner which gives the existence of asymptotically
perfect two-batch adaptive codes for the channel .Comment: 26 page
Improved Decoding of Staircase Codes: The Soft-aided Bit-marking (SABM) Algorithm
Staircase codes (SCCs) are typically decoded using iterative bounded-distance decoding (BDD) and hard decisions. In this paper, a novel decoding algorithm is proposed which partially uses soft information from the channel. The proposed algorithm is based on marking a certain number of highly reliable and highly unreliable bits. These marked bits are used to improve the miscorrection-detection capability of the SCC decoder and the error-correcting capability of BDD. For SCCs with t-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) component codes, our algorithm improves upon standard SCC decoding by up to ~dB at a bit-error rate (BER) of . The proposed algorithm is shown to achieve almost half of the gain achievable by an idealized decoder with this structure. A complexity analysis based on the number of additional calls to the component BDD decoder shows that the relative complexity increase is only around at a BER of . This additional complexity is shown to decrease as the channel quality improves. Our algorithm is also extended (with minor modifications) to product codes. The simulation results show that in this case, the algorithm offers gains of up to ~dB at a BER of .
Comment: 10 pages, 12 figures
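The bit-marking step can be pictured as thresholding channel log-likelihood ratios (LLRs): bits with large |LLR| are treated as highly reliable, bits with small |LLR| as highly unreliable. The sketch below is my own illustration of that idea; the threshold `delta` and the function name are assumptions, not the paper's specification.

```python
import numpy as np

def mark_bits(llrs, delta):
    """Hard-decide each bit from its LLR and mark it as highly reliable
    (|LLR| >= delta) or highly unreliable (|LLR| < delta)."""
    llrs = np.asarray(llrs, dtype=float)
    hard = (llrs < 0).astype(int)   # LLR sign convention: positive -> bit 0
    reliable = np.abs(llrs) >= delta
    return hard, reliable, ~reliable
```

A decoder can then reject a BDD correction that would flip a bit marked highly reliable (miscorrection detection) and, when BDD fails, try flipping bits marked highly unreliable before re-decoding.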
Additive Asymmetric Quantum Codes
We present a general construction of asymmetric quantum codes based on
additive codes under the trace Hermitian inner product. Various families of
additive codes over F_4 are used in the construction of many asymmetric quantum codes over F_4.
Comment: Accepted for publication March 2, 2011, IEEE Transactions on Information Theory, to appear.