2,347 research outputs found

    A Closed-Form Shave from Occam's Quantum Razor: Exact Results for Quantum Compression

    The causal structure of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one, an advantage that increases with codeword length. While previously difficult to compute, we express the quantum advantage in closed form using spectral decomposition, leading to direct computation of the quantum communication cost at all encoding lengths, including infinite. This makes clear how finite-codeword compression is controlled by the classical process's cryptic order and allows us to analyze structure within the length-asymptotic regime of infinite-cryptic-order (and infinite-Markov-order) processes. Comment: 21 pages, 13 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/eqc.ht

    Fast performance estimation of block codes

    Importance sampling is used in this paper to address the classical yet important problem of performance estimation of block codes. Simulation distributions comprising discrete- and continuous-mixture probability densities are motivated and used for this application. These mixtures are employed in concert with the so-called g-method, a conditional importance sampling technique that more effectively exploits knowledge of the underlying input distributions. For performance estimation, the emphasis is on bit-by-bit maximum a posteriori probability decoding, but message-passing algorithms for certain codes have also been investigated. Considered here are single parity check codes, multidimensional product codes, and, briefly, low-density parity-check codes. Several error-rate results are presented for these various codes, together with the performance of the simulation techniques.
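As a minimal illustration of the idea behind such simulation distributions (a generic mean-translation sketch, not the paper's g-method or mixture densities), the snippet below estimates the Gaussian tail probability P(Z > t), a building block of error-rate estimation over AWGN channels, by sampling from a proposal shifted into the rare-event region and reweighting:

```python
import math
import random

def is_tail_estimate(t, n=100_000, seed=1):
    """Estimate P(Z > t), Z ~ N(0,1), by importance sampling:
    draw from the shifted proposal N(t, 1) and reweight each sample
    by the likelihood ratio phi(x) / phi(x - t) = exp(t^2/2 - t*x)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)  # proposal centred on the rare region
        if x > t:
            acc += math.exp(t * t / 2 - t * x)
    return acc / n

t = 5.0
exact = 0.5 * math.erfc(t / math.sqrt(2))  # exact tail for comparison
est = is_tail_estimate(t)
print(f"IS estimate {est:.3e}  vs exact {exact:.3e}")
```

Naive Monte Carlo would need on the order of 10^7 samples to even see one event at t = 5; the shifted proposal hits the rare region on roughly half its draws, which is why such techniques are attractive for low error-rate regimes.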

    Hybrid Coding Technique for Pulse Detection in an Optical Time Domain Reflectometer

    The paper introduces a novel hybrid coding technique for improved pulse detection in an optical time domain reflectometer. The hybrid scheme combines Simplex codes with signal averaging to form a coding technique that considerably reduces the processing time needed to extract a specified coding gain compared with existing techniques. The paper quantifies the coding gain of the hybrid scheme mathematically and provides simulation results in direct agreement with the theoretical performance. Furthermore, the hybrid scheme has been tested on our self-developed OTDR.
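The two gains being combined can be illustrated with the textbook expressions for Simplex coding and trace averaging in OTDR; these are standard results, not the paper's own quantification, and the example parameters (code length 255, 64 averages) are purely illustrative:

```python
import math

def simplex_coding_gain(L):
    """Textbook SNR coding gain of a length-L Simplex code in OTDR."""
    return (L + 1) / (2 * math.sqrt(L))

def averaging_gain(N):
    """SNR gain from averaging N repeated traces (noise power drops as 1/N)."""
    return math.sqrt(N)

# Illustrative combination: 255-bit Simplex code plus 64 trace averages.
total = simplex_coding_gain(255) * averaging_gain(64)
print(f"combined SNR gain = {total:.1f}  ({20 * math.log10(total):.1f} dB)")
```

The point of a hybrid scheme is that the coding gain grows roughly as sqrt(L)/2 while averaging gain grows only as sqrt(N), so trading some averaging time for a longer code can reach a target SNR with far fewer transmitted traces.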

    Design and Analysis of Nonbinary LDPC Codes for Arbitrary Discrete-Memoryless Channels

    We present an analysis, under iterative decoding, of coset LDPC codes over GF(q), designed for use over arbitrary discrete-memoryless channels (particularly nonbinary and asymmetric channels). We use a random-coset analysis to produce an effect similar to output symmetry with binary channels. We show that the random selection of the nonzero elements of the GF(q) parity-check matrix induces a permutation-invariance property on the densities of the decoder messages, which simplifies their analysis and approximation. We generalize several properties, including symmetry and stability, from the analysis of binary LDPC codes. We show that under a Gaussian approximation, the entire (q-1)-dimensional distribution of the vector messages is described by a single scalar parameter (like the distributions of binary LDPC messages). We apply this property to develop EXIT charts for our codes. We use appropriately designed signal constellations to obtain substantial shaping gains. Simulation results indicate that our codes outperform multilevel codes at short block lengths. We also present simulation results for the AWGN channel, including results within 0.56 dB of the unconstrained Shannon limit (i.e. not restricted to any signal constellation) at a spectral efficiency of 6 bits/s/Hz. Comment: To appear, IEEE Transactions on Information Theory (submitted October 2004, revised and accepted for publication November 2005). The material in this paper was presented in part at the 41st Allerton Conference on Communications, Control and Computing, October 2003, and at the 2005 IEEE International Symposium on Information Theory.

    Tight Bounds on the Rényi Entropy via Majorization with Applications to Guessing and Compression

    This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with a finite support is derived as a function of the size of the support and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which is focused on the Shannon entropy, and it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and lossless data compression of discrete memoryless sources. Comment: The paper was published in the Entropy journal (special issue on Probabilistic Methods in Information Theory, Hypothesis Testing, and Coding), vol. 20, no. 12, paper no. 896, November 22, 2018. Available online at https://www.mdpi.com/1099-4300/20/12/89
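For reference, the Rényi entropy of order α that these bounds concern is, in its standard definition (the paper's own notation may differ),

```latex
H_\alpha(X) = \frac{1}{1-\alpha}\,\log \sum_{x \in \mathcal{X}} P_X(x)^{\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
```

with the Shannon entropy H(X) recovered in the limit α → 1.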

    Authentication of Satellite Navigation Signals by Wiretap Coding and Artificial Noise

    In order to combat the spoofing of global navigation satellite system (GNSS) signals, we propose a novel approach for satellite signal authentication based on information-theoretic security. In particular, we superimpose on the navigation signal an authentication signal containing a secret message corrupted by artificial noise (AN), likewise transmitted by the satellite. We impose the following properties: a) the authentication signal is synchronous with the navigation signal, b) the authentication signal is orthogonal to the navigation signal, and c) the secret message is undecodable by the attacker due to the presence of the AN. The legitimate receiver synchronizes with the navigation signal and stores the samples of the authentication signal with the same synchronization. After the transmission of the authentication signal, additional information is made public through a separate public asynchronous authenticated channel (e.g., a secure Internet connection), allowing the receiver to a) decode the secret message, thus overcoming the effects of the AN, and b) verify the secret message. We assess the performance of the proposed scheme by analyzing both the secrecy capacity of the authentication message and the attack success probability under various attack scenarios. A comparison with existing approaches shows the effectiveness of the proposed scheme.
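A toy numerical sketch of the superposition idea follows. All signal models, power levels, and variable names here are illustrative assumptions, not the paper's actual scheme: an authentication symbol is buried under strong artificial noise, and only a receiver that later learns the AN (via the authenticated side channel) can strip it and correlate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

# Toy +/-1 spreading sequences for the navigation and authentication signals.
nav = rng.choice([-1.0, 1.0], size=n)
auth = rng.choice([-1.0, 1.0], size=n)
auth -= nav * (auth @ nav) / n           # project out nav -> orthogonality

secret_bit = +1                           # one secret symbol, amplitude 0.1
an = rng.normal(0.0, 3.0, size=n)         # artificial noise, power >> signal

tx = nav + 0.1 * secret_bit * auth + an   # superimposed satellite signal

# Attacker correlates immediately, without knowing the AN: the statistic
# is dominated by the AN term and carries little usable information.
attacker_stat = (tx @ auth) / n

# Legitimate receiver stores the samples; once the AN is published it
# subtracts the AN and correlates, recovering the secret symbol cleanly.
receiver_stat = ((tx - an) @ auth) / n

print(f"attacker: {attacker_stat:+.3f}   receiver: {receiver_stat:+.3f}")
```

The receiver's statistic lands near the transmitted amplitude (about +0.1), while the attacker's is swamped by AN of comparable or larger magnitude; the real scheme, of course, argues this information-theoretically rather than by simulation.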

    Tight and simple Web graph compression

    Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is, however, hampered by the necessity of storing a major part of huge graphs in external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented to represent Web graphs succinctly while still providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in successive lists, more general grammar-based transformations, or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as an engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for a better compression ratio, achieving a more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs. Comment: 15 pages.
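The differential (gap) encoding of adjacency lists that these techniques build on can be sketched as follows. The helper names are hypothetical, and in a real compressor the small gaps would then be fed to a variable-length integer code (e.g. a zeta or Elias code) rather than stored as plain integers:

```python
def gap_encode(adj):
    """Differential (gap) encoding of a sorted adjacency list:
    keep the first target node, then successive differences."""
    if not adj:
        return []
    return [adj[0]] + [b - a for a, b in zip(adj, adj[1:])]

def gap_decode(gaps):
    """Invert gap_encode by taking a running prefix sum."""
    out, acc = [], 0
    for g in gaps:
        acc += g
        out.append(acc)
    return out

# Outgoing links of one page; locality makes most gaps small.
links = [1002, 1007, 1008, 1100]
gaps = gap_encode(links)           # [1002, 5, 1, 92]
assert gap_decode(gaps) == links
```

Because consecutive link targets in Web graphs tend to be numerically close (locality of hyperlinks), most gaps are tiny and compress to a few bits each, which is what the similarity- and residual-encoding refinements described above further exploit.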