
    Entropy of Highly Correlated Quantized Data

    This paper considers the entropy of highly correlated quantized samples. Two results are shown. The first concerns sampling and identically scalar quantizing a stationary continuous-time random process over a finite interval. It is shown that if the process crosses a quantization threshold with positive probability, then the joint entropy of the quantized samples tends to infinity as the sampling rate goes to infinity. The second result provides an upper bound on the rate at which the joint entropy tends to infinity, in the case of an infinite-level uniform threshold scalar quantizer and a stationary Gaussian random process. Specifically, an asymptotic formula for the conditional entropy of one quantized sample conditioned on the previous quantized sample is derived. At high sampling rates, these results indicate a sharp contrast between the large encoding rate (in bits/sec) required by a lossy source code consisting of a fixed scalar quantizer and an ideal, sampling-rate-adapted lossless code, and the bounded encoding rate required by an ideal lossy source code operating at the same distortion.
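
    As a rough illustration of the first result (not the paper's method), the sketch below simulates a stationary Ornstein-Uhlenbeck process as an assumed unit-variance Gaussian source, applies a unit-step infinite-level uniform quantizer, and empirically estimates the conditional entropy of one quantized sample given the previous one at several sampling intervals. The process parameters, step size, and sample count are arbitrary illustrative choices; the point is that the per-sample conditional entropy shrinks as the sampling interval shrinks while the implied rate in bits/sec keeps growing.

        # Illustrative sketch (assumed OU source, unit-step quantizer); not from the paper.
        import numpy as np
        from collections import Counter

        def conditional_entropy_bits(pairs):
            # H(B | A) in bits, estimated from empirical (a, b) pair counts.
            joint = Counter(pairs)
            marg = Counter(a for a, _ in pairs)
            n = len(pairs)
            h_joint = -sum(c / n * np.log2(c / n) for c in joint.values())
            h_marg = -sum(c / n * np.log2(c / n) for c in marg.values())
            return h_joint - h_marg

        rng = np.random.default_rng(0)
        theta, n_samples = 1.0, 200_000            # OU rate and sample count (arbitrary)
        for delta in (1.0, 0.1, 0.01):             # sampling intervals; high rate = small delta
            a = np.exp(-theta * delta)             # AR(1) coefficient of the sampled OU process
            x = np.empty(n_samples)
            x[0] = rng.standard_normal()
            for k in range(1, n_samples):
                x[k] = a * x[k - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
            q = np.floor(x).astype(int)            # infinite-level uniform quantizer, step 1
            h = conditional_entropy_bits(list(zip(q[:-1], q[1:])))
            print(f"delta={delta:5.2f}  H(next|prev)={h:.3f} bits  rate ≈ {h/delta:.1f} bits/sec")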

    The Ramsey Theory of Henson graphs

    Analogues of Ramsey's Theorem for infinite structures such as the rationals or the Rado graph have been known for some time. In this context, one looks for optimal bounds, called degrees, for the number of colors in an isomorphic substructure rather than one color, as that is often impossible. Such theorems for Henson graphs, however, remained elusive, due to a lack of techniques for handling forbidden cliques. Building on the author's recent result for the triangle-free Henson graph, we prove that for each k ≥ 4, the k-clique-free Henson graph has finite big Ramsey degrees, the appropriate analogue of Ramsey's Theorem. We develop a method for coding copies of Henson graphs into a new class of trees, called strong coding trees, and prove Ramsey theorems for these trees which are applied to deduce finite big Ramsey degrees. The approach here provides a general methodology opening further study of big Ramsey degrees for ultrahomogeneous structures. The results have bearing on topological dynamics via work of Kechris, Pestov, and Todorcevic and of Zucker.
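
    For readers unfamiliar with the term, the standard general definition of finite big Ramsey degrees is recorded below as background; it is not the paper's specific formulation.

        % Background definition (standard usage, not this paper's specific formulation):
        % an infinite structure S has finite big Ramsey degrees if every finite
        % substructure A admits a least bound T(A,S) < \infty such that
        \forall \ell \in \mathbb{N},\ \forall c : \binom{S}{A} \to \{1, \dots, \ell\},\
        \exists S' \subseteq S \text{ with } S' \cong S \text{ and }
        \Bigl|\, c\bigl[ \tbinom{S'}{A} \bigr] \Bigr| \le T(A,S),
        % where \binom{S}{A} denotes the set of substructures of S isomorphic to A.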

    Space-time autocoding

    Prior treatments of space-time communications in Rayleigh flat fading generally assume that channel coding covers either one fading interval (in which case there is a nonzero “outage capacity”) or multiple fading intervals (in which case there is a nonzero Shannon capacity). However, we establish conditions under which channel codes span only one fading interval and yet are arbitrarily reliable. In short, space-time signals are their own channel codes. We call this phenomenon space-time autocoding, and the accompanying capacity the space-time autocapacity. Let an M-transmitter antenna, N-receiver antenna Rayleigh flat fading channel be characterized by an M×N matrix of independent propagation coefficients, distributed as zero-mean, unit-variance complex Gaussian random variables. This propagation matrix is unknown to the transmitter, remains constant during a T-symbol coherence interval, and the total transmit power is fixed. Let the coherence interval and number of transmitter antennas be related as T=βM for some constant β. A T×M matrix-valued signal, associated with R·T bits of information for some rate R, is transmitted during the T-symbol coherence interval. Then there is a positive space-time autocapacity Ca such that for all R<Ca, the block probability of error goes to zero as the pair (T, M)→∞ such that T/M=β. The autocoding effect occurs whether or not the propagation matrix is known to the receiver, and Ca=Nlog(1+ρ) in either case, independently of β, where ρ is the expected signal-to-noise ratio (SNR) at each receiver antenna. Lower bounds on the cutoff rate derived from random unitary space-time signals suggest that the autocoding effect manifests itself for relatively small values of T and M. For example, within a single coherence interval of duration T=16, for M=7 transmitter antennas and N=4 receiver antennas, and an 18-dB expected SNR, a total of 80 bits (corresponding to rate R=5) can theoretically be transmitted with a block probability of error less than 10^-9, all without any training or knowledge of the propagation matrix.
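
    A quick sanity check of the closing example, assuming the abstract's Ca = N log(1+ρ) is read with base-2 logarithms so that it is in bits per symbol (an interpretation, since the abstract does not state the base): with N=4 receive antennas and an 18-dB SNR, the rate R=5 implied by 80 bits in T=16 symbols sits well below the autocapacity. The snippet only evaluates the formula and draws one propagation matrix of the assumed form; it does not reproduce the cutoff-rate bound.

        # Sanity check of the quoted example; log base 2 is an assumption.
        import numpy as np

        T, M, N = 16, 7, 4                 # coherence interval, Tx antennas, Rx antennas
        rho = 10 ** (18.0 / 10)            # 18-dB expected SNR per receive antenna
        Ca = N * np.log2(1 + rho)          # autocapacity in bits per channel use
        R = 80 / T                         # 80 bits over T = 16 symbols -> R = 5

        print(f"rho = {rho:.1f}, Ca = {Ca:.2f} bits/symbol, R = {R:.1f} bits/symbol, R < Ca: {R < Ca}")

        # One draw of the M x N propagation matrix of i.i.d. zero-mean,
        # unit-variance complex Gaussian coefficients assumed by the model.
        rng = np.random.default_rng(0)
        H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)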

    Lossless and near-lossless source coding for multiple access networks

    A multiple access source code (MASC) is a source code designed for the following network configuration: a pair of correlated information sequences {X_i}_{i=1}^∞ and {Y_i}_{i=1}^∞ is drawn independent and identically distributed (i.i.d.) according to joint probability mass function (p.m.f.) p(x, y); the encoder for each source operates without knowledge of the other source; the decoder jointly decodes the encoded bit streams from both sources. The work of Slepian and Wolf describes all rates achievable by MASCs of infinite coding dimension (n → ∞) and asymptotically negligible error probabilities (P_e^(n) → 0). In this paper, we consider the properties of optimal instantaneous MASCs with finite coding dimension (n < ∞) and both lossless (P_e^(n) = 0) and near-lossless (P_e^(n) > 0) performance. The interest in near-lossless codes is inspired by the discontinuity in the limiting rate region at P_e^(n) = 0 and the resulting performance benefits achievable by using near-lossless MASCs as entropy codes within lossy MASCs. Our central results include generalizations of Huffman and arithmetic codes to the MASC framework for arbitrary p(x, y), n, and P_e^(n), and polynomial-time design algorithms that approximate these optimal solutions.
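
    The limiting (Slepian-Wolf) rate region referred to above is straightforward to compute for any joint p.m.f.: R_X ≥ H(X|Y), R_Y ≥ H(Y|X), and R_X + R_Y ≥ H(X,Y). The sketch below evaluates these bounds for a small made-up p(x, y) (purely illustrative, not from the paper); the paper's finite-dimension, instantaneous MASCs are judged against this region.

        # Slepian-Wolf bounds for an illustrative joint p.m.f. (not from the paper).
        import numpy as np

        p = np.array([[0.40, 0.10],        # rows: values of X, columns: values of Y
                      [0.05, 0.45]])       # hypothetical p(x, y)

        def entropy_bits(dist):
            # Entropy in bits of a probability vector or matrix.
            d = dist[dist > 0]
            return float(-(d * np.log2(d)).sum())

        H_XY = entropy_bits(p)                                           # H(X, Y)
        H_X, H_Y = entropy_bits(p.sum(axis=1)), entropy_bits(p.sum(axis=0))

        print(f"R_X >= H(X|Y) = {H_XY - H_Y:.3f} bits/sample")
        print(f"R_Y >= H(Y|X) = {H_XY - H_X:.3f} bits/sample")
        print(f"R_X + R_Y >= H(X,Y) = {H_XY:.3f} bits/sample")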