
    Empirical processes, typical sequences and coordinated actions in standard Borel spaces

    This paper proposes a new notion of typical sequences on a wide class of abstract alphabets (so-called standard Borel spaces), which is based on approximations of memoryless sources by empirical distributions uniformly over a class of measurable "test functions." In the finite-alphabet case, we can take all uniformly bounded functions and recover the usual notion of strong typicality (or typicality under the total variation distance). For a general alphabet, however, this function class turns out to be too large and must be restricted. With this in mind, we define typicality with respect to any Glivenko-Cantelli function class (i.e., a function class that admits a Uniform Law of Large Numbers) and demonstrate its power by giving simple derivations of the fundamental limits on the achievable rates in several source coding scenarios, in which the relevant operational criteria pertain to reproducing empirical averages of a general-alphabet stationary memoryless source with respect to a suitable function class. Comment: 14 pages, 3 PDF figures; accepted to IEEE Transactions on Information Theory
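    As a rough illustration of the definition above, here is a minimal Python sketch (the `is_typical` helper and all parameters are illustrative assumptions, not from the paper): a sequence is declared eps-typical when every empirical average over a finite class of test functions is within eps of the true expectation. Taking indicator functions of each symbol in the finite-alphabet case recovers strong typicality.

```python
import numpy as np

def is_typical(samples, test_functions, expectations, eps):
    """eps-typicality w.r.t. a finite class of test functions."""
    for f, mean in zip(test_functions, expectations):
        empirical = np.mean([f(x) for x in samples])
        if abs(empirical - mean) > eps:
            return False
    return True

# Finite-alphabet check: indicators of each symbol recover strong
# typicality (empirical distribution close to the true distribution).
rng = np.random.default_rng(0)
probs = {0: 0.5, 1: 0.3, 2: 0.2}
xs = rng.choice(list(probs), size=10_000, p=list(probs.values()))
fs = [lambda x, a=a: float(x == a) for a in probs]
print(is_typical(xs, fs, list(probs.values()), eps=0.02))  # True w.h.p.
```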

    On the effect of quantization on performance at high rates

    We study the effect of quantization on the performance of a scalar dynamical system in the high-rate regime. We evaluate the LQ cost for two commonly used quantizers, uniform and logarithmic, and provide an entropy-based lower bound on the performance of any centroid-based quantizer. We also consider the case where the channel drops data packets stochastically.
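    For intuition about the two quantizers compared above, the following toy sketch (assumed parameters, not the paper's control-theoretic setup) implements a mid-rise uniform quantizer and a mu-law style logarithmic quantizer and compares their mean squared error on a peaky source at the same rate.

```python
import numpy as np

def uniform_quantize(x, step):
    """Mid-rise uniform quantizer with cell width `step`."""
    return step * (np.floor(x / step) + 0.5)

def log_quantize(x, step, mu=255.0):
    """mu-law compress, quantize uniformly, then expand."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    yq = uniform_quantize(y, step)
    return np.sign(yq) * np.expm1(np.abs(yq) * np.log1p(mu)) / mu

rng = np.random.default_rng(1)
x = rng.laplace(scale=0.1, size=100_000)  # peaky source favours log quantizer
step = 2.0 / 2**6                         # ~6 bits over the range [-1, 1]
print("uniform MSE:", np.mean((x - uniform_quantize(x, step))**2))
print("log     MSE:", np.mean((x - log_quantize(x, step))**2))
```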

    Multiple Description Coding of Discrete Ergodic Sources

    We investigate the problem of Multiple Description (MD) coding of discrete ergodic processes. We introduce the notion of MD stationary coding and characterize its relationship to the conventional block MD coding. In stationary coding, in addition to the two rate constraints normally considered in the MD problem, we consider another rate constraint which reflects the conditional entropy of the process generated by the third decoder given the reconstructions of the two other decoders. The relationship that we establish between stationary and block MD coding enables us to devise a universal algorithm for MD coding of discrete ergodic sources, based on simulated annealing ideas that were recently proven useful for the standard rate distortion problem. Comment: 6 pages, 3 figures, presented at 2009 Allerton Conference on Communication, Control and Computing
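    The annealing idea is easiest to see on the standard rate-distortion problem the abstract refers to: minimize an empirical-entropy-plus-distortion cost over reconstruction sequences with Metropolis moves. The sketch below is a hypothetical toy version; the first-order entropy cost, cooling schedule, and parameters are assumptions, not the paper's algorithm.

```python
import numpy as np

def empirical_entropy(y):
    """First-order empirical entropy of y in bits per symbol."""
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def anneal(x, slope=2.0, steps=20_000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    y = x.copy()                                  # start at zero distortion
    cost = empirical_entropy(y) + slope * np.mean(x != y)
    for t in range(steps):
        temp = t0 / (1 + t)                       # cooling schedule
        i = rng.integers(len(y))
        y2 = y.copy()
        y2[i] ^= 1                                # flip one binary symbol
        c2 = empirical_entropy(y2) + slope * np.mean(x != y2)
        if c2 < cost or rng.random() < np.exp((cost - c2) / temp):
            y, cost = y2, c2                      # Metropolis acceptance
    return y

x = (np.random.default_rng(2).random(200) < 0.5).astype(int)
y = anneal(x)
print("distortion:", np.mean(x != y), "rate proxy:", empirical_entropy(y))
```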

    On privacy amplification, lossy compression, and their duality to channel coding

    We examine the task of privacy amplification from information-theoretic and coding-theoretic points of view. In the former, we give a one-shot characterization of the optimal rate of privacy amplification against classical adversaries in terms of the optimal type-II error in asymmetric hypothesis testing. This formulation can be easily computed to give finite-blocklength bounds and turns out to be equivalent to smooth min-entropy bounds by Renner and Wolf [Asiacrypt 2005] and Watanabe and Hayashi [ISIT 2013], as well as a bound in terms of the $E_\gamma$ divergence by Yang, Schaefer, and Poor [arXiv:1706.03866 [cs.IT]]. In the latter, we show that protocols for privacy amplification based on linear codes can be easily repurposed for channel simulation. Combined with known relations between channel simulation and lossy source coding, this implies that privacy amplification can be understood as a basic primitive for both channel simulation and lossy compression. Applied to symmetric channels or lossy compression settings, our construction leads to protocols of optimal rate in the asymptotic i.i.d. limit. Finally, appealing to the notion of channel duality recently detailed by us in [IEEE Trans. Info. Theory 64, 577 (2018)], we show that linear error-correcting codes for symmetric channels with quantum output can be transformed into linear lossy source coding schemes for classical variables arising from the dual channel. This explains a "curious duality" in these problems for the (self-dual) erasure channel observed by Martinian and Yedidia [Allerton 2003; arXiv:cs/0408008] and partly anticipates recent results on optimal lossy compression by polar and low-density generator matrix codes. Comment: v3: updated to include equivalence of the converse bound with smooth entropy formulations. v2: updated to include comparison with the one-shot bounds of arXiv:1706.03866. v1: 11 pages, 4 figures
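    A minimal sketch of the linear-code primitive involved: privacy amplification by a linear hash, i.e., multiplying the raw bit string by a public random binary matrix over GF(2), a standard 2-universal hash family used with the leftover hash lemma. Dimensions below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                          # raw length, extracted key length
M = rng.integers(0, 2, size=(m, n))    # public random seed: the linear hash
x = rng.integers(0, 2, size=n)         # partially secret raw string
key = M.dot(x) % 2                     # privacy-amplified key over GF(2)
print(key)
```

    The leftover hash lemma guarantees the key is nearly uniform given the adversary's side information as long as m is sufficiently below the (smooth) min-entropy of x.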

    Culture and generalized inattentional blindness

    A recent mathematical treatment of Baars' Global Workspace consciousness model, much in the spirit of Dretske's communication-theory analysis of high-level mental function, is used to study the effects of embedding cultural heritage on a generalized form of inattentional blindness. Culture should express itself quite distinctly in this basic psychophysical phenomenon, acting across a variety of sensory and other modalities, because the limited syntactic and grammatical 'bandpass' of the topological rate distortion manifold characterizing conscious attention is itself strongly sculpted by the constraints of cultural context.

    Lossy Source Coding via Spatially Coupled LDGM Ensembles

    We study a new encoding scheme for lossy source compression based on spatially coupled low-density generator-matrix (LDGM) codes. We develop a belief-propagation guided-decimation algorithm and show that it approaches the optimal distortion of spatially coupled ensembles. Moreover, using the survey propagation formalism, we observe that the optimal distortions of the spatially coupled and individual code ensembles are the same. Since regular low-density generator-matrix codes are known to achieve the Shannon rate-distortion bound under optimal encoding as the degrees grow, our results suggest that spatial coupling can be used to reach the rate-distortion bound under a low-complexity belief-propagation guided-decimation algorithm. This problem is analogous to the MAX-XORSAT problem in computer science. Comment: Submitted to ISIT 2012
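    To make the ensemble concrete, the toy sketch below builds a spatially coupled LDGM generator matrix: code bits at chain position z draw their edges only from information blocks inside a coupling window of width w, truncated at the chain boundary (the "seed" that drives the decoding wave in coupled ensembles). All sizes and degrees are illustrative assumptions, not the ensemble parameters of the paper.

```python
import numpy as np

def coupled_ldgm_matrix(L=8, block=32, rate=0.5, w=3, degree=3, seed=0):
    """Toy spatially coupled LDGM generator matrix over GF(2)."""
    rng = np.random.default_rng(seed)
    info_block = int(rate * block)
    G = np.zeros((L * block, L * info_block), dtype=np.uint8)
    for pos in range(L):
        for r in range(block):
            row = pos * block + r
            for _ in range(degree):
                z = rng.integers(max(0, pos - w + 1), pos + 1)  # window
                col = z * info_block + rng.integers(info_block)
                G[row, col] ^= 1
    return G

G = coupled_ldgm_matrix()
u = np.random.default_rng(1).integers(0, 2, size=G.shape[1])
y = G.dot(u) % 2     # reconstruction word generated from the info bits
print(G.shape, y[:8])
```

    In the encoding scheme described above, one would search for the information word u whose generated word G u best matches the source sequence; BP-guided decimation does this by iteratively fixing the most biased bit of u and rerunning message passing.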