
    Linear complexity universal decoding with exponential error probability decay

    In this manuscript we consider linear-complexity binary linear block encoders and decoders that operate universally with exponential error probability decay. Such schemes may be relevant in wireless settings where probability distributions are not fully characterized due to the dynamic nature of the environment. More specifically, we consider the setting of fixed-length to fixed-length near-lossless data compression of a memoryless binary source of unknown probability distribution, as well as the dual setting of communicating over a binary symmetric channel (BSC) with unknown crossover probability. We introduce a new 'min-max distance' metric, analogous to minimum distance, that addresses the universal binary setting and has the same properties that minimum distance has on BSCs with known crossover probability. The code construction and decoding algorithm are universal extensions of the 'expander codes' framework of Barg and Zemor and have identical complexity and exponential error probability performance.
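
    For context, and as standard background rather than a claim of this paper: for a binary linear code C, the minimum distance and its guaranteed error-correction radius on a BSC are

        \[
          d_{\min}(C) = \min_{c \in C,\ c \neq 0} w_H(c),
          \qquad
          t = \left\lfloor \frac{d_{\min}(C) - 1}{2} \right\rfloor,
        \]

    so a minimum-distance decoder corrects every error pattern of weight at most t. The 'min-max distance' above plays the analogous role when the crossover probability is unknown.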

    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have illustrated polytope projection algorithms whose complexity is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
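
    As an illustration only (this is not the paper's algorithm, which replaces the exhaustive search below with a linear-programming approximation; the helper names are ours), a brute-force minimum-entropy decoder for syndrome-based compression of a binary source can be written as:

        import itertools
        import numpy as np

        def empirical_entropy(x):
            # First-order empirical entropy (in bits) of a binary sequence.
            p1 = float(np.mean(x))
            if p1 == 0.0 or p1 == 1.0:
                return 0.0
            return -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))

        def min_entropy_decode(H, s):
            # Among all x with H x = s (mod 2), return one of minimum empirical
            # entropy. Exponential in the block length; for illustration only.
            m, n = H.shape
            best, best_h = None, np.inf
            for bits in itertools.product([0, 1], repeat=n):
                x = np.array(bits)
                if np.array_equal(H.dot(x) % 2, s):
                    h = empirical_entropy(x)
                    if h < best_h:
                        best, best_h = x, h
            return best

        # Toy usage: compress x to its syndrome; the decoder needs no knowledge
        # of the source's bias.
        H = np.array([[1, 0, 0, 1, 1],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1]])
        x = np.array([0, 1, 0, 0, 0])
        s = H.dot(x) % 2
        print(min_entropy_decode(H, s))   # recovers x here (ties broken by search order)

    The point of the approximation algorithms above is that this combinatorial search can be avoided: the number of distinct types grows only polynomially in n, which makes a polynomial-complexity linear-programming formulation possible.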

    Time-sharing vs. source-splitting in the Slepian-Wolf problem: error exponents analysis

    We discuss two approaches for decoding at arbitrary rates in the Slepian-Wolf problem - time-sharing and source-splitting - both of which rely on constituent vertex decoders. We consider the error exponents of both schemes and conclude that source-splitting is more robust for coding at arbitrary rates, since the error exponent for time-sharing degrades significantly at rates near the vertices. As a by-product of our analysis, we exhibit an interesting connection between minimum mean-squared error estimation and error exponents.
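
    For reference (a standard result, not something derived in this work), the Slepian-Wolf achievable rate region for correlated sources (X, Y) is

        \[
          R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),
        \]

    with vertices (corner points) at (H(X), H(Y|X)) and (H(X|Y), H(Y)); time-sharing and source-splitting are two ways of reaching the non-vertex rate pairs using constituent vertex decoders.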

    Rate-splitting for the deterministic broadcast channel

    We show that the deterministic broadcast channel, in which a single source transmits to M receivers through a deterministic mechanism, may be reduced, via a rate-splitting transformation, to another (2M−1)-receiver deterministic broadcast channel problem for which a successive encoding approach suffices. Analogous to rate-splitting for the multiple access channel and source-splitting for the Slepian-Wolf problem, all achievable rates (including non-vertices) can be attained. This amounts to a significant complexity reduction at the encoder.
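
    For background (a standard characterization recalled here for context, not a claim established in this abstract), the capacity region of an M-receiver deterministic broadcast channel with outputs Y_i = f_i(X) is the closure of the union, over input distributions p(x), of the rate vectors satisfying

        \[
          \sum_{i \in S} R_i \le H(Y_S) \quad \text{for every nonempty } S \subseteq \{1, \dots, M\},
        \]

    where Y_S denotes the collection of outputs indexed by S. The rate-splitting transformation lets a successive, vertex-style encoding attain any point of this region.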

    On some new approaches to practical Slepian-Wolf compression inspired by channel coding

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
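
    For reference, the Ahlswede-Körner region itself (the standard statement, restated here for context) is the set of rate pairs (R_X, R_Y) such that

        \[
          R_X \ge H(X \mid U), \qquad R_Y \ge I(Y; U),
        \]

    for some auxiliary random variable U forming the Markov chain U - Y - X; the contribution described above is a partial characterization of the optimal choice of U.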

    Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression

    This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding, so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
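
    A minimal sketch of the linear-programming relaxation of ML syndrome decoding, written in the style of Feldman-type LP decoding (our own illustration: it assumes a binary symmetric correlation model with known flip probability p, and the function name is ours):

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        def lp_syndrome_decode(H, s, y, p):
            # LP relaxation of ML decoding of x from side information y and
            # syndrome s = H x (mod 2), assuming x and y differ by BSC(p) noise.
            # The constraint count is exponential in the check degree, which is
            # small for low-density (LDPC-style) choices of H.
            m, n = H.shape
            llr = np.log((1 - p) / p)
            c = np.where(np.asarray(y) == 0, llr, -llr)   # per-bit linear costs

            A_ub, b_ub = [], []
            for j in range(m):
                Nj = list(np.flatnonzero(H[j]))
                for r in range(len(Nj) + 1):
                    if (r + s[j]) % 2 == 1:            # subsets violating check j
                        for S in itertools.combinations(Nj, r):
                            row = np.zeros(n)
                            row[list(S)] = 1.0
                            row[[i for i in Nj if i not in S]] = -1.0
                            A_ub.append(row)
                            b_ub.append(len(S) - 1)

            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=[(0.0, 1.0)] * n, method="highs")
            x_hat = res.x
            integral = bool(np.all(np.isclose(x_hat, np.round(x_hat))))
            return np.round(x_hat).astype(int), integral

    The returned flag mirrors the ML-certificate property noted above: if the LP optimum is integral, it is the ML sequence estimate, while fractional optima correspond to the competing fractional vertices of the relaxed polytope.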

    Fabrication and Optical Properties of a Fully Hybrid Epitaxial ZnO-Based Microcavity in the Strong Coupling Regime

    In order to achieve polariton lasing at room temperature, a new fabrication methodology for planar microcavities is proposed: a ZnO-based microcavity in which the active region is epitaxially grown on an AlGaN/AlN/Si substrate and in which two dielectric mirrors are used. This approach allows us to simultaneously obtain a high-quality active layer together with high photonic confinement, as demonstrated through macro- and micro-photoluminescence (µ-PL) and reflectivity experiments. A quality factor of 675 and a maximum of the PL emission at k = 0 are evidenced by µ-PL, revealing efficient polaritonic relaxation even at low excitation power.

    Forgot Your Password: Correlation Dilution

    We consider the problem of diluting common randomness from correlated observations by separated agents. This problem creates a new framework for studying statistical privacy, in which a legitimate party, Alice, has access to a random variable X, whereas an attacker, Bob, has access to a random variable Y that is dependent on X, where (X, Y) is drawn from a joint distribution p_{X,Y}. Alice's goal is to produce a non-trivial function of her available information that is uncorrelated with (has small correlation with) any function that Bob can produce based on his available information. This problem naturally admits a minimax formulation in which Alice plays first and Bob follows. We define the dilution coefficient as the smallest value of correlation achieved by the best strategy available to Alice, and characterize it in terms of the minimum principal inertia component of the joint probability distribution p_{X,Y}. We then explicitly find the optimal function that Alice must choose to achieve this limit. We also establish a connection between differential privacy and the dilution coefficient and show that if Y is ε-differentially private with respect to X, then the dilution coefficient can be upper bounded in terms of ε. Finally, we extend to the setting where Alice and Bob have access to i.i.d. copies of (X_i, Y_i), i = 1, ..., n, and show that the dilution coefficient vanishes exponentially with n. In other words, Alice can achieve better privacy as the number of her observations grows.
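
    As a concrete aside (our own illustration of a standard computation, not code from the paper), the principal inertia components of p_{X,Y} are the squared non-trivial singular values of the normalized joint-distribution matrix, and can be computed as follows:

        import numpy as np

        def principal_inertia_components(P):
            # P[x, y] is the joint pmf p_{X,Y}. The principal inertia components
            # are the squared singular values, other than the trivial one (= 1),
            # of Q[x, y] = P[x, y] / sqrt(pX[x] * pY[y]).
            P = np.asarray(P, dtype=float)
            pX = P.sum(axis=1)
            pY = P.sum(axis=0)
            Q = P / np.sqrt(np.outer(pX, pY))
            sv = np.linalg.svd(Q, compute_uv=False)   # returned in decreasing order
            return sv[1:] ** 2

        # Example: a doubly symmetric binary source with crossover probability 0.1.
        P = np.array([[0.45, 0.05],
                      [0.05, 0.45]])
        print(principal_inertia_components(P))   # [0.64], i.e. (1 - 2*0.1)^2

    The minimum of these values is the quantity in terms of which the dilution coefficient is characterized above.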

    LO-phonon assisted polariton lasing in a ZnO based microcavity

    Polariton relaxation mechanisms are analysed experimentally and theoretically in a ZnO-based polariton laser. A minimum lasing threshold is obtained when the energy difference between the exciton reservoir and the bottom of the lower polariton branch is resonant with the LO-phonon energy. Tuning away from this resonance increases the threshold, and exciton-exciton scattering processes become involved in the polariton relaxation. These observations are qualitatively reproduced by simulations based on the numerical solution of the semi-classical Boltzmann equations.