
    Entanglement-assisted zero-error source-channel coding

    We study the use of quantum entanglement in the zero-error source-channel coding problem. Here, Alice and Bob are connected by a noisy classical one-way channel and are given correlated inputs from a random source. Their goal is for Bob to learn Alice's input while using the channel as little as possible. In the zero-error regime, the optimal rates of source codes and channel codes are given by graph parameters known as the Witsenhausen rate and the Shannon capacity, respectively. The Lovász theta number, a graph parameter defined by a semidefinite program, gives the best efficiently computable upper bound on the Shannon capacity, and it also upper bounds its entanglement-assisted counterpart. At the same time, it was recently shown that the Shannon capacity can be increased if Alice and Bob may use entanglement. Here we partially extend these results to the source-coding problem and to the more general source-channel coding problem. We prove a lower bound on the rate of entanglement-assisted source codes in terms of Szegedy's number (a strengthening of the theta number). This result implies that the theta number lower bounds the entangled variant of the Witsenhausen rate. We also show that entanglement can allow for an unbounded improvement of the asymptotic rate of both classical source codes and classical source-channel codes. Our separation results use low-degree polynomials due to Barrington, Beigel and Rudich, Hadamard matrices due to Xia and Liu, and a new application of remote state preparation.
    Comment: Title has been changed. Previous title was 'Zero-error source-channel coding with entanglement'. Corrected an error in Lemma 1.
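    The Lovász theta number mentioned in this abstract can be evaluated in practice by solving its defining semidefinite program. Below is a minimal sketch (not taken from the paper), assuming Python with cvxpy and its bundled SDP solver; it computes theta for the 5-cycle, whose known value is sqrt(5) ≈ 2.236.

        import cvxpy as cp
        import numpy as np

        # Lovasz theta via the standard SDP:
        #   maximize  sum of all entries of X  (= Tr(J X))
        #   subject to Tr(X) = 1, X_ij = 0 for every edge {i, j}, X positive semidefinite.
        n = 5
        edges = [(i, (i + 1) % n) for i in range(n)]  # the 5-cycle C_5

        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.trace(X) == 1]
        constraints += [X[i, j] == 0 for (i, j) in edges]

        problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
        problem.solve()
        print("theta(C_5) =", round(problem.value, 3), " sqrt(5) =", round(np.sqrt(5), 3))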

    Information Spectrum Approach to the Source Channel Separation Theorem

    A source-channel separation theorem for a general channel has recently been shown by Aggrawal et al. This theorem states that if there exists a coding scheme that achieves a maximum distortion level d_{max} over a general channel W, then reliable communication can be accomplished over this channel at rates less than R(d_{max}), where R(.) is the rate-distortion function of the source. The source, however, is essentially constrained to be discrete and memoryless (DMS). In this work we prove a stronger claim where the source is general, satisfying only a "sphere packing optimality" feature, and the channel is completely general. Furthermore, we show that if the channel satisfies the strong converse property as defined by Han & Verdú, then the same statement can be made with d_{avg}, the average distortion level, replacing d_{max}. Unlike the proofs there, we use information spectrum methods to prove the statements, and the results can quite easily be extended to other situations.
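    As a concrete textbook instance of the quantities above (not the general source and channel setting treated in the paper): for a Bernoulli(p) source under Hamming distortion, R(d) = h(p) - h(d), and for a binary symmetric channel with crossover eps the capacity is C = 1 - h(eps), so the separation condition reads R(d_max) <= C. A minimal sketch in Python, with all parameter values chosen purely for illustration:

        import numpy as np

        def h2(x):
            # Binary entropy in bits; h2(0) = h2(1) = 0 by convention.
            if x <= 0.0 or x >= 1.0:
                return 0.0
            return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

        def rate_distortion_bernoulli(p, d):
            # R(d) = h(p) - h(d) for a Bernoulli(p) source with Hamming distortion,
            # valid for 0 <= d <= min(p, 1 - p); clipped at zero.
            return max(h2(p) - h2(d), 0.0)

        p, d_max, eps = 0.5, 0.1, 0.05           # source bias, target distortion, channel error rate
        R = rate_distortion_bernoulli(p, d_max)  # bits per source symbol
        C = 1.0 - h2(eps)                        # capacity of BSC(eps), bits per channel use
        print(f"R(d_max) = {R:.3f}, C = {C:.3f}, separation condition R(d_max) <= C: {R <= C}")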

    The infinite rate symbiotic branching model: from discrete to continuous space

    The symbiotic branching model describes a spatial population consisting of two types that are allowed to migrate in space and branch locally only if both types are present. We continue our investigation of the large-scale behaviour of the system started in Blath, Hammer and Ortgiese (2016), where we showed that the continuum system converges after diffusive rescaling. Inspired by a scaling property of the continuum model, a series of earlier works initiated by Klenke and Mytnik (2010, 2012) studied the model on a discrete space, but with infinite branching rate. In this paper, we bridge the gap between the two models by showing that by diffusively rescaling this discrete-space infinite-rate model, we obtain the continuum model from Blath, Hammer and Ortgiese (2016). As an application of this convergence result, we show that if we start the infinite-rate system from complementary Heaviside initial conditions, the initial ordering of types is preserved in the limit and the interface between the types consists of a single point.
    Comment: 36 pages, 1 figure
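    For readers unfamiliar with the model, the following is a heavily simplified Euler-Maruyama sketch of the finite-rate symbiotic branching dynamics on a discrete circle: migration via a discrete Laplacian, branching noise proportional to sqrt(gamma * u * v) (so branching only happens where both types are present), and complementary Heaviside initial conditions. It is not the infinite-rate or rescaled construction studied in the paper, the two driving noises are taken independent as a simplification, and all parameter values are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        L, gamma, dt, steps = 200, 1.0, 1e-3, 5000

        # Complementary Heaviside initial conditions: type u on the left, type v on the right.
        u = np.where(np.arange(L) < L // 2, 1.0, 0.0)
        v = 1.0 - u

        def laplacian(w):
            # Discrete Laplacian on a circle (periodic boundary).
            return np.roll(w, 1) - 2.0 * w + np.roll(w, -1)

        for _ in range(steps):
            noise = np.sqrt(np.maximum(gamma * u * v, 0.0) * dt)
            u_new = np.maximum(u + laplacian(u) * dt + noise * rng.standard_normal(L), 0.0)
            v_new = np.maximum(v + laplacian(v) * dt + noise * rng.standard_normal(L), 0.0)
            u, v = u_new, v_new          # populations stay non-negative

        coexist = np.flatnonzero(u * v > 0)  # sites where both types are present
        if coexist.size:
            print("coexistence region: sites", coexist.min(), "to", coexist.max())
        else:
            print("no coexistence region")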

    Confidence sets in sparse regression

    The problem of constructing confidence sets in the high-dimensional linear model with $n$ response variables and $p$ parameters, possibly $p \ge n$, is considered. Full honest adaptive inference is possible if the rate of sparse estimation does not exceed $n^{-1/4}$; otherwise sparse adaptive confidence sets exist only over strict subsets of the parameter spaces for which sparse estimators exist. Necessary and sufficient conditions for the existence of confidence sets that adapt to a fixed sparsity level of the parameter vector are given in terms of minimal $\ell^2$-separation conditions on the parameter space. The design conditions cover common coherence assumptions used in models for sparsity, including (possibly correlated) sub-Gaussian designs.
    Comment: Published at http://dx.doi.org/10.1214/13-AOS1170 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
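    To fix the setting (this is not the paper's confidence-set construction), here is a minimal sketch, assuming Python with NumPy and scikit-learn, of the high-dimensional linear model with $p \ge n$, a sub-Gaussian design, and an s-sparse coefficient vector; the Lasso's $\ell^2$ error plays the role of the "rate of sparse estimation" discussed above. All numbers are placeholders.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p, s = 100, 500, 5                       # p >= n, s-sparse truth

        X = rng.standard_normal((n, p))             # (sub-)Gaussian design
        beta = np.zeros(p)
        beta[:s] = 1.0                              # s non-zero coefficients
        y = X @ beta + 0.5 * rng.standard_normal(n)

        beta_hat = Lasso(alpha=0.1).fit(X, y).coef_
        print("l2 estimation error:", np.linalg.norm(beta_hat - beta))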