    Quantum Reverse Shannon Theorem

    Dual to the usual noisy channel coding problem, where a noisy (classical or quantum) channel is used to simulate a noiseless one, reverse Shannon theorems concern the use of noiseless channels to simulate noisy ones, and more generally the use of one noisy channel to simulate another. For channels of nonzero capacity, this simulation is always possible, but for it to be efficient, auxiliary resources of the proper kind and amount are generally required. In the classical case, shared randomness between sender and receiver is a sufficient auxiliary resource, regardless of the nature of the source, but in the quantum case the requisite auxiliary resources for efficient simulation depend on both the channel being simulated, and the source from which the channel inputs are coming. For tensor power sources (the quantum generalization of classical IID sources), entanglement in the form of standard ebits (maximally entangled pairs of qubits) is sufficient, but for general sources, which may be arbitrarily correlated or entangled across channel inputs, additional resources, such as entanglement-embezzling states or backward communication, are generally needed. Combining existing and new results, we establish the amounts of communication and auxiliary resources needed in both the classical and quantum cases, the tradeoffs among them, and the loss of simulation efficiency when auxiliary resources are absent or insufficient. In particular we find a new single-letter expression for the excess forward communication cost of coherent feedback simulations of quantum channels (i.e. simulations in which the sender retains what would escape into the environment in an ordinary simulation), on non-tensor-power sources in the presence of unlimited ebits but no other auxiliary resource. Our results on tensor power sources establish a strong converse to the entanglement-assisted capacity theorem.
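
    For orientation, the entanglement-assisted capacity to which the strong converse applies is given by the Bennett-Shor-Smolin-Thapliyal formula. A minimal sketch in LaTeX, using notation ($\mathcal{N}$ for the channel, $\phi_\rho$ for a purification of the input state $\rho$, $S$ for the von Neumann entropy) that is ours rather than the paper's:

    % Entanglement-assisted classical capacity (BSST); notation is ours.
    \[
      C_E(\mathcal{N}) = \max_{\rho}\, I(\rho;\mathcal{N}), \qquad
      I(\rho;\mathcal{N}) = S(\rho) + S(\mathcal{N}(\rho))
        - S\big((\mathcal{N}\otimes\mathrm{id})(\phi_\rho)\big).
    \]
    % The reverse Shannon direction: given free shared ebits, roughly
    % C_E(\mathcal{N}) classical bits per use suffice to simulate \mathcal{N}
    % on tensor power sources, which is what makes a strong converse possible.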

    Wiretap and Gelfand-Pinsker Channels Analogy and its Applications

    An analogy framework between wiretap channels (WTCs) and state-dependent point-to-point channels with non-causal encoder channel state information (referred to as Gelfand-Pinsker channels (GPCs)) is proposed. A good sequence of stealth-wiretap codes is shown to induce a good sequence of codes for a corresponding GPC. Consequently, the framework enables exploiting existing results for GPCs to produce converse proofs for their wiretap analogs. The analogy readily extends to multiuser broadcasting scenarios, encompassing broadcast channels (BCs) with deterministic components, degradation ordering between users, and BCs with cooperative receivers. Given a wiretap BC (WTBC) with two receivers and one eavesdropper, an analogous Gelfand-Pinsker BC (GPBC) is constructed by converting the eavesdropper's observation sequence into a state sequence with an appropriate product distribution (induced by the stealth-wiretap code for the WTBC), and non-causally revealing the states to the encoder. The transition matrix of the state-dependent GPBC is extracted from the WTBC's transition law, with the eavesdropper's output playing the role of the channel state. Past capacity results for the semi-deterministic (SD) GPBC and the physically-degraded (PD) GPBC with an informed receiver are leveraged to furnish analogy-based converse proofs for the analogous WTBC setups. This characterizes the secrecy-capacity regions of the SD-WTBC and the PD-WTBC, in which the stronger receiver also observes the eavesdropper's channel output. These derivations exemplify how the wiretap-GP analogy enables translating results on one problem into advances in the study of the other.
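
    The analogy is already visible at the level of the classical single-letter formulas. The following sketch juxtaposes the textbook Gelfand-Pinsker capacity and the Csiszar-Korner wiretap secrecy capacity; these are standard results quoted for illustration, not statements taken from this paper:

    % Gelfand-Pinsker capacity (state S known non-causally at the encoder)
    % vs. wiretap secrecy capacity (eavesdropper output Z); standard results.
    \[
      C_{\mathrm{GP}} = \max_{p(u|s),\, x(u,s)} \big[ I(U;Y) - I(U;S) \big],
      \qquad
      C_{\mathrm{WT}} = \max_{U - X - (Y,Z)} \big[ I(U;Y) - I(U;Z) \big].
    \]
    % The structural match is the point of the analogy: the eavesdropper's
    % output Z plays the role of the channel state S.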

    Learning from compressed observations

    The problem of statistical learning is to construct a predictor of a random variable $Y$ as a function of a related random variable $X$ on the basis of an i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable predictors are drawn from some specified class, and the goal is to approach asymptotically the performance (expected loss) of the best predictor in the class. We consider the setting in which one has perfect observation of the $X$-part of the sample, while the $Y$-part has to be communicated at some finite bit rate. The encoding of the $Y$-values is allowed to depend on the $X$-values. Under suitable regularity conditions on the admissible predictors, the underlying family of probability distributions and the loss function, we give an information-theoretic characterization of achievable predictor performance in terms of conditional distortion-rate functions. The ideas are illustrated on the example of nonparametric regression in Gaussian noise.
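
    For reference, a sketch of the standard conditional rate-distortion quantity in which such characterizations are stated; the notation is ours, and the paper's regularity conditions are omitted:

    % Conditional rate-distortion function: the least rate needed to describe Y
    % within expected distortion D when X is available at encoder and decoder.
    \[
      R_{Y|X}(D) = \min_{p(\hat{y}\,|\,x,y)\,:\; \mathbb{E}\, d(Y,\hat{Y}) \le D} I(Y;\hat{Y} \mid X).
    \]
    % The distortion-rate form D_{Y|X}(R) referenced in the abstract is the
    % inverse of this function.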

    Critical Behavior in Lossy Source Coding

    The following critical phenomenon was recently discovered. When a memoryless source is compressed using a variable-length fixed-distortion code, the fastest convergence rate of the (pointwise) compression ratio to the optimal $R(D)$ bits/symbol is either $O(\sqrt{n})$ or $O(\log n)$. We show it is always $O(\sqrt{n})$, except for discrete, uniformly distributed sources.
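
    One hedged way to write the dichotomy, with $\ell_n(X_1^n)$ denoting the codelength assigned to $X_1^n$ by an optimal variable-length code at distortion $D$ (our notation; the paper's precise pointwise statement may differ in form):

    % Second-order reading of the claim: the compression ratio \ell_n/n tends
    % to R(D), and the second-order term separates the two regimes.
    \[
      \ell_n(X_1^n) - nR(D) =
      \begin{cases}
        \Theta(\sqrt{n}) & \text{in general,} \\
        \Theta(\log n) & \text{for discrete, uniformly distributed sources.}
      \end{cases}
    \]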