
    A refined analysis of the Poisson channel in the high-photon-efficiency regime

    We study the discrete-time Poisson channel under the constraint that its average input power (in photons per channel use) must not exceed some constant E. We consider the wideband, high-photon-efficiency extreme, where E approaches zero and the channel's "dark current" approaches zero proportionally with E. Improving on a previously obtained first-order capacity approximation, we derive a refined approximation that includes an exact characterization of the second-order term, as well as an asymptotic characterization of the third-order term with respect to the dark current. We also show that pulse-position modulation is nearly optimal in this regime. (Comment: revised version to appear in the IEEE Transactions on Information Theory.)
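    For context, a minimal sketch of the scaling behind the "first-order capacity approximation" mentioned above (a well-known wideband result, stated here in our paraphrase; the paper's refined second- and third-order terms are not reproduced):

```latex
% First-order capacity scaling of the discrete-time Poisson channel in
% the high-photon-efficiency regime, with E the average number of
% photons per channel use (sketch; not the paper's refined expansion):
C(E) \;=\; E \log\frac{1}{E}\,\bigl(1 + o(1)\bigr), \qquad E \to 0.
```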

    A Formula for the Capacity of the General Gel'fand-Pinsker Channel

    We consider the Gel'fand-Pinsker problem in which the channel and state are general, i.e., possibly non-stationary, non-memoryless, and non-ergodic. Using the information spectrum method and a non-trivial modification of the piggyback coding lemma by Wyner, we prove that the capacity can be expressed as an optimization over the difference of a spectral inf-mutual information rate and a spectral sup-mutual information rate. We consider various specializations, including the case where the channel and state are memoryless but not necessarily stationary. (Comment: accepted to the IEEE Transactions on Communications.)
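    As a hedged sketch of the shape of such a formula (our paraphrase in common information-spectrum notation, which may differ from the paper's):

```latex
% Sketch of the general Gel'fand-Pinsker capacity formula described in
% the abstract (our paraphrase): U is an auxiliary input process, S the
% state process, Y the output process; the underline and overline denote
% the spectral inf- and sup-mutual information rates, respectively.
C \;=\; \sup_{\mathbf{U}} \Bigl[\, \underline{I}(\mathbf{U};\mathbf{Y}) \;-\; \overline{I}(\mathbf{U};\mathbf{S}) \,\Bigr].
```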

    Information Theoretic Secret Key Generation: Structured Codes and Tree Packing

    This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals, complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key, and on algorithms for key construction. Various information theoretic security requirements of increasing stringency (weak, strong, and perfect secrecy), as well as different types of sources (finite-valued and continuous), are studied. Specifically, three different models are investigated.

    First, we consider strong secrecy generation for a discrete multiterminal source model. We discover a connection between secret key capacity and a new source coding concept of "minimum information rate for signal dissemination," which is of independent interest in multiterminal data compression. Our main contribution is to show for this discrete model that structured linear codes suffice to generate a strong secret key of the best rate.

    Second, strong secrecy generation is considered for models with continuous observations, in particular jointly Gaussian signals. In the absence of suitable analogs of source coding notions for the previous discrete model, new techniques are required for a characterization of secret key capacity as well as for the design of algorithms for secret key generation. Our proof of the secret key capacity result, in particular the converse proof, as well as our capacity-achieving algorithms for secret key construction based on structured codes and quantization for a model with two terminals, constitute the two main contributions for this second model.

    Last, we turn our attention to perfect secrecy generation for fixed signal observation lengths as well as for their asymptotic limits. In contrast with the analysis of the previous two models, which relies on probabilistic techniques, perfect secret key generation bears the essence of "zero-error information theory," and accordingly we rely on mathematical techniques of a combinatorial nature. The model under consideration is the "Pairwise Independent Network" (PIN) model, in which every pair of terminals shares a random binary string, with the strings shared by distinct pairs of terminals being mutually independent. This model, motivated by practical aspects of a wireless communication network in which terminals communicate on the same frequency, yields three main contributions. First, the concept of perfect omniscience in data compression leads to a single-letter formula for the perfect secret key capacity of the PIN model; moreover, this capacity is shown to be achieved by linear noninteractive public communication, and coincides with the strong secret key capacity. Second, taking advantage of a multigraph representation of the PIN model, we put forth an efficient algorithm for perfect secret key generation based on a combinatorial concept of maximal packing of Steiner trees of the multigraph (a sketch of this tree-packing idea appears after this abstract). When all the terminals seek to share perfect secrecy, the algorithm is shown to achieve capacity; when only a subset of terminals wishes to share perfect secrecy, the algorithm is shown to achieve at least half of it. Additionally, we obtain nonasymptotic and asymptotic bounds on the size and rate of the best perfect secret key generated by the algorithm. These bounds are of independent interest from a purely graph-theoretic viewpoint, as they constitute new estimates for the maximum size and rate of Steiner tree packing of a given multigraph. Third, a particular configuration of the PIN model arises when a lone "helper" terminal aids all the other "user" terminals in generating perfect secrecy. This model has special features that enable us to obtain necessary and sufficient conditions for Steiner tree packing to achieve perfect secret key capacity.
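    A minimal sketch of the tree-packing idea referenced above, assuming the all-terminals case in which Steiner trees reduce to spanning trees. The function names are illustrative (not from the dissertation), the greedy packing is not guaranteed to be maximum, and the key-extraction step is only described in comments:

```python
# Hedged sketch: greedy spanning-tree packing on a multigraph, the
# combinatorial object behind perfect secret key generation in the
# PIN model when all terminals want the key.

def find_spanning_tree(n, edges):
    """Greedily pick edges forming a spanning tree of vertices 0..n-1.

    `edges` is a list of (u, v) pairs (a multigraph: repeats allowed).
    Returns the list of edge indices used, or None if disconnected.
    """
    parent = list(range(n))

    def root(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for i, (u, v) in enumerate(edges):
        ru, rv = root(u), root(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(i)
            if len(tree) == n - 1:
                return tree
    return None

def pack_spanning_trees(n, edges):
    """Repeatedly remove edge-disjoint spanning trees; return the packing.

    In the PIN model, each tree corresponds (in principle) to one perfect
    key bit: XOR-ing shared bits along tree paths lets every terminal
    agree on a common bit unknown to an eavesdropper.
    """
    edges = list(edges)
    trees = []
    while True:
        tree = find_spanning_tree(n, edges)
        if tree is None:
            return trees
        trees.append([edges[i] for i in tree])
        used = set(tree)
        edges = [e for i, e in enumerate(edges) if i not in used]

# Example: 3 terminals sharing {0-1: 2 bits, 1-2: 2 bits, 0-2: 1 bit}
# -> two edge-disjoint spanning trees, hence two perfect key bits here.
if __name__ == "__main__":
    multigraph = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2)]
    packing = pack_spanning_trees(3, multigraph)
    print(f"{len(packing)} edge-disjoint spanning trees: {packing}")
```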

    Improved Finite Blocklength Converses for Slepian-Wolf Coding via Linear Programming

    A new finite blocklength converse for the Slepian-Wolf coding problem is presented which significantly improves on the best known converse for this problem, due to Miyake and Kanaya [2]. To obtain this converse, an extension of the linear programming (LP) based framework for finite blocklength point-to-point coding problems from [3] is employed. However, a direct application of this framework demands a complicated analysis for the Slepian-Wolf problem. An analytically simpler approach is presented, wherein LP-based finite blocklength converses for this problem are synthesized from point-to-point lossless source coding problems with perfect side-information at the decoder. New finite blocklength metaconverses for these point-to-point problems are derived by employing the LP-based framework, and the new converse for Slepian-Wolf coding is obtained by an appropriate combination of these converses. (Comment: under review with the IEEE Transactions on Information Theory.)
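    For reference (standard background rather than a result of the paper), the asymptotic Slepian-Wolf region that such finite-blocklength converses refine:

```latex
% Classical (asymptotic) Slepian-Wolf rate region for separate encoding
% of correlated memoryless sources X and Y with joint decoding:
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y).
```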

    Data Compression with Low Distortion and Finite Blocklength

    This paper considers lossy source coding of n-dimensional memoryless sources and shows an explicit approximation to the minimum source coding rate required to sustain a probability of exceeding distortion d no greater than ε, which is simpler than known dispersion-based approximations. Our approach takes inspiration from the celebrated classical result stating that the Shannon lower bound to the rate-distortion function becomes tight in the limit d → 0. We formulate an abstract version of the Shannon lower bound that recovers both the classical Shannon lower bound and the rate-distortion function itself as special cases. Likewise, we show that a nonasymptotic version of the abstract Shannon lower bound recovers all previously known nonasymptotic converses. A necessary and sufficient condition for the Shannon lower bound to be attained exactly is presented. It is demonstrated that whenever that condition is met, the rate-dispersion function is given simply by the varentropy of the source. Remarkably, all finite-alphabet sources with balanced distortion measures satisfy that condition in the range of low distortions. Most continuous sources violate that condition. Still, we show that lattice quantizers closely approach the nonasymptotic Shannon lower bound, provided that the source density is smooth enough and the distortion is low. This implies that fine multidimensional lattice coverings are nearly optimal in the rate-distortion sense even at finite n. The achievability proof technique is based on a new bound on the output entropy of lattice quantizers in terms of the differential entropy of the source, the lattice cell size, and a smoothness parameter of the source density. The technique avoids both the usual random coding argument and the simplifying assumption of the presence of a dither signal.
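    For background (the classical statement, not the paper's abstract version): for a memoryless source with a difference distortion measure, the Shannon lower bound reads

```latex
% Classical Shannon lower bound for a source X with density and
% difference distortion d(x, y) = \rho(x - y) at distortion level D:
R(D) \;\ge\; h(X) \;-\; \max_{Z:\; \mathbb{E}[\rho(Z)] \le D} h(Z).
% For mean-square error this specializes to
R(D) \;\ge\; h(X) \;-\; \tfrac{1}{2}\log\bigl(2\pi e D\bigr).
```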

    Data compression with low distortion and finite blocklength

    This paper considers lossy source coding of n-dimensional continuous memoryless sources with low mean-square error distortion and shows a simple, explicit approximation to the minimum source coding rate. More precisely, a nonasymptotic version of Shannon's lower bound is presented. Lattice quantizers are shown to approach that lower bound, provided that the source density is smooth enough and the distortion is low, which implies that fine multidimensional lattice coverings are nearly optimal in the rate-distortion sense even at finite n. The achievability proof technique avoids both the usual random coding argument and the simplifying assumption of the presence of a dither signal.
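    A small numerical sketch of this claim, assuming a unit-variance Gaussian source and the simplest (scalar, cubic-lattice) quantizer; the step size, sample count, and empirical entropy estimate are illustrative choices, not the paper's method:

```python
# Hedged sketch: empirically compare a scaled integer-lattice quantizer
# against the mean-square-error Shannon lower bound for a Gaussian
# source at low distortion. Illustrates the claim in spirit only.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
step = 0.05                      # lattice cell width; small => low distortion

x = rng.standard_normal(n_samples)     # unit-variance Gaussian source
q = step * np.round(x / step)          # nearest-point Z-lattice quantizer

mse = np.mean((x - q) ** 2)            # distortion D (about step**2 / 12)

# Empirical rate: entropy of the quantizer output, in nats per sample.
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
rate = -np.sum(p * np.log(p))

# Shannon lower bound for MSE: R(D) >= h(X) - 0.5*log(2*pi*e*D),
# with h(X) = 0.5*log(2*pi*e) for the unit-variance Gaussian.
slb = 0.5 * np.log(2 * np.pi * np.e) - 0.5 * np.log(2 * np.pi * np.e * mse)

print(f"D = {mse:.6f}, empirical rate = {rate:.3f} nats, SLB = {slb:.3f} nats")
# At low distortion the scalar-lattice gap approaches
# 0.5*ln(2*pi*e/12) ~ 0.176 nats; finer multidimensional lattices
# shrink it further, which is the sense in which they are near-optimal.
```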

    Online codes for analog signals

    This paper revisits a classical scenario in communication theory: a waveform sampled at regular intervals is to be encoded so as to minimize distortion in its reconstruction, despite noise. This transformation must be online (causal), to enable real-time signaling, and should use no more power than the original signal. The noise model we consider is an "atomic norm" convex relaxation of the standard (discrete alphabet) Hamming-weight-bounded model: namely, adversarial ℓ1-bounded noise. In the "block coding" (noncausal) setting, such encoding is possible due to the existence of large almost-Euclidean sections in ℓ1 spaces, a notion first studied in the work of Dvoretzky in 1961. Our main result is that an analogous result is achievable even causally. Equivalently, our work may be seen as a "lower triangular" version of ℓ1 Dvoretzky theorems. In terms of communication, the guarantees are expressed in terms of certain time-weighted norms: the time-weighted ℓ2 norm imposed on the decoder forces increasingly accurate reconstruction of the distant-past signal, while the time-weighted ℓ1 norm on the noise ensures vanishing interference from distant-past noise. Encoding is linear (hence easy to implement in analog hardware). Decoding is performed by an LP analogous to those used in compressed sensing.
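    As a hedged illustration of the noncausal ("block coding") baseline the abstract starts from: linear encoding with ℓ1-minimization (LP) decoding against sparse adversarial noise. The paper's causal, lower-triangular construction is not reproduced here, and all matrix shapes and parameters are illustrative:

```python
# Hedged sketch of the classical block version: linear encoding plus
# L1-minimization (LP) decoding, in the spirit of LP decoding for
# compressed sensing / error correction. Not the paper's causal scheme.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
k, m = 20, 100                                  # message and codeword lengths
A = rng.standard_normal((m, k)) / np.sqrt(m)    # linear encoder

x = rng.standard_normal(k)                      # message ("sampled waveform")
e = np.zeros(m)                                 # sparse adversarial noise
e[rng.choice(m, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x + e                                   # received signal

# Decode: z_hat = argmin_z ||y - A z||_1, as an LP with slacks t >= |y - Az|:
#   minimize 1^T t   subject to   -t <= y - A z <= t
c = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * k + [(0, None)] * m   # z free, t nonnegative
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

z_hat = res.x[:k]
print("recovery error:", np.linalg.norm(z_hat - x))   # near 0 for sparse noise
```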

    Quantum soft-covering lemma with applications to rate-distortion coding, resolvability and identification via quantum channels

    We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists in finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion [Atif, Sohail, Pradhan, arXiv:2302.00625]. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem and provide one-shot as well as asymptotic upper and lower bounds. Second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author. (Comment: 29 pages, 3 figures; v2 fixes an error in Definition 6.1 and various typos and minor issues throughout.)
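    For reference (a standard definition, not specific to this paper), the coherent information that governs the i.i.d. asymptotics mentioned above:

```latex
% Coherent information of a quantum channel N at input state rho
% (standard definition): phi_rho is a purification of rho and S(.)
% denotes the von Neumann entropy.
I_c(\rho, \mathcal{N}) \;=\; S\bigl(\mathcal{N}(\rho)\bigr) \;-\; S\bigl((\mathrm{id}\otimes\mathcal{N})(\phi_\rho)\bigr).
```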