
    Convexity and Operational Interpretation of the Quantum Information Bottleneck Function

    In classical information theory, the information bottleneck method (IBM) can be regarded as a method of lossy data compression which focuses on preserving meaningful (or relevant) information. As such it has recently gained a lot of attention, primarily for its applications in machine learning and neural networks. A quantum analogue of the IBM has recently been defined, and Salek et al. have attempted to provide an operational interpretation of the so-called quantum IB function as the optimal rate of an information-theoretic task. However, the interpretation given in that paper has two drawbacks: first, its proof is based on a conjecture that the quantum IB function is convex, and second, the expression for the rate function involves certain entropic quantities which occur explicitly in the very definition of the underlying information-theoretic task, making the latter somewhat contrived. We overcome both of these drawbacks by first proving the convexity of the quantum IB function, and then giving an alternative operational interpretation of it as the optimal rate of a bona fide information-theoretic task, namely quantum source coding with quantum side information at the decoder, and we relate the quantum IB function to the rate region of this task. We similarly show that the related privacy funnel function is convex (in both the classical and the quantum case). However, we comment that it is unlikely that the quantum privacy funnel function can characterize the optimal asymptotic rate of an information-theoretic task, since even its classical version lacks a certain additivity property which turns out to be essential.
    Comment: 17 pages, 7 figures; v2: improved presentation and explanations, one new figure; v3: restructured manuscript. Theorem 2 has been found previously in work by Hsieh and Watanabe; it is now correctly attributed.
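
    For reference, a common statement of the classical IB function that the quantum version generalizes (notation assumed here, not taken from the paper) is the constrained rate minimization

        \[
          R_{\mathrm{IB}}(I_0) \;=\; \min_{P_{T\mid X}\,:\, I(T;Y)\,\ge\, I_0} I(X;T),
          \qquad T - X - Y \ \text{a Markov chain},
        \]

    and convexity of $R_{\mathrm{IB}}$ in $I_0$ is precisely the property one needs for it to emerge as the optimal rate of a coding task.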

    Orthogonal Codes for Robust Low-Cost Communication

    Orthogonal coding schemes, known to asymptotically achieve the capacity per unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost input symbol, are investigated for single-user compound memoryless channels, which exhibit uncertainties in their input-output statistical relationships. A minimax formulation is adopted to attain robustness. First, a class of achievable rates per unit cost (ARPUC) is derived, and its utility is demonstrated through several representative case studies. Second, when the uncertainty set of channel transition statistics satisfies a convexity property, optimization is performed over the class of ARPUC through utilizing results of minimax robustness. The resulting CPUC lower bound indicates the ultimate performance of the orthogonal coding scheme, and coincides with the CPUC under certain restrictive conditions. Finally, still under the convexity property, it is shown that the CPUC can generally be achieved, through utilizing a so-called mixed strategy in which an orthogonal code contains an appropriate composition of different nonzero-cost input symbols.
    Comment: 2nd revision, accepted for publication
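
    As background for the single-user baseline, Verdu's formula expresses the CPUC of a memoryless channel with a free symbol as the supremum of D(P_{Y|x} || P_{Y|0}) / b(x) over nonzero-cost inputs. A minimal numerical sketch of that formula (identifiers hypothetical; the paper's compound setting would add an inner infimum over the uncertainty set):

        import numpy as np

        def cpuc_zero_cost(p_y_given_x, cost, zero_idx=0):
            """Verdu's capacity per unit cost for a finite memoryless channel
            with a free symbol: sup over x of D(P_{Y|x} || P_{Y|zero}) / b(x).
            p_y_given_x: (|X|, |Y|) row-stochastic matrix; cost: length-|X| list."""
            p0 = p_y_given_x[zero_idx]
            best = 0.0
            for x, (py, b) in enumerate(zip(p_y_given_x, cost)):
                if x == zero_idx or b <= 0:
                    continue
                # KL divergence in bits; infinite if py puts mass where p0 has none
                mask = py > 0
                if np.any(p0[mask] == 0):
                    return np.inf
                kl = np.sum(py[mask] * np.log2(py[mask] / p0[mask]))
                best = max(best, kl / b)
            return best

        # toy example: a silent zero-cost symbol and one costly "on" symbol
        P = np.array([[0.99, 0.01],   # zero-cost input
                      [0.10, 0.90]])  # unit-cost input
        print(cpuc_zero_cost(P, cost=[0.0, 1.0]))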

    Error Rates of the Maximum-Likelihood Detector for Arbitrary Constellations: Convex/Concave Behavior and Applications

    Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fading is never good in low dimensions") and optimization of a unitary-precoded OFDM system. For example, the error rate bounds of a unitary-precoded OFDM system with QPSK modulation, which reveal the best and worst precoding, are extended to arbitrary constellations, which may also include coding. The reported results also apply to the interference channel under Gaussian approximation, to the bit error rate when it can be expressed or approximated as a non-negative linear combination of individual symbol error rates, and to coded systems.
    Comment: accepted by IEEE IT Transactions
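
    As a quick numerical illustration of the two-dimensional claim, one can check convexity of the QPSK symbol error rate in SNR by second differences (a sketch; QPSK and the standard AWGN Q-function expression are chosen here for concreteness, not taken from the paper):

        import numpy as np
        from scipy.special import erfc

        def qfunc(x):
            """Gaussian tail probability Q(x)."""
            return 0.5 * erfc(x / np.sqrt(2.0))

        snr = np.linspace(0.01, 20.0, 2000)                       # Es/N0, linear
        ser = 2 * qfunc(np.sqrt(snr)) - qfunc(np.sqrt(snr)) ** 2  # QPSK SER
        second_diff = np.diff(ser, 2)                             # discrete 2nd derivative
        print("SER convex in SNR:", bool(np.all(second_diff >= -1e-12)))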

    Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems

    An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate scalar quantizers and entropy-constrained scalar quantizers. For the other coding scenarios, the algorithm yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm run-time is polynomial in the size of the source alphabet. The algorithm derivation arises from a demonstration of the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph.
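
    A minimal sketch of the conventional fixed-rate case described above (identifiers hypothetical): cell boundaries are the nodes of a DAG, an edge (i, j) carries the centroid distortion of the cell covering symbols i..j-1, and the optimal K-level quantizer is a shortest path with exactly K edges, found by dynamic programming.

        import numpy as np

        def cell_cost(x, p, i, j):
            """Distortion of one cell covering sorted symbols x[i:j]: probability-
            weighted squared error about the cell centroid (its optimal codeword)."""
            w, v = p[i:j], x[i:j]
            m = np.average(v, weights=w)
            return float(np.sum(w * (v - m) ** 2))

        def optimal_fixed_rate_sq(x, p, K):
            """Globally optimal K-level scalar quantizer for a discrete source:
            dynamic program = shortest path with K edges in the boundary DAG."""
            n = len(x)
            D = np.full((K + 1, n + 1), np.inf)   # D[k, j]: best cost, k cells on x[:j]
            prev = np.zeros((K + 1, n + 1), dtype=int)
            D[0, 0] = 0.0
            for k in range(1, K + 1):
                for j in range(k, n + 1):
                    for i in range(k - 1, j):
                        c = D[k - 1, i] + cell_cost(x, p, i, j)
                        if c < D[k, j]:
                            D[k, j], prev[k, j] = c, i
            bounds, j = [n], n                    # backtrack the cell boundaries
            for k in range(K, 0, -1):
                j = prev[k, j]
                bounds.append(j)
            return D[K, n], bounds[::-1]

        x = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])
        p = np.full(6, 1 / 6)
        print(optimal_fixed_rate_sq(x, p, K=2))   # splits at the large gap

    The entropy-constrained variant would add a Lagrangian rate term to each edge weight; the network coding scenarios treated in the paper are more involved.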

    Improved bounds for the rate loss of multiresolution source codes

    We present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an M-resolution code, the rate loss at the ith resolution with distortion $D_i$ is defined as $L_i = R_i - R(D_i)$, where $R_i$ is the rate achievable by the MRSC at stage i. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: (i) when both resolutions are equally important; (ii) when the rate loss at the first resolution is 0 ($L_1 = 0$); (iii) when the rate loss at the second resolution is 0 ($L_2 = 0$). The work of Lastras and Berger (see ibid., vol. 47, p. 918-26, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as $D_2$ approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) $L_1 < 1.1610$ for all $D_2 < 0.7250$; (c) tighten the Lastras-Berger bound for scenario (i) from $L_i \le 1/2$ to $L_i < 0.3802$, $i \in \{1, 2\}$; and (d) generalize the bounds for scenarios (ii) and (iii) to M-resolution codes with $M \ge 2$. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC where each resolution describes an incremental reproduction and the kth-resolution reconstruction equals the sum of the first k incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another which depends on the source entropy.
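
    A worked special case helps calibrate these constants (standard facts, not from the paper): for a unit-variance Gaussian source under squared error,

        \[
          R(D) = \tfrac{1}{2}\log_2\frac{1}{D}, \qquad L_i = R_i - R(D_i),
        \]

    and since the Gaussian source is successively refinable, a two-resolution code can achieve $L_1 = L_2 = 0$ simultaneously; the bounds above measure how much worse an arbitrary memoryless source can fare.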

    Rényi Bounds on Information Combining

    Bounds on information combining are entropic inequalities that determine how the information, or entropy, of a set of random variables can change when they are combined in certain prescribed ways. Such bounds play an important role in information theory, particularly in coding and Shannon theory. The arguably most elementary kind of information combining is the addition of two binary random variables, i.e. a CNOT gate, and the resulting quantities are fundamental when investigating belief propagation and polar coding. In this work we generalize the concept to Rényi entropies. We give optimal bounds on the conditional Rényi entropy after combination, based on a certain convexity or concavity property, and discuss when this property indeed holds. Since there is no generally agreed upon definition of the conditional Rényi entropy, we consider four different versions from the literature. Finally, we discuss the application of these bounds to the polarization of Rényi entropies under polar codes.
    Comment: 14 pages, accepted for presentation at ISIT 202
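
    For orientation, the Shannon-entropy version of these combining bounds is classical: for uniform binary X1, X2 with side information Y1, Y2, the conditional entropy of X1 XOR X2 is minimized by BSCs (Mrs. Gerber's lemma) and maximized by BECs. A small sketch of the two extremes (helper names hypothetical):

        import numpy as np
        from scipy.optimize import brentq

        def h2(p):
            """Binary entropy in bits."""
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def h2_inv(y):
            """Inverse of h2 restricted to [0, 1/2]."""
            return 0.0 if y <= 0.0 else brentq(lambda p: h2(p) - y, 1e-12, 0.5)

        def combine_lower(H1, H2):
            """BSC extreme (Mrs. Gerber): h2 of the binary convolution a*b."""
            a, b = h2_inv(H1), h2_inv(H2)
            return h2(a * (1 - b) + b * (1 - a))

        def combine_upper(H1, H2):
            """BEC extreme: H1 + H2 - H1*H2."""
            return H1 + H2 - H1 * H2

        H1, H2 = 0.4, 0.7
        print(combine_lower(H1, H2), combine_upper(H1, H2))
        # H(X1 xor X2 | Y1, Y2) lies between these two numbers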

    Normalized Entropy Vectors, Network Information Theory and Convex Optimization

    We introduce the notion of normalized entropic vectors -- slightly different from the standard definition in the literature in that we normalize entropy by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, that it smooths out the irregularities of the space of non-normalized entropy vectors and renders the closure of the resulting space convex (and compact). Furthermore, the closure of the space remains convex even under constraints imposed by memoryless channels internal to the network. It therefore follows that, for a large class of acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximization of a linear function over the convex set of channel-constrained normalized entropic vectors and some linear constraints. While this may not necessarily make the problem simpler, it certainly circumvents the "infinite-letter characterization" issue, as well as the nonconvexity of earlier formulations, and exposes the core of the problem. We show that the approach allows one to obtain the classical cutset bounds via a duality argument. Furthermore, the approach readily shows that, for acyclic memoryless wired networks, one need only consider the space of unconstrained normalized entropic vectors, thus separating channel and network coding -- a result very recently recognized in the literature.
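
    To make the object concrete: for n random variables on a common alphabet of size q, the normalized entropic vector collects H(X_S)/log2(q) over all nonempty subsets S. A small sketch (naming assumed):

        import numpy as np
        from itertools import combinations

        def normalized_entropic_vector(joint, q):
            """All 2^n - 1 subset entropies H(X_S)/log2(q) of an n-way joint pmf
            (each variable on an alphabet of size q)."""
            n = joint.ndim
            vec = {}
            for r in range(1, n + 1):
                for S in combinations(range(n), r):
                    axes = tuple(i for i in range(n) if i not in S)
                    marg = joint.sum(axis=axes) if axes else joint
                    p = marg.ravel()
                    p = p[p > 0]
                    vec[S] = float(-(p * np.log2(p)).sum()) / np.log2(q)
            return vec

        # two maximally correlated uniform bits: every entry equals 1
        joint = np.array([[0.5, 0.0], [0.0, 0.5]])
        print(normalized_entropic_vector(joint, q=2))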

    Capacity Analysis for Continuous Alphabet Channels with Side Information, Part I: A General Framework

    Capacity analysis for channels with side information at the receiver has been an active area of interest. This problem is well investigated for the case of finite alphabet channels. However, the results are not easily generalizable to the case of continuous alphabet channels due to analytic difficulties inherent with continuous alphabets. In the first part of this two-part paper, we address an analytical framework for capacity analysis of continuous alphabet channels with side information at the receiver. For this purpose, we establish novel necessary and sufficient conditions for weak* continuity and strict concavity of the mutual information. These conditions are used in investigating the existence and uniqueness of the capacity-achieving measures. Furthermore, we derive necessary and sufficient conditions that characterize the capacity value and the capacity-achieving measure for continuous alphabet channels with side information at the receiver.
    Comment: Submitted to IEEE Trans. Inform. Theory
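
    In common notation (assumed here rather than taken from the paper), the objects at stake are

        \[
          C \;=\; \sup_{\mu \in \mathcal{P}(\mathcal{X})} I(\mu),
          \qquad I(\mu) \;=\; I(X; Y, S) \;=\; I(X; Y \mid S),
        \]

    where the last equality uses independence of the side information S from the input X; weak* continuity of $\mu \mapsto I(\mu)$ on a weak*-compact constraint set yields existence of a capacity-achieving measure, and strict concavity yields its uniqueness.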