
    Tight Bounds on the Rényi Entropy via Majorization with Applications to Guessing and Compression

    This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with finitely many possible values, where the function considered is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with finite support is derived as a function of the support size and the ratio of the maximal to minimal probability masses. This work was inspired by a recently published paper by Cicalese et al., which focuses on the Shannon entropy; it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and for the lossless data compression of discrete memoryless sources.
    Comment: Published in Entropy (special issue on Probabilistic Methods in Information Theory, Hypothesis Testing, and Coding), vol. 20, no. 12, paper no. 896, November 22, 2018. Available online at https://www.mdpi.com/1099-4300/20/12/89
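    As a minimal illustrative sketch (not the paper's actual bounds), the Rényi entropy of order α is H_α(p) = log(Σ p_i^α)/(1−α), and the majorization fact underlying such results can be checked numerically: applying a non-one-to-one function to a random variable merges probability masses and can only decrease the Rényi entropy, for any α.

    ```python
    import math

    def renyi_entropy(p, alpha):
        """Rényi entropy H_alpha(p) in nats, for alpha > 0, alpha != 1."""
        assert alpha > 0 and alpha != 1
        return math.log(sum(x ** alpha for x in p if x > 0)) / (1 - alpha)

    # Distribution of X, and of f(X) for a non-injective f that merges
    # the last two outcomes (their masses add up).
    p_x  = [0.4, 0.3, 0.2, 0.1]
    p_fx = [0.4, 0.3, 0.3]       # f merges the outcomes with masses 0.2 and 0.1

    for alpha in (0.5, 2.0, 10.0):
        hx, hfx = renyi_entropy(p_x, alpha), renyi_entropy(p_fx, alpha)
        assert hfx <= hx  # a deterministic map cannot increase Rényi entropy
        print(f"alpha={alpha}: H(X)={hx:.4f}, H(f(X))={hfx:.4f}")
    ```

    The inequality holds because the merged distribution majorizes the original one (after zero-padding) and Rényi entropies are Schur-concave.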

    Joint Unitary Triangularization for MIMO Networks

    This work considers communication networks in which individual links can be described as MIMO channels. Unlike orthogonal modulation methods (such as the singular-value decomposition), we allow interference between sub-channels, which the receivers can remove via successive cancellation. The degrees of freedom earned by this relaxation are used to obtain a basis that is simultaneously good for more than one link. Specifically, we derive necessary and sufficient conditions for shaping the ratio vector of sub-channel gains of two broadcast-channel receivers. We then apply this to two scenarios. First, for digital multicasting we present a practical capacity-achieving scheme that uses only scalar codes and linear processing. Then, we consider the joint source-channel problem of transmitting a Gaussian source over a two-user MIMO channel, where we show the existence of non-trivial cases in which the optimal distortion pair (which for high signal-to-noise ratios equals the optimal point-to-point distortions of the individual users) may be achieved by employing a hybrid digital-analog scheme over the induced equivalent channel. These scenarios demonstrate the advantage of choosing a modulation basis based upon multiple links in the network; we therefore coin the approach "network modulation".
    Comment: Submitted to IEEE Trans. Signal Processing. Revised version
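    A toy sketch of the contrast the abstract draws (this is standard linear algebra, not the paper's joint triangularization scheme): the SVD fully orthogonalizes a MIMO channel into interference-free sub-channels, while a QR decomposition only triangularizes it, leaving interference that a receiver can remove by decoding streams from the last upward and subtracting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.standard_normal((3, 3))  # a toy 3x3 MIMO channel matrix

    # SVD orthogonalizes: U^T H V is diagonal (no inter-stream interference).
    U, s, Vt = np.linalg.svd(H)
    assert np.allclose(U.T @ H @ Vt.T, np.diag(s))

    # QR only triangularizes: Q^T H is upper triangular, so stream k sees
    # interference from streams k+1..n only, which the receiver removes by
    # decoding from the last stream upward and cancelling (subtracting) it.
    Q, R = np.linalg.qr(H)
    T = Q.T @ H
    assert np.allclose(T, R)
    assert np.allclose(np.tril(T, -1), 0.0)  # strictly lower part vanishes
    print("diagonal gains (SVD):", np.round(s, 3))
    print("diagonal gains (QR): ", np.round(np.abs(np.diag(R)), 3))
    ```

    The extra freedom in choosing which triangular shape to induce (rather than forcing a diagonal) is what the paper exploits to serve two receivers with one transmit basis.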

    Infinite anti-uniform sources

    In this paper we consider the class of anti-uniform Huffman (AUH) codes for sources with an infinite alphabet. Poisson, negative binomial, geometric, and exponential distributions lead to infinite anti-uniform sources for some ranges of their parameters. Huffman coding of these sources results in AUH codes. We prove that, as a result of this encoding, we obtain sources with memory. For these sources we attach the corresponding graph and derive the transition matrix between states, the state probabilities, and the entropy. If c0 and c1 denote the costs of storing or transmitting the symbols "0" and "1", respectively, we compute the average cost for these AUH codes.
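    A small sketch of the anti-uniform phenomenon on a truncated example (the paper treats the genuinely infinite case): for a rapidly decaying, geometric-like pmf, the Huffman tree degenerates into a chain, producing the maximally skewed length profile 1, 2, ..., n−1, n−1 that characterizes AUH codes.

    ```python
    import heapq

    def huffman_lengths(probs):
        """Codeword lengths of a binary Huffman code for the given weights."""
        lengths = [0] * len(probs)
        # heap entries: (weight, unique tie-breaker, leaf indices underneath)
        heap = [(p, i, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        tie = len(probs)
        while len(heap) > 1:
            p1, _, s1 = heapq.heappop(heap)
            p2, _, s2 = heapq.heappop(heap)
            for i in s1 + s2:
                lengths[i] += 1          # every merge deepens these leaves
            heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
            tie += 1
        return lengths

    # Truncated geometric-like source with ratio 0.4 (fast decay).
    probs = [0.4 ** i for i in range(8)]
    total = sum(probs)
    probs = [p / total for p in probs]

    lengths = huffman_lengths(probs)
    print("lengths:", sorted(lengths))    # anti-uniform: 1,2,...,6,7,7
    print("average length:", sum(p * l for p, l in zip(probs, lengths)))
    ```

    With per-symbol costs c0 and c1, the average cost follows the same pattern once the actual codewords (runs of one symbol terminated by the other, in the chain-shaped tree) are written out.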

    Quasi-probability representations of quantum computing

    If universal quantum computing is Tartarus, the mythical underworld where Titans are tormented, then magic is Charon, the ferryman tasked to get you there. Shifting perspective often reveals simpler solutions to hard problems. In this thesis, we shift our view of quantum computing from the Hilbert space picture to a geometric picture of a discrete phase space on which computational elements can be represented through quasi-probability distributions. In this new picture, we recognize new ways to refine the theory of quantum computing. Magic states play a crucial role in upgrading fault-tolerant computational frameworks beyond classically efficient capabilities and simulation techniques. Theories of magic have so far attempted to quantify this computational element via coarse-grained monotones and to determine how these states may be efficiently transformed into useful forms. Using a quasi-probability representation of quantum states on a discrete phase space, it is known that we can identify useful magic states by the presence of negative probabilities. This thesis utilizes this representation to develop a novel statistical mechanical framework that provides a more fine-grained description of magic state transformations, as well as to develop classical algorithms that simulate quantum circuits containing magic states more efficiently. We show that majorization allows us to quantify disorder in the Wigner representation, leading to entropic bounds on magic distillation rates. These bounds are shown to be more restrictive than previous bounds based on monotones, and can be used to incorporate features of the distillation protocol, such as invariances of CSS protocols, as well as hardware physics, such as temperature dependence and system Hamiltonians.
    We also show that a subset of single-shot Rényi entropies remain well-defined on quasi-probability distributions, are fully meaningful in terms of data processing, and can acquire negative values that signal magic. Moreover, we propose classical sub-routines that reduce the sampling overhead for important classical samplers, with run-times that depend on the negativity present in the Wigner representation. We show that the run-times of our sub-routines scale polynomially in circuit size and gate depth. We also demonstrate numerically that our methods provide improved scaling in the sampling overhead for random circuits with Clifford+T and Haar-random gates, while the performance of our methods compares favorably with prior simulators based on quasi-probability representations as the number of non-Clifford gates increases.
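    A minimal numerical sketch of the negativity idea, using the standard sum-negativity and mana measures from the quasi-probability literature rather than this thesis's own quantities: a Wigner-type quasi-probability vector sums to one but may carry negative entries, and those negative entries are exactly what flags magic.

    ```python
    import math

    def negativity(w):
        """Sum-negativity: total weight of the negative entries of a
        quasi-probability vector w (whose entries sum to 1)."""
        assert abs(sum(w) - 1.0) < 1e-12
        return sum(-x for x in w if x < 0)

    def mana(w):
        """log of the l1-norm of w; zero iff w is a true probability vector."""
        return math.log(sum(abs(x) for x in w))

    stabilizer_like = [0.5, 0.25, 0.25, 0.0]   # non-negative: no magic signal
    magic_like      = [0.6, 0.4, 0.25, -0.25]  # negative entry: magic signal

    assert negativity(stabilizer_like) == 0.0
    assert mana(stabilizer_like) == 0.0        # log(1) for a proper pmf
    assert negativity(magic_like) > 0.0
    print("negativity:", negativity(magic_like), "mana:", mana(magic_like))
    ```

    Simulation costs in negativity-based samplers typically grow with the l1-norm of the quasi-probabilities, which is why reducing this negativity reduces the sampling overhead.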

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model among a large collection of candidates. In signal processing, sparse coding consists of representing data as linear combinations of a few dictionary elements. The corresponding tools have subsequently been widely adopted by several scientific communities, such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
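    A compact sketch of sparse coding in the sense used above, via Orthogonal Matching Pursuit (one standard greedy solver, chosen here for brevity; the monograph covers many others, including l1-based methods): represent a signal as a combination of a few columns ("atoms") of a dictionary.

    ```python
    import numpy as np

    def omp(D, x, k):
        """Orthogonal Matching Pursuit: greedily select k atoms (unit-norm
        columns of D) and least-squares fit x on the selected atoms."""
        residual, support = x.copy(), []
        for _ in range(k):
            # atom most correlated with the current residual
            j = int(np.argmax(np.abs(D.T @ residual)))
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coef
        code = np.zeros(D.shape[1])
        code[support] = coef
        return code

    rng = np.random.default_rng(1)
    D = rng.standard_normal((20, 50))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    true_code = np.zeros(50)
    true_code[[3, 17]] = [2.0, -1.5]
    x = D @ true_code                       # a 2-sparse synthetic signal

    code = omp(D, x, k=2)                   # typically recovers the support
    print("selected atoms:", np.nonzero(code)[0])
    print("relative residual:", np.linalg.norm(x - D @ code) / np.linalg.norm(x))
    ```

    In the dictionary-learning setting the monograph focuses on, D itself is also optimized over a training set, alternating between sparse coding steps like this one and dictionary updates.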