
    Informational Divergence Approximations to Product Distributions

    The minimum rate needed to accurately approximate a product distribution, where accuracy is measured by an unnormalized informational divergence, is shown to be a mutual information. This result subsumes results of Wyner on common information and Han-Verdú on resolvability. The result also extends to cases where the source distribution is unknown but the entropy is known.
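
    For reference, the central quantity can be written out as follows (a sketch in my own notation, which may differ from the paper's): the unnormalized informational divergence between the distribution P_{Y^n} induced by a rate-limited code and the target product distribution is

        \[
          \mathbb{D}\bigl(P_{Y^n} \,\big\Vert\, Q_Y^{n}\bigr)
          \;=\; \sum_{y^n} P_{Y^n}(y^n)\,
            \log \frac{P_{Y^n}(y^n)}{\prod_{i=1}^{n} Q_Y(y_i)},
        \]

    and the stated result is that the smallest code rate for which this quantity can be made to vanish is a mutual information.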

    Fixed-to-Variable Length Distribution Matching

    Fixed-to-variable length (f2v) matchers are used to reversibly transform an input sequence of independent and uniformly distributed bits into an output sequence of bits that are (approximately) independent and distributed according to a target distribution. The degree of approximation is measured by the informational divergence between the output distribution and the target distribution. An algorithm is developed that efficiently finds optimal f2v codes. It is shown that by encoding the input bits blockwise, the informational divergence per bit approaches zero as the block length approaches infinity. A relation to data compression by Tunstall coding is established. Comment: 5 pages, essentially the ISIT 2013 version.
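
    To make the tree-code picture concrete, here is a minimal sketch of a dictionary built by the Tunstall-style rule of repeatedly splitting the most probable leaf (my illustration of the connection mentioned in the abstract, not the paper's optimal algorithm); each of the 2^m equiprobable m-bit input blocks is then mapped to one leaf codeword. The function name, the binary target distribution, and the parameter choices are assumptions for the example.

        import heapq
        import itertools

        def split_most_probable_leaf(target, num_leaves):
            # target: dict of output symbols to target probabilities, e.g. {'0': 0.8, '1': 0.2}.
            # Grow a rooted tree by repeatedly splitting its most probable leaf
            # (Tunstall's rule); the surviving leaves are the output codewords.
            # With a binary target, every leaf count >= 2 is reachable exactly.
            tie = itertools.count()  # tie-breaker so heap entries always compare
            heap = [(-p, next(tie), (sym,)) for sym, p in target.items()]
            heapq.heapify(heap)
            while len(heap) < num_leaves:
                neg_p, _, word = heapq.heappop(heap)   # most probable leaf
                for sym, p in target.items():          # replace it by its children
                    heapq.heappush(heap, (neg_p * p, next(tie), word + (sym,)))
            return [(word, -neg_p) for neg_p, _, word in heap]

        # Hypothetical usage: an f2v matcher for m = 3 input bits maps the 8
        # equiprobable input blocks to these 8 variable-length output codewords.
        codebook = split_most_probable_leaf({'0': 0.8, '1': 0.2}, num_leaves=8)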

    Informational Divergence and Entropy Rate on Rooted Trees with Probabilities

    Rooted trees with probabilities are used to analyze properties of a variable-length code. A bound is derived on the difference between the entropy rates of the code and a memoryless source. The bound is in terms of normalized informational divergence. The bound is used to derive converses for exact random number generation, resolution coding, and distribution matching. Comment: 5 pages, with proofs and illustrating examples.
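
    For concreteness (a sketch in my own notation, following one common convention rather than necessarily the paper's exact definitions): on a rooted tree with probabilities, each leaf y carries the probability P(y) given by the product of the branch probabilities along its path, and with leaf depths \ell(y) the normalized informational divergence against the product distribution Q that a memoryless source assigns to the same leaf labels is

        \[
          \bar{\mathbb{D}}(P \,\Vert\, Q) \;=\; \frac{\mathbb{D}(P \,\Vert\, Q)}{\mathbb{E}[\ell]},
          \qquad
          \mathbb{E}[\ell] \;=\; \sum_{y} P(y)\,\ell(y).
        \]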

    Greedy Algorithms for Optimal Distribution Approximation

    The approximation of a discrete probability distribution $\mathbf{t}$ by an $M$-type distribution $\mathbf{p}$ is considered. The approximation error is measured by the informational divergence $\mathbb{D}(\mathbf{t}\Vert\mathbf{p})$, which is an appropriate measure, e.g., in the context of data compression. Properties of the optimal approximation are derived and bounds on the approximation error are presented, which are asymptotically tight. It is shown that $M$-type approximations that minimize either $\mathbb{D}(\mathbf{t}\Vert\mathbf{p})$, or $\mathbb{D}(\mathbf{p}\Vert\mathbf{t})$, or the variational distance $\Vert\mathbf{p}-\mathbf{t}\Vert_1$ can all be found by using specific instances of the same general greedy algorithm. Comment: 5 pages.
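
    A minimal sketch of the greedy idea for the $\mathbb{D}(\mathbf{t}\Vert\mathbf{p})$ objective, under the assumption that one quantum of probability 1/M is assigned per step to the bin with the largest marginal gain (the function name and the example values are mine; the paper's general algorithm also covers the other two objectives, which this sketch does not reproduce):

        import math

        def greedy_m_type(t, M):
            # t: target probabilities, M: denominator of the M-type approximation.
            # Hand out M quanta of size 1/M one at a time; each quantum goes to the
            # bin whose marginal gain t[i] * (log(c[i]+1) - log c[i]) is largest,
            # which greedily reduces D(t || c/M).  Bins with c[i] = 0 and t[i] > 0
            # have infinite gain, so the support of t is covered first.
            c = [0] * len(t)
            def gain(i):
                if t[i] == 0:
                    return -math.inf
                return math.inf if c[i] == 0 else t[i] * math.log((c[i] + 1) / c[i])
            for _ in range(M):
                c[max(range(len(t)), key=gain)] += 1
            return [ci / M for ci in c]

        # Hypothetical usage: every entry of p is a multiple of 1/8 and p sums to 1.
        p = greedy_m_type([0.55, 0.30, 0.15], M=8)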