
    Useful dual functional of entropic information measures

    There are entropic functionals galore, but no simple objective measures to distinguish between them. We remedy this situation here by appealing to Born's proposal, made almost a hundred years ago, that the square modulus of any wave function, |ψ|², be regarded as a probability distribution P. The usefulness of information measures such as Shannon's in this pure-state context has been highlighted in [Phys. Lett. A 1993, 181, 446]. Here we apply that notion to generate a dual functional F_{αR}: {S_Q} → R⁺, which maps entropic functionals onto positive real numbers. In this endeavor, we use as standard ingredients the coherent states of the harmonic oscillator (CHO), which are unique in the sense of possessing minimum uncertainty. Their use is greatly facilitated by the fact that the CHO can be given an analytic, compact closed form, as shown in [Rev. Mex. Fis. E 2019, 65, 191]. Rewarding insights are obtained from the comparison of several standard entropic measures.
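
    The idea of mapping entropic measures onto single positive numbers via a fixed reference state can be illustrated with a short sketch. The code below is an illustration only, not the paper's functional F_{αR}: it evaluates the Shannon, Tsallis, and Rényi functionals on the position-space density |ψ_α(x)|² of a harmonic-oscillator coherent state (ħ = m = ω = 1), which is a Gaussian of variance 1/2; the values α = 1 + 0.5i and q = 2 are illustrative assumptions.

    ```python
    import numpy as np

    # Sketch: evaluate standard entropic functionals on |psi_alpha(x)|^2, the
    # position-space density of a harmonic-oscillator coherent state
    # (hbar = m = omega = 1).  The density is a Gaussian of variance 1/2 centered
    # at sqrt(2)*Re(alpha), so each functional reduces to one positive number.

    def coherent_density(x, alpha=1.0 + 0.5j):
        """|psi_alpha(x)|^2 for a coherent state of the 1D harmonic oscillator."""
        x0 = np.sqrt(2.0) * np.real(alpha)  # displaced center of the Gaussian
        return np.exp(-(x - x0) ** 2) / np.sqrt(np.pi)

    x = np.linspace(-10.0, 10.0, 20001)
    P = coherent_density(x)
    dx = x[1] - x[0]

    q = 2.0
    shannon = -np.sum(P * np.log(P)) * dx              # S   = -∫ P ln P dx
    tsallis = (1.0 - np.sum(P ** q) * dx) / (q - 1.0)  # S_q = (1 - ∫ P^q dx)/(q - 1)
    renyi = np.log(np.sum(P ** q) * dx) / (1.0 - q)    # R_q = ln(∫ P^q dx)/(1 - q)

    print(f"Shannon entropy      : {shannon:.4f} (exact 0.5*ln(pi*e) = {0.5*np.log(np.pi*np.e):.4f})")
    print(f"Tsallis entropy (q=2): {tsallis:.4f}")
    print(f"Renyi entropy   (q=2): {renyi:.4f}")
    ```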

    A note on Onicescu's informational energy and correlation coefficient in exponential families

    The informational energy of Onicescu is a positive quantity that, like Shannon's entropy, measures the amount of uncertainty of a random variable. In this note, we report closed-form formulas for Onicescu's informational energy and the correlation coefficient when the densities belong to an exponential family. As a byproduct, we also report a closed-form formula for the Cauchy-Schwarz divergence between densities of an exponential family.
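
    As a concrete check of what such a closed form looks like, the sketch below computes Onicescu's informational energy E[p] = ∫ p(x)² dx for a univariate Gaussian, a standard member of an exponential family, and compares the numerical integral with the well-known closed form 1/(2σ√π); this particular Gaussian formula is textbook material rather than a result quoted from the paper.

    ```python
    import numpy as np
    from scipy import integrate, stats

    # Onicescu's informational energy E[p] = ∫ p(x)^2 dx for N(mu, sigma^2),
    # computed numerically and via the standard closed form 1/(2*sigma*sqrt(pi)).
    mu, sigma = 0.3, 1.7
    p = stats.norm(loc=mu, scale=sigma).pdf

    numeric, _ = integrate.quad(lambda t: p(t) ** 2, -np.inf, np.inf)
    closed_form = 1.0 / (2.0 * sigma * np.sqrt(np.pi))

    print(f"numerical  : {numeric:.6f}")
    print(f"closed form: {closed_form:.6f}")
    ```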

    On a generalization of the Jensen-Shannon divergence and the JS-symmetrization of distances relying on abstract means

    The Jensen-Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback-Leibler divergence: it measures the total Kullback-Leibler divergence to the average mixture distribution. However, the Jensen-Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen-Shannon (JS) divergence using abstract means, which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrization of any distance using generalized statistical mixtures derived from abstract means. In particular, we first show that the geometric mean is well suited for exponential families, and report two closed-form formulas: (i) the geometric Jensen-Shannon divergence between probability densities of the same exponential family, and (ii) the geometric JS-symmetrization of the reverse Kullback-Leibler divergence. As a second illustrative example, we show that the harmonic mean is well suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen-Shannon divergence between scale Cauchy distributions. We also define generalized Jensen-Shannon divergences between matrices (e.g., quantum Jensen-Shannon divergences) and consider clustering with respect to these novel Jensen-Shannon divergences.
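
    For a feel of why the geometric mean restores closed-form expressions, the sketch below computes a geometric Jensen-Shannon-type divergence between two univariate Gaussians: the normalized geometric mean of two Gaussian densities is again a Gaussian, so both Kullback-Leibler terms have the usual closed form. The weighting convention and the default α = 1/2 are assumptions made for illustration, not necessarily the paper's exact definition.

    ```python
    import numpy as np

    # Geometric JS-type divergence between univariate Gaussians: replace the
    # arithmetic mixture (p+q)/2 by the normalized geometric mean
    # G(x) ∝ p(x)^(1-alpha) q(x)^alpha, which is again a Gaussian.

    def kl_gauss(m0, v0, m1, v1):
        """KL( N(m0, v0) || N(m1, v1) ); v denotes variance."""
        return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

    def geometric_js_gauss(m1, v1, m2, v2, alpha=0.5):
        """Skewed geometric JS divergence between N(m1, v1) and N(m2, v2)."""
        prec = (1.0 - alpha) / v1 + alpha / v2                 # precision of the geometric mean
        vg = 1.0 / prec                                        # its variance
        mg = vg * ((1.0 - alpha) * m1 / v1 + alpha * m2 / v2)  # its mean
        return (1.0 - alpha) * kl_gauss(m1, v1, mg, vg) + alpha * kl_gauss(m2, v2, mg, vg)

    print(geometric_js_gauss(0.0, 1.0, 2.0, 0.5))
    ```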

    Vector-valued distribution regression: a simple and consistent approach

    We address the distribution regression problem (DRP): regressing on the domain of probability measures in the two-stage sampled setup, where only samples from the distributions are given. The DRP formulation offers a unified framework for several important tasks in statistics and machine learning, including multi-instance learning (MIL) and point estimation problems without an analytical solution. Despite the large number of MIL heuristics, there is essentially no theoretically grounded approach to the DRP in the two-stage sampled case. To the best of our knowledge, the only existing technique with consistency guarantees requires kernel density estimation as an intermediate step (which often scales poorly in practice) and requires the domain of the distributions to be a compact Euclidean set. We analyse a simple (analytically computable) ridge regression alternative for the DRP: we embed the distributions into a reproducing kernel Hilbert space and learn the regressor from the embeddings to the outputs. We show that this scheme is consistent in the two-stage sampled setup under mild conditions, for probability measure inputs defined on separable topological domains endowed with kernels, and for vector-valued outputs belonging to an arbitrary separable Hilbert space. Specifically, choosing the kernel on the space of embedded distributions to be linear and the output space to be the real line, we obtain the consistency of set kernels in regression, which had been a 15-year-old open question. In our talk we present (i) the main ideas and results of the consistency analysis, (ii) concrete kernel constructions on mean-embedded distributions, and (iii) two applications (supervised entropy learning, and aerosol prediction based on multispectral satellite images) demonstrating the efficiency of our approach.
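
    The central object in the scheme above is the similarity between two bags of samples, namely the inner product of their empirical kernel mean embeddings (the classical set kernel). A minimal sketch is given below, with a Gaussian base kernel; the bandwidth, bag sizes, and data are illustrative assumptions.

    ```python
    import numpy as np

    # Set kernel = linear kernel on empirical mean embeddings: the average
    # pairwise base-kernel value between two bags of samples.

    def gaussian_gram(X, Y, bandwidth=1.0):
        """Gram matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2*bandwidth^2))."""
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-sq / (2.0 * bandwidth**2))

    def set_kernel(X, Y, bandwidth=1.0):
        """<mu_X, mu_Y>: inner product of the two empirical mean embeddings."""
        return gaussian_gram(X, Y, bandwidth).mean()

    rng = np.random.default_rng(0)
    bag_a = rng.normal(0.0, 1.0, size=(50, 2))  # 50 samples from distribution A
    bag_b = rng.normal(0.5, 1.0, size=(80, 2))  # 80 samples from distribution B
    print(set_kernel(bag_a, bag_b))
    ```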

    Regression on Probability Measures: A Simple and Consistent Algorithm

    We address the distribution regression problem: we regress from probability measures to Hilbert-space-valued outputs, where only samples are available from the input distributions. Many important statistical and machine learning problems can be phrased within this framework, including point estimation tasks without an analytical solution and multi-instance learning. However, due to the two-stage sampled nature of the problem, the theoretical analysis becomes quite challenging: to the best of our knowledge, the only existing method with performance guarantees requires density estimation (which often performs poorly in practice) and requires the distributions to be defined on a compact Euclidean domain. We present a simple, analytically tractable alternative for solving the distribution regression problem: we embed the distributions into a reproducing kernel Hilbert space and perform ridge regression from the embedded distributions to the outputs. We prove that this scheme is consistent under mild conditions (for distributions on separable topological domains endowed with kernels), and we construct explicit finite-sample bounds on the excess risk, holding with high probability, as a function of the sample numbers and the problem difficulty. Specifically, we establish the consistency of set kernels in regression, which was a 15-year-old open question, and we also present new kernels on embedded distributions. The practical efficiency of the studied technique is illustrated in supervised entropy learning and in aerosol prediction using multispectral satellite images.
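
    To make the two-stage sampled setting concrete, the sketch below runs the embed-then-ridge-regress pipeline on a toy problem in which each bag is drawn from N(m, 1) and the label to learn is m itself. The bandwidth, ridge parameter, bag sizes, and the toy task are illustrative choices, not the experimental setup of the paper.

    ```python
    import numpy as np

    def gaussian_gram(X, Y, bw=1.0):
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-sq / (2.0 * bw**2))

    def set_kernel(X, Y, bw=1.0):
        # Inner product of the empirical mean embeddings of two bags of samples.
        return gaussian_gram(X, Y, bw).mean()

    rng = np.random.default_rng(1)
    # Toy distribution regression task: bag i ~ N(m_i, 1), label y_i = m_i.
    means = rng.uniform(-2.0, 2.0, size=30)
    bags = [rng.normal(m, 1.0, size=(60, 1)) for m in means]
    y = means

    # Kernel ridge regression with the set kernel as Gram matrix.
    K = np.array([[set_kernel(a, b) for b in bags] for a in bags])
    lam = 1e-3
    weights = np.linalg.solve(K + lam * len(bags) * np.eye(len(bags)), y)

    test_bag = rng.normal(1.2, 1.0, size=(60, 1))            # unseen bag from N(1.2, 1)
    k_test = np.array([set_kernel(test_bag, b) for b in bags])
    print("prediction:", k_test @ weights)                   # roughly the bag's mean, 1.2
    ```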