
    Informational Divergence Approximations to Product Distributions

    The minimum rate needed to accurately approximate a product distribution based on an unnormalized informational divergence is shown to be a mutual information. This result subsumes results of Wyner on common information and of Han and Verdú on resolvability. The result also extends to the case where the source distribution is unknown but its entropy is known.
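    As a hedged illustration of the resolvability setting the abstract refers to, the Python sketch below builds a random codebook for a binary symmetric channel and measures the unnormalized informational divergence between the codebook-induced output distribution and the target product distribution. The channel, the Bernoulli input, the block length, and the choice of divergence direction D(Q||P) are assumptions made for this toy, not details taken from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 10, 0.1, 0.3            # block length, BSC crossover, input bias (all assumed)
py1 = q * (1 - p) + (1 - q) * p   # P(Y=1) for the target product distribution

def bsc_likelihood(codeword, y):
    """W^n(y | codeword) for a memoryless binary symmetric channel with crossover p."""
    flips = np.count_nonzero(codeword != y)
    return (p ** flips) * ((1 - p) ** (n - flips))

# Enumerate all 2^n output sequences and the target product distribution P_Y^n.
outputs = np.array(list(itertools.product([0, 1], repeat=n)))
ones = outputs.sum(axis=1)
p_target = (py1 ** ones) * ((1 - py1) ** (n - ones))

def h2(x):
    """Binary entropy in bits."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

mi = h2(py1) - h2(p)              # I(X;Y) for the BSC with Bernoulli(q) input

for rate in (0.2, 0.4, 0.6, 0.8):
    M = max(1, int(round(2 ** (n * rate))))
    codebook = rng.binomial(1, q, size=(M, n))
    # Induced output distribution: uniform mixture of channel laws over the codewords.
    q_induced = np.zeros(len(outputs))
    for c in codebook:
        q_induced += np.array([bsc_likelihood(c, y) for y in outputs])
    q_induced /= M
    # Unnormalized informational divergence D(Q || P_Y^n); it tends to shrink
    # once the rate exceeds the mutual information.
    div = np.sum(q_induced * np.log2(q_induced / p_target))
    print(f"R = {rate:.1f} (I(X;Y) ≈ {mi:.2f} bits): D(Q||P^n) ≈ {div:.3f} bits")
```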

    Some results on moments and cumulants.

    In the present paper we discuss various results related to moments and cumulants of probability distributions and of approximations to probability distributions. As the approximations are not necessarily probability distributions themselves, we apply the concepts of moments and cumulants to more general functions. Recursions are deduced for the moments and cumulants of functions in the class R_k[a,b] defined by Dhaene & Sundt (1994). We deduce a simple relation between the De Pril transform and the cumulants of a function. This relation is applied to some classes of approximations to probability distributions, in particular the approximations of Hipp and De Pril.
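    The paper's recursions for the R_k[a,b] classes and the De Pril transform are not reproduced here. As a minimal, generic illustration of passing between moments and cumulants, the Python sketch below implements the standard recursion kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) kappa_k m_{n-k} for a proper probability mass function; the function names and the Poisson sanity check are chosen for illustration only.

```python
from math import comb, factorial

import numpy as np

def raw_moments(pmf, order):
    """Raw moments m_1..m_order of a pmf given as pmf[x] for x = 0, 1, 2, ..."""
    x = np.arange(len(pmf))
    return [float(np.sum(x ** j * pmf)) for j in range(1, order + 1)]

def cumulants_from_moments(m):
    """kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) * kappa_k * m_{n-k}."""
    m = [1.0] + list(m)            # prepend m_0 = 1 so m[j] is the j-th raw moment
    kappa = [0.0] * len(m)         # kappa[0] is unused
    for n in range(1, len(m)):
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa[1:]

# Sanity check: every cumulant of a Poisson(lam) distribution equals lam.
lam = 2.5
support = np.arange(60)
pmf = np.exp(-lam) * lam ** support / np.array([float(factorial(int(k))) for k in support])
print(cumulants_from_moments(raw_moments(pmf, 4)))   # roughly [2.5, 2.5, 2.5, 2.5]
```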

    The Multivariate Watson Distribution: Maximum-Likelihood Estimation and other Aspects

    This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where ±x are equivalent), using them in high dimensions can be difficult. Why so? Largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived, but these are either grossly inaccurate in high dimensions (Directional Statistics, Mardia & Jupp, 2000) or, when reasonably accurate (J. Machine Learning Research W&CP, v2, Bijral et al., 2007, pp. 35-42), they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture modelling with Watson distributions; here we uncover a hitherto unknown connection to the "diametrical clustering" algorithm of Dhillon et al. (Bioinformatics, 19(13), 2003, pp. 1612-1619).
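    The closed-form approximations derived in the paper are not reproduced here. As a sketch of the estimation problem it addresses, the Python snippet below solves the Watson maximum-likelihood equations numerically: the mean axis is the dominant eigenvector of the scatter matrix, and the concentration solves a root-finding problem on the Kummer-function ratio. Function names and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp1f1

def watson_mle(X, bipolar=True):
    """ML estimates (mu, kappa) for a Watson distribution on the unit sphere.

    X: (n, d) array of unit vectors, with x and -x treated as equivalent.
    The density is proportional to exp(kappa * (mu^T x)^2), normalised by the
    Kummer function M(1/2, d/2, kappa).
    """
    n, d = X.shape
    S = X.T @ X / n                        # scatter matrix
    evals, evecs = np.linalg.eigh(S)
    # kappa > 0 (bipolar data): dominant eigenvector; kappa < 0 (girdle): smallest.
    mu = evecs[:, -1] if bipolar else evecs[:, 0]
    r = float(mu @ S @ mu)                 # sufficient statistic, 0 < r < 1

    a, c = 0.5, d / 2.0
    def g(kappa):
        # d/dk log M(a, c, k) = (a/c) * M(a+1, c+1, k) / M(a, c, k)
        return (a / c) * hyp1f1(a + 1, c + 1, kappa) / hyp1f1(a, c, kappa)

    # Solve g(kappa) = r by expanding a bracket around zero (g is increasing).
    lo, hi = -1.0, 1.0
    while g(hi) < r:
        hi *= 2.0
    while g(lo) > r:
        lo *= 2.0
    kappa = brentq(lambda k: g(k) - r, lo, hi)
    return mu, kappa

# Toy usage: axes clustered around the first coordinate direction in 5 dimensions.
rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 5)) + 4.0 * np.eye(5)[0]
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)
mu_hat, kappa_hat = watson_mle(X)
print(np.round(mu_hat, 3), round(kappa_hat, 2))
```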

    Compressing Probability Distributions

    We show how to store good approximations of probability distributions in small space.
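    The abstract gives no details of the scheme, so the sketch below is only a naive baseline for the same task: it assumes fixed-point rounding of each probability (never all the way to zero) followed by renormalization, with KL divergence as the accuracy measure. It is not the paper's method.

```python
import numpy as np

def quantize_pmf(p, bits):
    """Naive baseline: store each probability as a bits-bit fixed-point value,
    never rounding a positive entry down to zero, then renormalise."""
    scale = 2 ** bits
    q = np.maximum(np.round(np.asarray(p, dtype=float) * scale), 1.0) / scale
    return q / q.sum()

def kl_bits(p, q):
    """KL divergence D(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(256))            # a random distribution over 256 symbols
for bits in (4, 8, 12, 16):
    err = kl_bits(p, quantize_pmf(p, bits))
    print(f"{bits:>2} bits/symbol -> KL error ≈ {err:.6f} bits")
```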

    Power of edge exclusion tests in graphical gaussian models

    Asymptotic multivariate normal approximations to the joint distributions of edge exclusion test statistics for saturated graphical Gaussian models are derived. Non-signed and signed square-root versions of the likelihood ratio, Wald and score test statistics are considered. Non-central chi-squared approximations are also considered for the non-signed versions. These approximations are used to estimate the power of edge exclusion tests, and an example is presented.
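    The paper's power approximations are not reproduced here, but the test statistics it studies have a standard form. The Python sketch below, assuming the usual likelihood-ratio statistic -n log(1 - r^2) based on the sample partial correlation, computes the non-signed and signed square-root versions for a single edge; the function name and toy data are illustrative.

```python
import numpy as np

def edge_exclusion_lrt(X, i, j):
    """Likelihood ratio statistic for excluding edge (i, j) in a saturated
    Gaussian graphical model: T = -n * log(1 - r^2), where r is the sample
    partial correlation of variables i and j given all the others.

    Under the null (edge absent), T is asymptotically chi-squared with 1 df;
    its signed square root is asymptotically standard normal.
    """
    n = X.shape[0]
    K = np.linalg.inv(np.cov(X, rowvar=False))      # sample precision matrix
    r = -K[i, j] / np.sqrt(K[i, i] * K[j, j])        # partial correlation
    T = -n * np.log(1.0 - r ** 2)
    return T, np.sign(r) * np.sqrt(T)                # non-signed and signed versions

# Toy usage: X1 and X2 are independent given X3, so edge (0, 1) should be excludable.
rng = np.random.default_rng(0)
x3 = rng.normal(size=2000)
X = np.column_stack([x3 + rng.normal(size=2000),
                     x3 + rng.normal(size=2000),
                     x3])
print(edge_exclusion_lrt(X, 0, 1))   # small statistic
print(edge_exclusion_lrt(X, 0, 2))   # large statistic
```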

    Heterogeneous Basket Options Pricing Using Analytical Approximations

    This paper proposes the use of analytical approximations to price a heterogeneous basket option combining commodity prices, foreign currencies and zero-coupon bonds. We examine the performance of three moment-matching approximations: inverse gamma, Edgeworth expansion around the lognormal, and Johnson family distributions. Since there is no closed-form formula for basket options, we carry out Monte Carlo simulations to generate the benchmark values. We perform a simulation experiment on a whole set of options based on a random choice of parameters. Our results show that the lognormal and Johnson distributions give the most accurate results.
    Keywords: Basket Options, Options Pricing, Analytical Approximations, Monte Carlo Simulation
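    As a hedged sketch of the Monte Carlo benchmark that the paper compares against (the moment-matching formulas themselves are not reproduced), the Python snippet below prices a basket call by simulating correlated geometric Brownian motions under the risk-neutral measure. Treating every basket component as a plain lognormal asset, and all parameter values, are simplifying assumptions for illustration.

```python
import numpy as np

def mc_basket_call(spots, vols, corr, weights, strike, r, T, n_paths=200_000, seed=0):
    """Monte Carlo price (and standard error) of a European call on a weighted
    basket of assets following correlated geometric Brownian motions."""
    rng = np.random.default_rng(seed)
    spots, vols, weights = map(np.asarray, (spots, vols, weights))
    L = np.linalg.cholesky(np.asarray(corr))         # correlate the driving normals
    Z = rng.standard_normal((n_paths, len(spots))) @ L.T
    ST = spots * np.exp((r - 0.5 * vols ** 2) * T + vols * np.sqrt(T) * Z)
    payoff = np.maximum(ST @ weights - strike, 0.0)
    disc = np.exp(-r * T)
    return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

# Illustrative parameters only.
corr = [[1.0, 0.5, 0.3],
        [0.5, 1.0, 0.4],
        [0.3, 0.4, 1.0]]
print(mc_basket_call(spots=[100, 95, 50], vols=[0.2, 0.3, 0.15],
                     corr=corr, weights=[0.4, 0.4, 0.2],
                     strike=90, r=0.03, T=1.0))
```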