
    Hypothesis elimination on a quantum computer

    Hypothesis elimination is a special case of Bayesian updating, where each piece of new data rules out a set of prior hypotheses. We describe how to use Grover's algorithm to perform hypothesis elimination for a class of probability distributions encoded on a register of qubits, and establish a lower bound on the required computational resources. Comment: 8 pages
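    As a sketch of the setup (the classical special case only, not the quantum algorithm itself): hypothesis elimination is a Bayes update whose likelihood is a 0/1 indicator, so each datum zeroes out the ruled-out hypotheses and renormalizes the surviving prior mass. The function name and the eight-hypothesis example below are illustrative assumptions, not taken from the paper.

        import numpy as np

        def eliminate(prior, ruled_out):
            """Bayes update with an indicator likelihood: 0 on ruled-out
            hypotheses, 1 on the rest, followed by renormalization."""
            posterior = prior.copy()
            posterior[list(ruled_out)] = 0.0
            total = posterior.sum()
            if total == 0.0:
                raise ValueError("all hypotheses eliminated")
            return posterior / total

        prior = np.full(8, 1.0 / 8)         # uniform prior over 8 hypotheses
        post = eliminate(prior, {0, 3, 5})  # first datum rules out three of them
        post = eliminate(post, {1})         # second datum rules out one more
        print(post)                         # uniform over the 4 survivors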

    Generative Supervised Classification Using Dirichlet Process Priors.

    Choosing the appropriate parameter prior distributions associated with a given Bayesian model is a challenging problem. Conjugate priors can be selected for reasons of simplicity. However, conjugate priors can be too restrictive to accurately model the available prior information. This paper studies a new generative supervised classifier which assumes that the parameter prior distributions conditioned on each class are mixtures of Dirichlet processes. The motivation for using mixtures of Dirichlet processes is their known ability to model accurately a large class of probability distributions. A Monte Carlo method allowing one to sample from the resulting class-conditional posterior distributions is then studied. The parameters appearing in the class-conditional densities can then be estimated from these generated samples (following Bayesian learning). The proposed supervised classifier is applied to the classification of altimetric waveforms backscattered from different surfaces (oceans, ice, forests, and deserts). This classification is a first step toward developing tools for extracting useful geophysical information from altimetric waveforms backscattered from non-oceanic surfaces.
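    Since the classifier rests on Dirichlet process priors, a minimal sketch of the standard stick-breaking construction may help make this concrete: a draw G ~ DP(alpha, G0) is a discrete random measure whose weights come from Beta(1, alpha) sticks and whose atoms come from the base measure G0. This is not the paper's Monte Carlo sampler; the truncation level and parameter values below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def dp_stick_breaking(alpha, base_sampler, n_atoms=500):
            """Truncated stick-breaking draw from DP(alpha, G0):
            w_k = beta_k * prod_{j<k} (1 - beta_j), atoms sampled from G0."""
            betas = rng.beta(1.0, alpha, size=n_atoms)
            weights = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
            atoms = base_sampler(n_atoms)
            return weights, atoms

        # Base measure G0 = N(0, 1); smaller alpha concentrates mass on fewer atoms.
        weights, atoms = dp_stick_breaking(2.0, lambda n: rng.normal(size=n))
        print(weights[:5], atoms[:5])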

    Analytic crossing probabilities for certain barriers by Brownian motion

    We calculate crossing probabilities and one-sided last exit time densities for a class of moving barriers on an interval [0,T] via Schwartz distributions. We derive crossing probabilities and first hitting time densities for another class of barriers on [0,T] by proving a Schwartz distribution version of the method of images. Analytic expressions for crossing probabilities and related densities are given for new explicit and semi-explicit barriers. Comment: Published at http://dx.doi.org/10.1214/07-AAP488 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
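    For a concrete instance of a crossing probability on [0,T], one can check the classical closed form for a linear barrier a + bt (a > 0), obtained via the reflection principle for drifted Brownian motion, against a discretized Monte Carlo estimate. This is a sketch of the quantity being studied, not of the paper's Schwartz-distribution method; the parameter values are illustrative.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def crossing_prob_linear(a, b, T):
            """Closed form for P(W hits the line a + b*t by time T), a > 0."""
            s = np.sqrt(T)
            return norm.cdf((-a - b * T) / s) + np.exp(-2 * a * b) * norm.cdf((-a + b * T) / s)

        def crossing_prob_mc(a, b, T, n_paths=10_000, n_steps=1_000):
            """Monte Carlo estimate on a time grid (slightly biased low,
            since crossings between grid points are missed)."""
            dt = T / n_steps
            t = np.linspace(dt, T, n_steps)
            paths = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
            return np.mean((paths >= a + b * t).any(axis=1))

        print(crossing_prob_linear(1.0, 0.5, 1.0), crossing_prob_mc(1.0, 0.5, 1.0))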

    The characterization of a class of probability measures by multiplicative renormalization

    We use the multiplicative renormalization method to characterize a class of probability measures on R determined by five parameters. This class of probability measures contains the arcsine and the Wigner semi-circle distributions (the vacuum distributions of the field operators of interacting Fock spaces related to the Anderson model), as well as new nonsymmetric distributions. The corresponding orthogonal polynomials and Jacobi–Szegö parameters are derived from the orthogonal-polynomial generating functions. These orthogonal polynomials can be expressed in terms of the Chebyshev polynomials of the second kind.
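    The link to Chebyshev polynomials of the second kind can be checked numerically: the U_n are orthonormal with respect to the Wigner semicircle density (2/pi)*sqrt(1 - x^2) on [-1, 1]. The sketch below verifies this with Gauss-Legendre quadrature; it illustrates the orthogonality claim only and does not reproduce the paper's five-parameter class or its renormalization argument.

        import numpy as np
        from scipy.special import eval_chebyu  # Chebyshev polynomials of the second kind

        # Gauss-Legendre nodes and weights on [-1, 1].
        x, w = np.polynomial.legendre.leggauss(200)
        semicircle = (2.0 / np.pi) * np.sqrt(1.0 - x ** 2)  # Wigner semicircle density

        # Gram matrix of U_0..U_4 under the semicircle weight.
        gram = np.array([[np.sum(w * semicircle * eval_chebyu(m, x) * eval_chebyu(n, x))
                          for n in range(5)] for m in range(5)])
        print(np.round(gram, 3))  # approximately the 5x5 identity matrix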

    Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means

    Bayesian classification labels observations based on given prior information, namely class a priori and class-conditional probabilities. Bayes' risk is the minimum expected classification cost, achieved by the Bayes' test, the optimal decision rule. When no cost is incurred for correct classification and a unit cost is charged for misclassification, the Bayes' test reduces to the maximum a posteriori decision rule, and Bayes' risk simplifies to Bayes' error, the probability of error. Since calculating this probability of error is often intractable, several techniques have been devised to bound it with closed-form formulas, thereby introducing measures of similarity and divergence between distributions such as the Bhattacharyya coefficient and its associated Bhattacharyya distance. The Bhattacharyya upper bound can be further tightened using the Chernoff information, which relies on the notion of the best error exponent. In this paper, we first express Bayes' risk using the total variation distance on scaled distributions. We then elucidate and extend the Bhattacharyya and Chernoff upper bound mechanisms using generalized weighted means. As a byproduct, we obtain novel notions of statistical divergences and affinity coefficients. We illustrate our technique by deriving new upper bounds for the univariate Cauchy and the multivariate t-distributions, and show experimentally that those bounds are not far from the computationally intractable Bayes' error. Comment: 22 pages, includes R code. To appear in Pattern Recognition Letters.
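    As a small worked instance of the Bhattacharyya mechanism, consider two classes with equal priors and univariate Gaussian class-conditional densities. The Bayes' error is the integral of min(pi_1 p_1, pi_2 p_2), and it is bounded above by sqrt(pi_1 pi_2) * exp(-D_B), where D_B is the closed-form Bhattacharyya distance between Gaussians. The sketch below uses Gaussians rather than the paper's Cauchy or t-distribution cases, and all parameter values are illustrative.

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        pi1 = pi2 = 0.5
        m1, s1, m2, s2 = 0.0, 1.0, 2.0, 1.5
        p1, p2 = norm(m1, s1), norm(m2, s2)

        # Bayes' error: integral of the pointwise minimum of the scaled densities.
        bayes_error, _ = quad(lambda x: min(pi1 * p1.pdf(x), pi2 * p2.pdf(x)),
                              -np.inf, np.inf)

        # Bhattacharyya distance between two univariate Gaussians (closed form),
        # giving the upper bound P_e <= sqrt(pi1 * pi2) * exp(-D_B).
        db = ((m1 - m2) ** 2 / (4 * (s1 ** 2 + s2 ** 2))
              + 0.5 * np.log((s1 ** 2 + s2 ** 2) / (2 * s1 * s2)))
        print(bayes_error, np.sqrt(pi1 * pi2) * np.exp(-db))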