1,182 research outputs found

    Approximate Integrated Likelihood via ABC methods

    We propose a novel use of a recent computational tool for Bayesian inference, namely the Approximate Bayesian Computation (ABC) methodology. ABC is a way of handling models whose likelihood function is intractable, unavailable, or too costly to evaluate; in particular, we consider the problem of eliminating the nuisance parameters from a complex statistical model in order to produce a likelihood function that depends on the quantity of interest only. Given a proper prior for the entire parameter vector, we propose to approximate the integrated likelihood by the ratio of kernel estimators of the marginal posterior and the marginal prior for the quantity of interest. We present several examples. Comment: 28 pages, 8 figures
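
    A minimal rejection-ABC sketch of the ratio-of-kernel-estimators construction described above (the function names, toy model, and tolerance below are our own illustration, not taken from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

def abc_integrated_likelihood(y_obs, sample_prior, simulate, distance, psi,
                              eps, n_sims=20000, seed=0):
    # Approximate the integrated likelihood of psi(theta), up to a constant,
    # by the ratio of kernel density estimates of the ABC marginal posterior
    # and the marginal prior of psi.
    rng = np.random.default_rng(seed)
    thetas = [sample_prior(rng) for _ in range(n_sims)]
    psi_prior = np.array([psi(t) for t in thetas])
    psi_post = np.array([psi(t) for t in thetas
                         if distance(simulate(t, rng), y_obs) <= eps])
    post_kde, prior_kde = gaussian_kde(psi_post), gaussian_kde(psi_prior)
    return lambda p: post_kde(p) / prior_kde(p)

# Toy check: N(mu, sigma) observations, mu of interest, sigma a nuisance.
rng = np.random.default_rng(1)
y = rng.normal(1.0, 2.0, size=50)
lik = abc_integrated_likelihood(
    y_obs=(y.mean(), y.std()),
    sample_prior=lambda r: (r.normal(0.0, 5.0), r.uniform(0.1, 5.0)),
    simulate=lambda th, r: ((lambda z: (z.mean(), z.std()))
                            (r.normal(th[0], th[1], size=50))),
    distance=lambda a, b: float(np.hypot(a[0] - b[0], a[1] - b[1])),
    psi=lambda th: th[0],
    eps=0.5)
print(lik(np.linspace(-1.0, 3.0, 9)))  # unnormalized integrated likelihood grid
```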

    Jeffreys priors for mixture estimation

    While Jeffreys priors usually are well-defined for the parameters of mixtures of distributions, they are not available in closed form. Furthermore, they are often improper. Hence, they have never been used to draw inference on mixture parameters. In this paper we study the implementation and the properties of Jeffreys priors in several mixture settings, show that the associated posterior distributions are most often improper, and then propose a noninformative alternative for the analysis of mixtures.
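
    For reference, the Jeffreys prior is built from the Fisher information of the model; for a K-component mixture the expectation runs over the full mixture density, which is why no closed form exists:

```latex
\pi_J(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I(\theta) = \mathbb{E}_\theta\!\left[
  \nabla_\theta \log f(X \mid \theta)\,
  \nabla_\theta \log f(X \mid \theta)^{\mathsf T} \right],
\qquad
f(x \mid \theta) = \sum_{k=1}^{K} w_k\, g(x \mid \xi_k),
```

    where theta collects the weights w_k and the component parameters xi_k.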

    Approximate Bayesian inference in semiparametric copula models

    We describe a simple method for making inference on a functional of a multivariate distribution. The method relies on a copula representation of the multivariate distribution and on the properties of an Approximate Bayesian Monte Carlo algorithm in which the proposed values of the functional of interest are weighted in terms of their empirical likelihood. This method is particularly useful when the "true" likelihood function associated with the working model is too costly to evaluate or when the working model is only partially specified. Comment: 27 pages, 18 figures
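
    The copula representation in question is Sklar's decomposition, under which functionals of the dependence structure can be written in terms of the copula C alone; Spearman's rho (our example of one standard such functional, not necessarily the paper's) illustrates this:

```latex
F(x_1, \dots, x_d) = C\big( F_1(x_1), \dots, F_d(x_d) \big), \qquad
\rho_S = 12 \int_0^1 \!\! \int_0^1 C(u, v)\, du\, dv - 3 .
```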

    Jeffreys priors for mixture estimation: properties and alternatives

    While Jeffreys priors usually are well-defined for the parameters of mixtures of distributions, they are not available in closed form. Furthermore, they are often improper. Hence, they have never been used to draw inference on mixture parameters. The implementation and the properties of Jeffreys priors in several mixture settings are studied here. It is shown that the associated posterior distributions are most often improper. Nevertheless, the Jeffreys prior for the mixture weights, conditionally on the parameters of the mixture components, is shown to be conservative with respect to the number of components in the case of overfitted mixtures, and it can therefore be used as a default prior in this context. Comment: arXiv admin note: substantial text overlap with arXiv:1511.0314
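
    Because the Fisher information of a mixture has no closed form, one standard numerical device (a sketch under our own naming, not necessarily the paper's implementation) approximates it by a Monte Carlo average of score outer products:

```python
import numpy as np

def jeffreys_log_prior(theta, sample_model, score, n_mc=10000, seed=0):
    # Monte Carlo approximation of log pi_J(theta) = 0.5 * log det I(theta),
    # with I(theta) ~ (1/N) sum_i s(x_i, theta) s(x_i, theta)^T and the
    # x_i drawn from the mixture density f(. | theta) itself.
    rng = np.random.default_rng(seed)
    xs = sample_model(theta, n_mc, rng)          # draws from the mixture
    S = np.array([score(x, theta) for x in xs])  # (n_mc, dim) score vectors
    info = S.T @ S / n_mc                        # empirical Fisher information
    sign, logdet = np.linalg.slogdet(info)
    return 0.5 * logdet                          # log of the unnormalized prior
```

    The prior is known only up to a constant, which is why propriety of the resulting posterior has to be checked case by case.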

    Accelerating Metropolis-Hastings algorithms: Delayed acceptance with prefetching

    MCMC algorithms such as Metropolis-Hastings are slowed down by the computation of complex target distributions, as exemplified by huge datasets. We offer in this paper an approach to reducing the computational cost of such algorithms through a simple and universal divide-and-conquer strategy. The idea behind this generic acceleration is to divide the acceptance step into several parts, aiming at a reduction in computing time that outranks the corresponding reduction in acceptance probability. The division decomposes the "prior x likelihood" term into a product in which some components are much cheaper to compute than others. Each component is compared sequentially with a uniform variate, the first rejection signalling that the proposed value is considered no further. This approach can in turn be accelerated as part of a prefetching algorithm that takes advantage of the parallel capabilities of the computer at hand. We illustrate these accelerating features on a series of toy and realistic examples. Comment: 20 pages, 12 figures, 2 tables, submitted
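
    A minimal sketch of the delayed-acceptance step for a symmetric proposal (the names are ours; the prefetching layer, which speculatively evaluates both the accept and reject branches in parallel, is omitted):

```python
import numpy as np

def delayed_acceptance_mh(x0, log_factors, propose, n_iter, seed=0):
    # log_factors: callables returning the log of each factor of the
    # (unnormalized) "prior x likelihood" decomposition, cheapest first.
    # Assumes a symmetric proposal, so each stage compares only the
    # factor ratio against an independent uniform variate.
    rng = np.random.default_rng(seed)
    x, chain = x0, [x0]
    for _ in range(n_iter):
        y = propose(x, rng)
        accept = True
        for log_f in log_factors:
            if np.log(rng.uniform()) >= log_f(y) - log_f(x):
                accept = False   # first rejection: the remaining, costlier
                break            # factors are never evaluated
        if accept:
            x = y
        chain.append(x)
    return np.array(chain)
```

    Ordering the factors from cheapest to most expensive maximizes the expected savings, since most rejections are then decided before the costly terms are ever touched.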

    Constraining the Warm Dark Matter Particle Mass through Ultra-Deep UV Luminosity Functions at z=2

    We compute the mass function of galactic dark matter halos for different values of the Warm Dark Matter (WDM) particle mass m_X and compare it with the abundance of ultra-faint galaxies derived from the deepest UV luminosity function available so far at redshift z~2. The magnitude limit M_UV=-13 reached by such observations allows us to probe the WDM mass functions down to scales close to or smaller than the half-mode mass scale ~10^9 M_sun. This allows for an efficient discrimination among predictions for different m_X, which turns out to be independent of the star formation efficiency adopted to associate the observed UV luminosities of galaxies with the corresponding dark matter masses. Adopting a conservative approach to take into account the existing theoretical uncertainties in the galaxy halo mass function, we derive a robust limit m_X>1.8 keV for the mass of thermal relic WDM particles when comparing with the measured abundance of the faintest galaxies, while m_X>1.5 keV is obtained when we compare with the Schechter fit to the observed luminosity function. The corresponding lower limit for sterile neutrinos depends on the modeling of the production mechanism; for instance, m_sterile > 4 keV holds for the Shi-Fuller mechanism. We discuss the impact of observational uncertainties on the above bound on m_X. As a baseline for comparison with forthcoming observations from the HST Frontier Fields, we provide predictions for the abundance of faint galaxies with M_UV=-13 for different values of m_X and of the star formation efficiency, valid up to z~4. Comment: 14 pages, 3 figures. Accepted for publication in The Astrophysical Journal
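
    For context, one common convention (our addition; the fitting form for thermal relics is standard, though the paper may adopt a slightly different variant) defines the half-mode mass from the WDM-to-CDM transfer function:

```latex
T(k) = \left[ 1 + (\alpha k)^{2\nu} \right]^{-5/\nu}, \qquad
T^2(k_{\rm hm}) = \tfrac{1}{2}, \qquad
M_{\rm hm} = \frac{4\pi}{3}\, \bar{\rho}_m \left( \frac{\pi}{k_{\rm hm}} \right)^{3},
```

    with nu ~ 1.12 and a breaking scale alpha that decreases with increasing m_X, so heavier particles suppress structure only at smaller mass scales.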