
    Adaptive Mixture of Student-t Distributions as a Flexible Candidate Distribution for Efficient Simulation: The R Package AdMit

    This paper presents the R package AdMit, which provides flexible functions to approximate a target distribution and to efficiently generate a sample of random draws from it, given only a kernel of the target density function. The core algorithm consists of the function AdMit, which fits an adaptive mixture of Student-t distributions to the density of interest. Importance sampling or the independence chain Metropolis-Hastings algorithm is then used to obtain quantities of interest for the target density, using the fitted mixture as the importance or candidate density. The estimation procedure is fully automatic and thus avoids the time-consuming and difficult task of tuning a sampling algorithm. The relevance of the package is shown in two examples. The first illustrates in detail the use of the functions provided by the package on a bivariate bimodal distribution. The second shows the relevance of the adaptive mixture procedure through the Bayesian estimation of a mixture of ARCH models fitted to foreign exchange log-returns data. The methodology is compared with standard importance sampling and Metropolis-Hastings using a naive candidate density, and with the Griddy-Gibbs approach.
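    The two-stage idea (fit a Student-t mixture to the target kernel, then use it as the candidate for importance sampling or independence Metropolis-Hastings) can be sketched in a few lines. The Python sketch below is not the AdMit implementation: it skips the adaptive fitting stage and uses a fixed, hand-chosen two-component Student-t mixture as the candidate for a hypothetical bivariate bimodal kernel, purely to illustrate the importance-sampling step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_kernel(x):
    """Unnormalised log target: an illustrative bivariate bimodal kernel."""
    return np.logaddexp(stats.multivariate_normal(mean=[-2.0, 0.0]).logpdf(x),
                        stats.multivariate_normal(mean=[2.0, 0.0]).logpdf(x))

# Hand-chosen candidate: a two-component mixture of bivariate Student-t densities,
# standing in for the mixture that the adaptive fitting stage would construct.
weights = np.array([0.5, 0.5])
modes = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]
scales = [np.eye(2), np.eye(2)]
df = 3
components = [stats.multivariate_t(loc=m, shape=s, df=df) for m, s in zip(modes, scales)]

def sample_candidate(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return np.stack([components[k].rvs(random_state=rng) for k in comp])

def log_candidate(x):
    parts = [np.log(w) + c.logpdf(x) for w, c in zip(weights, components)]
    return np.logaddexp.reduce(parts, axis=0)

# Importance sampling with self-normalised weights.
x = sample_candidate(5_000)
logw = log_kernel(x) - log_candidate(x)
w = np.exp(logw - logw.max())
w /= w.sum()
print("estimated target mean:", (w[:, None] * x).sum(axis=0))
```

    Swapping the importance-sampling step for an independence chain Metropolis-Hastings sampler changes only the last few lines: proposals are drawn from the same mixture and accepted with the usual ratio of kernel-over-candidate weights.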

    Convergence rates for Bayesian density estimation of infinite-dimensional exponential families

    We study the rate of convergence of posterior distributions in density estimation problems for log-densities in periodic Sobolev classes characterized by a smoothness parameter p. The posterior expected density provides a nonparametric estimation procedure attaining the optimal minimax rate of convergence under Hellinger loss if the posterior distribution achieves the optimal rate over certain uniformity classes. A prior on the density class of interest is induced by a prior on the coefficients of the trigonometric series expansion of the log-density. We show that when p is known, the posterior distribution of a Gaussian prior achieves the optimal rate provided the prior variances die off sufficiently rapidly. For a mixture of normal distributions, the mixing weights on the dimension of the exponential family are assumed to be bounded below by an exponentially decreasing sequence. To avoid the use of infinite bases, we develop priors that cut off the series at a sample-size-dependent truncation point. When the degree of smoothness is unknown, a finite mixture of normal priors indexed by the smoothness parameter, which is also assigned a prior, produces the best rate. A rate-adaptive estimator is derived. (Published at http://dx.doi.org/10.1214/009053606000000911 in the Annals of Statistics, http://www.imstat.org/aos/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
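    To make the prior construction concrete, the following Python sketch draws a random density on the unit interval from a Gaussian prior on the coefficients of a truncated trigonometric series expansion of the log-density; the variance decay rate and the truncation point are illustrative assumptions standing in for the paper's smoothness-dependent and sample-size-dependent choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_density(p=2.0, truncation=50, grid_size=512):
    """Draw one density on [0, 1) from a Gaussian trigonometric-series prior.

    Coefficient standard deviations decay like j**-(p + 1/2), an illustrative
    choice mimicking a prior tuned to smoothness p; `truncation` plays the role
    of the sample-size-dependent cut-off of the series.
    """
    x = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    log_f = np.zeros_like(x)
    for j in range(1, truncation + 1):
        sd = j ** -(p + 0.5)                    # prior std dev for frequency j
        a, b = rng.normal(scale=sd, size=2)     # cosine and sine coefficients
        log_f += a * np.cos(2 * np.pi * j * x) + b * np.sin(2 * np.pi * j * x)
    f = np.exp(log_f)
    f /= f.mean()    # Riemann-sum normalisation, so the draw integrates to one
    return x, f

x, f = sample_density()
print("integral over [0, 1) ≈", f.mean())   # ≈ 1 by construction
```

    Larger values of p make the coefficient variances die off faster and the sampled log-densities visibly smoother, which is the mechanism an adaptive procedure exploits by also placing a prior on p.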

    SPADES and mixture models

    This paper studies sparse density estimation via ℓ1 penalization (SPADES). We focus on estimation in high-dimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probability densities and that it yields minimax adaptive density estimates. These results are based on a general sparsity oracle inequality that the SPADES estimates satisfy. We offer a data-driven method for the choice of the tuning parameter used in the construction of SPADES. The method uses the generalized bisection method first introduced in [bb09]. The suggested procedure bypasses the need for a grid search and offers substantial computational savings. We complement our theoretical results with a simulation study that employs this method for approximations of one- and two-dimensional densities with mixtures. The numerical results strongly support our theoretical findings. (Published at http://dx.doi.org/10.1214/09-AOS790 in the Annals of Statistics, http://www.imstat.org/aos/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
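    As a rough illustration of the general idea (not the paper's exact procedure), the Python sketch below fits an ℓ1-penalized linear combination of a dictionary of normal densities to simulated data by minimising the usual L2-type criterion. The dictionary, the simulated sample, the fixed tuning constant and the generic optimizer are all assumptions; in particular, the paper's generalized bisection rule for the tuning parameter is not reproduced here.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic data from a two-component normal mixture (illustrative only).
n = 500
data = np.concatenate([rng.normal(-2.0, 0.7, n // 2), rng.normal(2.0, 1.0, n // 2)])

# Dictionary of candidate densities: unit-variance normals on a grid of means.
means = np.linspace(-4.0, 4.0, 17)
dictionary = [stats.norm(loc=m, scale=1.0) for m in means]

# Pieces of the L2-type criterion for f_lambda = sum_j lambda_j * f_j:
#   ||f_lambda||^2  -  (2/n) * sum_i f_lambda(X_i)  +  penalty * sum_j |lambda_j|
F = np.array([d.pdf(data) for d in dictionary])      # (J, n) dictionary evaluations
# Gram matrix: the integral of f_j * f_k for unit-variance normals equals the
# N(0, sqrt(2)) density evaluated at the difference of their means.
G = stats.norm(scale=np.sqrt(2.0)).pdf(means[:, None] - means[None, :])
penalty = 0.05   # fixed tuning constant (the paper tunes this data-adaptively)

def objective(lam):
    return lam @ G @ lam - 2.0 * (F.T @ lam).mean() + penalty * np.abs(lam).sum()

# Generic derivative-free optimizer; a production implementation would use a
# lasso-type coordinate-descent solver instead.
lam0 = np.full(len(dictionary), 1.0 / len(dictionary))
lam_hat = minimize(objective, lam0, method="Powell").x
print("retained dictionary elements:", np.flatnonzero(np.abs(lam_hat) > 1e-2))
```

    The coefficients that survive the penalty indicate which dictionary elements the fit retains, which is how sparsity in the mixture representation is read off the estimate.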

    Adaptive Smoothing Parameter in Kernel Density Estimation and Parameter Estimation in Normal Mixture Distributions

    Kernel density estimation is a widely used tool in nonparametric density estimation. The choice of a kernel function and of a smoothing parameter are two important issues in implementing kernel density estimation procedures. In this paper, four different kernel functions are considered in implementing an adaptive selection procedure for the smoothing parameter. In the simulation study, a skewed bimodal density, a mixture of two normal distributions, is considered along with the standard normal and standard exponential densities. For the skewed bimodal data, parameter estimation is also explored in the context of mixtures of normal distributions. A maximum likelihood estimation procedure is implemented for parameter estimation in mixtures of normal distributions.
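    For the mixture-estimation part, the standard maximum likelihood recipe is the EM algorithm; a minimal Python sketch for a two-component normal mixture is given below, with a simulated skewed bimodal sample and starting values that are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Skewed bimodal sample: a mixture of two normals (illustrative parameters).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.5, 100)])

# EM algorithm for a two-component normal mixture (maximum likelihood).
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each observation.
    dens = np.stack([p * stats.norm(m, s).pdf(x) for p, m, s in zip(pi, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted maximum likelihood updates of weights, means, std devs.
    nk = resp.sum(axis=1)
    pi = nk / len(x)
    mu = (resp * x).sum(axis=1) / nk
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights:", pi.round(3), "means:", mu.round(3), "std devs:", sigma.round(3))
```

    Each iteration first computes the posterior probability that each observation came from each component (E-step) and then re-estimates the weights, means and standard deviations by weighted maximum likelihood (M-step); the observed-data log-likelihood never decreases along the way.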