
    The Generalized Method of Moments in the Bayesian Framework and a Model of Moment Selection Criterion

    While the classical framework offers a rich set of limited information procedures such as GMM and related methods, the Bayesian framework has far fewer. We develop a limited information procedure in the Bayesian framework that does not require knowledge of the likelihood function. The procedure is a Bayesian counterpart of classical GMM but has advantages over it in practical applications. The limited information our approach requires is a set of moment conditions rather than the likelihood function, paralleling classical GMM. These moment conditions are obtained in the Bayesian framework from the condition under which the Bayes estimator and the GMM estimator coincide. From them, a posterior probability measure is derived that forms the basis of our limited information Bayesian procedure. This limited information posterior has desirable properties for both small and large sample analyses. The paper also provides an alternative route to a limited information posterior based on a variant of the empirical likelihood method, in which an empirical likelihood is constructed from the GMM moment conditions; this alternative yields asymptotically the same result as the approach above. Building on our limited information method, we develop a procedure for selecting the moment conditions used in GMM. This moment selection procedure extends Bayesian model selection to the Bayesian semi-parametric, limited information framework. It is shown that, under some conditions, the proposed moment selection procedure is a consistent decision rule.
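    As a rough illustration of the form such a limited information posterior typically takes (a sketch in our own notation, not necessarily the paper's exact construction): given moment conditions E[g(x_i, θ)] = 0 with sample average ḡ_n(θ), a weighting matrix W_n, and a prior π(θ), a GMM-based quasi-posterior can be written as

        p_n(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,
            \exp\!\Big\{ -\tfrac{n}{2}\, \bar g_n(\theta)^{\top} W_n\, \bar g_n(\theta) \Big\},
        \qquad
        \bar g_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} g(x_i, \theta),

    so the GMM criterion plays the role of a log-likelihood. Candidate sets of moment conditions can then, in principle, be compared through the marginal "likelihoods" they imply, which is the spirit of the moment selection criterion described above.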

    Marginal Likelihood Estimation with the Cross-Entropy Method

    We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. The approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, which are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. Because we generate independent draws rather than correlated MCMC draws, far less additional simulation effort is needed if one wishes to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications the proposed CE method compares favorably to existing estimators.
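    A minimal sketch of the adaptive importance sampling idea (not the authors' algorithm or code; the toy model, the Gaussian importance family, and all names below are our own illustrative choices):

    # Sketch: fit a Gaussian importance density by a cross-entropy style
    # moment-matching step, then estimate the marginal likelihood of a toy
    # conjugate normal-normal model, for which the exact answer is available
    # as a check. Everything here is illustrative.
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(0)
    y = rng.normal(1.0, 1.0, size=50)          # data: y_i | theta ~ N(theta, 1)
    prior_mu, prior_sd = 0.0, 10.0             # prior: theta ~ N(0, 10^2)

    def log_lik(theta):
        theta = np.atleast_1d(theta)
        return np.sum(norm.logpdf(y[None, :], loc=theta[:, None], scale=1.0), axis=1)

    def log_prior(theta):
        return norm.logpdf(theta, loc=prior_mu, scale=prior_sd)

    # Cross-entropy step: refit the Gaussian importance density to
    # posterior-weighted draws; for a Gaussian (exponential) family, weighted
    # moment matching solves the minimum Kullback-Leibler (CE) program.
    mu, sd = 0.0, 5.0
    for _ in range(5):
        draws = rng.normal(mu, sd, size=2_000)
        logw = log_lik(draws) + log_prior(draws) - norm.logpdf(draws, loc=mu, scale=sd)
        w = np.exp(logw - logw.max())
        mu = np.sum(w * draws) / np.sum(w)
        sd = np.sqrt(np.sum(w * (draws - mu) ** 2) / np.sum(w))

    # Importance-sampling estimate of the marginal likelihood from independent draws.
    draws = rng.normal(mu, sd, size=20_000)
    logw = log_lik(draws) + log_prior(draws) - norm.logpdf(draws, loc=mu, scale=sd)
    log_ml = logw.max() + np.log(np.mean(np.exp(logw - logw.max())))

    # Exact log marginal likelihood of this toy model: y ~ N(prior_mu * 1, I + prior_sd^2 * 11').
    n = y.size
    exact = multivariate_normal(mean=np.full(n, prior_mu),
                                cov=np.eye(n) + prior_sd**2 * np.ones((n, n))).logpdf(y)
    print(f"CE importance-sampling estimate: {log_ml:.3f}   exact: {exact:.3f}")

    The toy model is chosen only so the exact marginal likelihood is available for comparison; in a realistic application one would replace log_lik and log_prior with the model at hand and, as in the paper, use a richer parametric family for the importance density.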

    Robust Bayesian inference via coarsening

    The standard approach to Bayesian inference is based on the assumption that the distribution of the data belongs to the chosen model class. However, even a small violation of this assumption can have a large impact on the outcome of a Bayesian procedure. We introduce a simple, coherent approach to Bayesian inference that improves robustness to perturbations from the model: rather than conditioning on the data exactly, one conditions on a neighborhood of the empirical distribution. When the neighborhoods are based on relative entropy estimates, the resulting "coarsened" posterior can be approximated by simply tempering the likelihood, that is, by raising it to a fractional power. Inference is therefore often easily implemented with standard methods, and analytical solutions are available with conjugate priors. Some theoretical properties are derived, and we illustrate the approach on real and simulated data using mixture models, autoregressive models of unknown order, and variable selection in linear regression.
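    To make the tempering statement concrete, here is a sketch in our own notation (the paper's precise statement and regularity conditions differ in the details). If the neighborhood is defined through an estimate of the relative entropy and its radius is given an exponential prior with rate α, the coarsened posterior satisfies, approximately,

        p(\theta \mid \text{data}) \;\propto\; \pi(\theta)\, \prod_{i=1}^{n} p(x_i \mid \theta)^{\zeta_n},
        \qquad \zeta_n = \frac{\alpha}{\alpha + n},

    so each likelihood contribution is raised to a fractional power, and when the untempered model is conjugate the tempered model remains conjugate, which is what yields the analytical solutions mentioned above.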

    Gibbs Max-margin Topic Models with Data Augmentation

    Max-margin learning is a powerful approach to building classifiers and structured output predictors. Recent work on max-margin supervised topic models has successfully integrated it with Bayesian topic models to discover discriminative latent semantic structures and make accurate predictions for unseen test data. However, the resulting learning problems are usually hard to solve because of the non-smoothness of the margin loss. Existing approaches to building max-margin supervised topic models rely on an iterative procedure that solves multiple latent SVM subproblems under additional mean-field assumptions on the desired posterior distributions. This paper presents an alternative approach based on a new max-margin loss: Gibbs max-margin supervised topic models, a latent variable Gibbs classifier that discovers hidden topic representations for various tasks, including classification, regression and multi-task learning. Gibbs max-margin supervised topic models minimize an expected margin loss, an upper bound of the existing margin loss derived from an expected prediction rule. By introducing augmented variables and integrating out the Dirichlet variables analytically by conjugacy, we develop simple Gibbs sampling algorithms with no restricting assumptions and no need to solve SVM subproblems. Furthermore, each step of the "augment-and-collapse" Gibbs sampling algorithms has an analytical conditional distribution from which samples can be easily drawn. Experimental results demonstrate significant improvements in time efficiency. Classification performance is also significantly improved over competitors on binary, multi-class and multi-label classification tasks.
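    The data augmentation behind such samplers is commonly the scale-mixture representation of the hinge loss in the style of Polson and Scott; as a sketch in our own notation (with ζ denoting a document's margin variable and the regularization constant absorbed), the per-document pseudo-likelihood can be written as

        e^{-2\max(0,\,\zeta)} \;=\; \int_0^{\infty} \frac{1}{\sqrt{2\pi\lambda}}
            \exp\!\Big( -\frac{(\zeta + \lambda)^2}{2\lambda} \Big)\, \mathrm{d}\lambda,

    so that, conditional on the augmented variable λ, the classifier weights enter through a Gaussian kernel; combined with the conjugacy that allows the Dirichlet variables to be integrated out, this is what yields Gibbs conditionals in closed form.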