Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models
The sampling efficiency of MCMC methods for Bayesian inference in stochastic volatility models depends strongly on the actual parameter values. While draws from the posterior under the standard centered parameterization break down when the volatility-of-volatility parameter in the latent state equation is small, non-centered versions of the model show deficiencies for highly persistent latent variable series. The novel approach of ancillarity-sufficiency interweaving has recently been shown to overcome these issues for a broad class of multilevel models. In this paper, we demonstrate how such an interweaving strategy can be applied to stochastic volatility models in order to greatly improve sampling efficiency for all parameters and throughout the entire parameter range. Moreover, this method of "combining best of different worlds" makes inference feasible for parameter constellations that previously could not be estimated without selecting a particular parameterization beforehand.
Series: Research Report Series / Department of Statistics and Mathematics
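A minimal sketch of the two parameterizations that the interweaving strategy alternates between, for an AR(1) log-volatility process; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SV parameters (mu: level, phi: persistence, sigma: vol-of-vol).
mu, phi, sigma = -1.0, 0.95, 0.2
T = 500

# Latent log-volatility in the centered parameterization:
#   h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t
h = np.empty(T)
h[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()

# Non-centered states: h_tilde_t = (h_t - mu) / sigma, so that
#   h_tilde_t = phi * h_tilde_{t-1} + eta_t  (mu and sigma move to the likelihood).
h_tilde = (h - mu) / sigma

# The interweaving move switches between the two state representations; here we
# only verify that the switch is an exact bijection.
h_back = mu + sigma * h_tilde
print(np.allclose(h, h_back))  # True
```

In a full sampler, the parameters would be redrawn once given the centered states and once given the non-centered states in each sweep; the sketch only demonstrates the exact state transformation that links the two updates.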
Hierarchical shrinkage in time-varying parameter models
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: i) time varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
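The constant-coefficient Bayesian Lasso mentioned above rests on the normal scale-mixture representation of the Laplace prior; a minimal simulation check of that representation (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

lam, n = 2.0, 500_000
# Scale-mixture representation behind the Bayesian Lasso: a normal coefficient
# with an exponential mixing distribution on its variance has a Laplace marginal.
tau2 = rng.exponential(scale=2.0 / lam**2, size=n)   # Exp(rate = lam^2 / 2)
beta = rng.normal(0.0, np.sqrt(tau2))

# Compare with a direct Laplace(0, 1/lam) sample: both have variance 2 / lam^2.
direct = rng.laplace(0.0, 1.0 / lam, size=n)
print(abs(beta.var() - direct.var()) < 0.02)  # True
```

This representation is what makes conditionally Gaussian Gibbs updates possible; the paper's contribution is to carry such shrinkage over to the time-varying parameter setting.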
From here to infinity - sparse finite versus Dirichlet process mixtures in model-based clustering
In model-based clustering, mixture models are used to group data points into
clusters. A useful concept introduced for Gaussian mixtures by Malsiner-Walli
et al. (2016) are sparse finite mixtures, where the prior distribution on the
weight distribution of a mixture with K components is chosen in such a way
that a priori the number of clusters in the data is random and is allowed to be
smaller than K with high probability. The number of clusters is then inferred
a posteriori from the data.
The present paper makes the following contributions in the context of sparse
finite mixture modelling. First, it is illustrated that the concept of sparse
finite mixture is very generic and easily extended to cluster various types of
non-Gaussian data, in particular discrete data and continuous multivariate data
arising from non-Gaussian clusters. Second, sparse finite mixtures are compared
to Dirichlet process mixtures with respect to their ability to identify the
number of clusters. For both model classes, a random hyper prior is considered
for the parameters determining the weight distribution. By suitable matching of
these priors, it is shown that the choice of this hyper prior is far more
influential on the cluster solution than whether a sparse finite mixture or a
Dirichlet process mixture is taken into consideration.
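The sparsity mechanism can be illustrated with a short simulation: under a symmetric Dirichlet prior on the weights, a small concentration parameter makes the a priori number of non-empty clusters much smaller than the number of components (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, reps = 10, 100, 2000

def prior_clusters(e0):
    """A priori mean number of non-empty clusters when n points are allocated
    to a K-component mixture with symmetric Dirichlet(e0) weights."""
    counts = []
    for _ in range(reps):
        w = rng.dirichlet(np.full(K, e0))
        z = rng.choice(K, size=n, p=w)
        counts.append(len(np.unique(z)))
    return np.mean(counts)

# A small e0 makes most weights tiny, so far fewer than K clusters are
# occupied a priori -- the mechanism behind sparse finite mixtures.
print(prior_clusters(0.01))  # close to 1
print(prior_clusters(4.0))   # close to K
```

Placing a hyperprior on e0, as in the abstract, lets the data inform how sparse this prior on the cluster count should be.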
Bayesian Inference in the Multinomial Logit Model
The multinomial logit model (MNL) possesses a latent variable
representation in terms of random variables following a multivariate logistic distribution. Based on multivariate finite mixture approximations of the multivariate
logistic distribution, various data-augmented Metropolis-Hastings algorithms are developed for Bayesian inference in the MNL model.
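The latent variable representation can be checked in its simplest univariate form: the difference of two independent standard Gumbel utilities is standard logistic, the building block of the multivariate logistic distribution referred to above (sample size hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
# Random-utility view of the MNL: each category carries a Gumbel-distributed
# utility and the chosen category is the argmax.
u0 = rng.gumbel(size=n)
u1 = rng.gumbel(size=n)

# The utility difference relative to a baseline category is standard logistic:
# mean 0 and variance pi^2 / 3.
d = u1 - u0
print(abs(d.mean()) < 0.02, abs(d.var() - np.pi**2 / 3) < 0.05)  # True True
```

Approximating this logistic distribution by a finite mixture of normals is what turns the MNL update into conditionally Gaussian steps inside the data-augmented samplers.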
Keeping the balance—Bridge sampling for marginal likelihood estimation in finite mixture, mixture of experts and Markov mixture models
Finite mixture models and their extensions to Markov mixture and mixture of experts models are very popular in analysing data of various kinds. A challenge for these models is choosing the number of components based on marginal likelihoods. The present paper suggests two innovative, generic bridge sampling estimators of the marginal likelihood that are based on constructing balanced importance densities from the conditional densities arising during Gibbs sampling. The full permutation bridge sampling estimator is derived from considering all possible permutations of the mixture labels for a subset of these densities. For the double random permutation bridge sampling estimator, two levels of random permutations are applied, first to permute the labels of the MCMC draws and second to randomly permute the labels of the conditional densities arising during Gibbs sampling. Various applications show very good performance of these estimators in comparison to importance and to reciprocal importance sampling estimators derived from the same importance densities.
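A minimal sketch of the bridge sampling identity that underlies these estimators, applied to a toy normalizing constant rather than to the permutation-balanced importance densities of the paper (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Unnormalized target q(x) = exp(-x^2/2); true normalizing constant sqrt(2*pi).
q = lambda x: np.exp(-0.5 * x**2)
# Normalized importance (bridge) density: N(0, 1.5^2).
sd_p = 1.5
p = lambda x: np.exp(-0.5 * (x / sd_p) ** 2) / (sd_p * np.sqrt(2 * np.pi))

n1 = n2 = 50_000
x1 = rng.standard_normal(n1)          # draws from the (normalized) target
x2 = sd_p * rng.standard_normal(n2)   # draws from the importance density

l1, l2 = q(x1) / p(x1), q(x2) / p(x2)
s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

# Meng-Wong iteration for the optimal bridge estimator of Z.
Z = 1.0
for _ in range(50):
    num = np.mean(l2 / (s1 * l2 / Z + s2))
    den = np.mean(1.0 / (s1 * l1 / Z + s2))
    Z = num / den

print(Z, np.sqrt(2 * np.pi))  # both approx 2.5066
```

The estimators in the paper apply this identity with importance densities built from Gibbs conditionals, balanced over label permutations so that they overlap well with the label-switching posterior of a mixture model.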
Applied State Space Modelling of Non-Gaussian Time Series using Integration-based Kalman-filtering
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
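A minimal sketch of one integration-based step: Gauss-Hermite quadrature yields the first two posterior moments of the linear predictor for a Poisson observation under a normal prior (values hypothetical, and without the automatic reparametrization used in the paper):

```python
import numpy as np

# One filtering step: Poisson observation y with log link, and a Gaussian
# prior eta ~ N(m, v) on the linear predictor (all values hypothetical).
m, v, y = 0.5, 0.4, 3

nodes, weights = np.polynomial.hermite.hermgauss(30)
eta = m + np.sqrt(2.0 * v) * nodes        # change of variables for N(m, v)
loglik = y * eta - np.exp(eta)            # Poisson log-likelihood (no constant)
w = weights * np.exp(loglik)
w /= w.sum()                              # normalized quadrature weights

# First two posterior moments of the linear predictor via Gauss-Hermite:
post_mean = np.sum(w * eta)
post_var = np.sum(w * (eta - post_mean) ** 2)
print(post_mean, post_var)
```

The data pull the posterior mean above the prior mean toward log y, and the posterior variance falls below the prior variance; the paper's result then maps these moments of the linear predictor back to moments of the full state vector.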
Vertex finding by sparse model-based clustering
The application of sparse model-based clustering to the problem of primary vertex finding is discussed. The observed z-positions of the charged primary tracks in a bunch crossing are modeled by a Gaussian mixture. The mixture parameters are estimated via Markov Chain Monte Carlo (MCMC). Sparsity is achieved by an appropriate prior on the mixture weights. The results are shown and compared to clustering by the expectation-maximization (EM) algorithm.
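For comparison, the EM baseline mentioned in the abstract can be sketched on simulated one-dimensional track positions (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated z-positions of tracks from two primary vertices (values hypothetical).
z = np.concatenate([rng.normal(-2.0, 0.05, 60), rng.normal(1.0, 0.05, 40)])

K = 2
w = np.full(K, 1.0 / K)
mu = np.array([-1.0, 0.5])        # rough starting values
var = np.full(K, 0.25)

for _ in range(100):
    # E-step: responsibility of each vertex for each track.
    dens = w * np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, vertex positions, and spreads.
    nk = r.sum(axis=0)
    w = nk / len(z)
    mu = (r * z[:, None]).sum(axis=0) / nk
    var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))  # approx [-2.0, 1.0]
```

Unlike EM, which needs the number of vertices fixed in advance, the sparse Bayesian approach of the paper lets superfluous mixture components empty out via the prior on the weights.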
Identifying Mixtures of Mixtures Using Bayesian Estimation
The use of a finite mixture of normal distributions in model-based clustering
makes it possible to capture non-Gaussian data clusters. However, identifying
the clusters from the normal components is challenging and in general either
achieved by imposing constraints on the model or by using post-processing
procedures. Within the Bayesian framework we propose a different approach based
on sparse finite mixtures to achieve identifiability. We specify a hierarchical
prior where the hyperparameters are carefully selected such that they reflect
the cluster structure aimed at. In addition, this prior allows the model to be
estimated using standard MCMC sampling methods. In combination with a
post-processing approach which resolves the label switching issue and results
in an identified model, our approach allows us to simultaneously (1) determine
the number of clusters, (2) flexibly approximate the cluster distributions in a
semi-parametric way using finite mixtures of normals, and (3) identify
cluster-specific parameters and classify observations. The proposed approach is
illustrated in two simulation studies and on benchmark data sets.
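The building block of the approach, a cluster that is itself a mixture of normal subcomponents, can be sketched as a two-level density (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# A two-level "mixture of mixtures" (all values hypothetical): each of two
# clusters is itself a mixture of two normals, giving non-Gaussian clusters.
outer_w = np.array([0.6, 0.4])                     # cluster weights
inner_w = np.array([[0.5, 0.5], [0.7, 0.3]])       # subcomponent weights per cluster
inner_mu = np.array([[-3.0, -1.5], [2.0, 3.5]])    # subcomponent means per cluster
inner_sd = np.array([[0.5, 0.5], [0.6, 0.6]])      # subcomponent sds per cluster

def density(x):
    """Evaluate the mixture-of-mixtures density at points x."""
    x = np.atleast_1d(x)[:, None, None]
    comp = np.exp(-0.5 * ((x - inner_mu) / inner_sd) ** 2) / (inner_sd * np.sqrt(2 * np.pi))
    return np.sum(outer_w[None, :, None] * inner_w[None, :, :] * comp, axis=(1, 2))

# The density integrates to one; clusters, not subcomponents, are the
# quantities of scientific interest.
grid = np.linspace(-8.0, 8.0, 4001)
integral = density(grid).sum() * (grid[1] - grid[0])
print(integral)  # approx 1.0
```

The identification problem the abstract addresses is which normal components belong to the same cluster; the hierarchical prior pulls subcomponents of a cluster together while keeping clusters apart.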