270 research outputs found
Classical and Bayesian Analysis of Univariate and Multivariate Stochastic Volatility Models
In this paper, Efficient Importance Sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate Stochastic Volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out maximum likelihood (ML) estimation of SV models as well as simulation smoothing, where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov Chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed. -- Dynamic Latent Variables, Markov Chain Monte Carlo, Maximum Likelihood, Simulation Smoother
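As a rough illustration of the kind of Monte Carlo integration EIS accelerates, the sketch below estimates the likelihood of a basic univariate SV model by sampling latent volatility paths from the AR(1) transition prior. This naive proposal is precisely what EIS improves upon (EIS fits an efficient auxiliary importance sampler instead); all function names and parameter values are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sv(T, mu=-1.0, phi=0.95, sigma=0.2):
    """Simulate returns from a basic univariate SV model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,  y_t = exp(h_t/2)*eps_t."""
    h = np.empty(T)
    h[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal()
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(T)
    return y, h

def loglik_is(y, mu, phi, sigma, n_paths=2000):
    """Naive MC estimate of the SV log-likelihood: latent volatility paths
    are drawn from the AR(1) transition prior (the crude baseline that an
    EIS-fitted proposal would replace)."""
    T = len(y)
    h = np.empty((n_paths, T))
    h[:, 0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(n_paths)
    for t in range(1, T):
        h[:, t] = mu + phi * (h[:, t - 1] - mu) + sigma * rng.standard_normal(n_paths)
    # p(y_t | h_t) = N(0, exp(h_t)); average the path likelihoods (log-sum-exp)
    logp = -0.5 * (np.log(2 * np.pi) + h + y**2 * np.exp(-h)).sum(axis=1)
    m = logp.max()
    return m + np.log(np.mean(np.exp(logp - m)))

y, _ = simulate_sv(50)
print(loglik_is(y, -1.0, 0.95, 0.2))
```

Because the prior proposal ignores the observations, the variance of this estimator grows quickly with the sample length; EIS's whole contribution is choosing a proposal that tracks the integrand and keeps that variance small.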
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand for rich models and quantification of uncertainty. Bayesian methods are an excellent fit for this demand, but scaling Bayesian inference is a challenge. In response to this challenge, there has been considerable recent work based on varying assumptions about model structure, underlying computational resources, and the importance of asymptotic correctness. As a result, there is a zoo of ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and intuitions for scaling Bayesian inference. We review existing work on utilizing modern computing resources with both MCMC and variational approximation techniques. From this taxonomy of ideas, we characterize the general principles that have proven successful for designing scalable inference procedures and comment on the path forward.
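One widely used pattern in this literature is minibatch MCMC. The sketch below, with assumed toy data and step sizes, implements Stochastic Gradient Langevin Dynamics (SGLD) for a simple Gaussian-mean problem: each update uses a gradient estimated from a small minibatch, rescaled to the full data size, plus injected Gaussian noise calibrated to the step size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: infer the mean theta of N(theta, 1) data under a N(0, 10^2)
# prior -- a large-N setting where full-data MCMC sweeps would be costly.
N = 100_000
data = rng.normal(2.0, 1.0, size=N)

def grad_log_post(theta, batch):
    """Stochastic gradient of the log posterior: minibatch log-likelihood
    gradient rescaled by N/|batch|, plus the prior gradient."""
    grad_lik = (batch - theta).sum() * (N / len(batch))
    grad_prior = -theta / 10.0**2
    return grad_lik + grad_prior

def sgld(n_iter=2000, batch_size=100, eps=1e-6):
    """SGLD: a gradient step on the minibatch estimate plus Gaussian
    noise with variance eps, so the chain targets the posterior
    (approximately, for small fixed step sizes)."""
    theta = 0.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        batch = data[rng.integers(0, N, size=batch_size)]
        theta += (0.5 * eps * grad_log_post(theta, batch)
                  + np.sqrt(eps) * rng.standard_normal())
        samples[i] = theta
    return samples

samples = sgld()
print(samples[-500:].mean())  # settles near the data mean, ~2.0
```

Each iteration touches only 100 of the 100,000 observations, which is the source of the scalability; the price, as the survey discusses, is asymptotic bias from the fixed step size and the discarded Metropolis correction.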
Eco-label Adoption in an Interdependent World
The growing popularity of national efforts to promote eco-labeling raises important questions. In particular, developing countries fear that eco-labels may be used to impose the environmental concerns of (high-income) importing countries on their production methods. Yet empirical studies of the adoption of eco-labelling schemes at the cross-country level are scarce due to limited data availability. In this paper, the decision to introduce an eco-label is analyzed through a heteroskedastic Bayesian spatial probit, which allows a government's decision to introduce an eco-label to be influenced by the behaviour of neighbouring countries. The estimation is performed by extending the joint updating approach proposed by Holmes & Held (2006) to a spatial framework. Empirical evidence highlights the importance of a high stage of development, innovation experience, and potential scale effects in the implementation of an eco-label scheme. In addition, the results confirm the existence of strategic interdependence in the eco-label decision. -- Bayesian Spatial Probit, International Trade, Environmental Policy, Eco-labelling
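For intuition about the Bayesian probit machinery underlying such models, the sketch below implements the classic Albert-Chib data-augmentation Gibbs sampler for a plain (non-spatial, homoskedastic) probit on simulated data. The paper's actual estimator extends the Holmes & Held joint-updating scheme to a spatial setting, which this sketch does not attempt; all data and prior settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy probit data: y_i = 1{x_i' beta + e_i > 0}, e_i ~ N(0, 1)
n, true_beta = 500, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ true_beta + rng.standard_normal(n) > 0).astype(int)

def gibbs_probit(X, y, n_iter=1000, tau2=100.0):
    """Albert-Chib Gibbs sampler: alternate between latent utilities
    z_i ~ N(x_i' beta, 1), truncated to the sign implied by y_i, and a
    conjugate Gaussian draw for beta under a N(0, tau2 I) prior."""
    n, p = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)  # posterior covariance
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mean = X @ beta
        # Truncated-normal draws by simple sign rejection (fine for toy data;
        # production code would use a proper truncated-normal sampler)
        z = mean + rng.standard_normal(n)
        bad = (z > 0) != (y == 1)
        while bad.any():
            z[bad] = mean[bad] + rng.standard_normal(bad.sum())
            bad = (z > 0) != (y == 1)
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[it] = beta
    return draws

draws = gibbs_probit(X, y)
print(draws[500:].mean(axis=0))  # posterior means, near true_beta
```

The Holmes & Held refinement the paper builds on updates z and beta jointly rather than in two conditional blocks, which reduces autocorrelation; the spatial extension additionally lets each country's latent utility depend on its neighbours' decisions.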
Multitarget tracking with interacting population-based MCMC-PF
In this paper we address the problem of tracking multiple targets from raw measurements by means of particle filtering. This strategy leads to high computational complexity as the number of targets increases, so an efficient implementation of the tracker is necessary. We propose a new multitarget Particle Filter (PF) that addresses this challenging problem. We call our filter the Interacting Population-based MCMC-PF (IP-MCMC-PF), since our approach is based on the parallel use of multiple population-based Metropolis-Hastings (M-H) samplers. Furthermore, to improve the chains' mixing properties, we exploit genetic-algorithm-like moves that allow interaction between the Markov Chain Monte Carlo (MCMC) chains. Simulation analyses show a dramatic reduction in computational time for a given tracking accuracy, and increased robustness compared with a conventional MCMC-based PF.
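The interaction idea can be illustrated, in a much simpler setting than multitarget tracking, by a population of random-walk M-H chains on a common bimodal target with an occasional crossover move that exchanges one coordinate between two chains and accepts the exchange jointly under the usual Metropolis ratio. This toy sketch (target, step sizes, and move probabilities are all assumptions) shows only the population/crossover mechanism, not the IP-MCMC-PF itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Bimodal 2D target: mixture of two unit-variance Gaussians."""
    a = -0.5 * ((x - 3.0)**2).sum()
    b = -0.5 * ((x + 3.0)**2).sum()
    m = max(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def interacting_mh(n_chains=8, n_iter=5000, step=0.5, p_cross=0.1):
    """Population of random-walk M-H chains on a shared target, with an
    occasional genetic-style crossover: two chains propose to exchange
    one coordinate, accepted jointly with the Metropolis ratio (the
    proposal is symmetric, so the joint chain stays valid)."""
    states = rng.standard_normal((n_chains, 2))
    logps = np.array([log_target(s) for s in states])
    samples = []
    for _ in range(n_iter):
        if rng.random() < p_cross:
            i, j = rng.choice(n_chains, size=2, replace=False)
            xi, xj = states[i].copy(), states[j].copy()
            d = rng.integers(2)                 # coordinate to exchange
            xi[d], xj[d] = xj[d], xi[d]
            li, lj = log_target(xi), log_target(xj)
            if np.log(rng.random()) < (li + lj) - (logps[i] + logps[j]):
                states[i], states[j] = xi, xj
                logps[i], logps[j] = li, lj
        else:
            for k in range(n_chains):           # independent M-H sweeps
                prop = states[k] + step * rng.standard_normal(2)
                lp = log_target(prop)
                if np.log(rng.random()) < lp - logps[k]:
                    states[k], logps[k] = prop, lp
        samples.append(states.copy())
    return np.concatenate(samples)

draws = interacting_mh()
print(draws.mean(axis=0))  # population average over all chains and iterations
```

Crossover lets a chain stuck near one mode inherit coordinates from a chain exploring the other, which is the mixing improvement the abstract attributes to the genetic-style moves.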
Scalable Inference of Customer Similarities from Interactions Data using Dirichlet Processes
Under the sociological theory of homophily, people who are similar to one another are more likely to interact with one another. Marketers often have access to data on interactions among customers from which, with homophily as a guiding principle, inferences could be made about the underlying similarities. However, larger networks face a quadratic explosion in the number of potential interactions that need to be modeled. This scalability problem renders probability models of social interactions computationally infeasible for all but the smallest networks. In this paper we develop a probabilistic framework for modeling customer interactions that is both grounded in the theory of homophily, and is flexible enough to account for random variation in who interacts with whom. In particular, we present a novel Bayesian nonparametric approach, using Dirichlet processes, to moderate the scalability problems that marketing researchers encounter when working with networked data. We find that this framework is a powerful way to draw insights into latent similarities of customers, and we discuss how marketers can apply these insights to segmentation and targeting activities.
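The Dirichlet process prior at the heart of such approaches can be illustrated by its Chinese restaurant process (CRP) representation, in which the number of latent customer segments grows with the data rather than being fixed in advance. The sketch below (concentration parameter and sizes are illustrative assumptions, not the paper's model) draws cluster assignments from a CRP:

```python
import numpy as np

rng = np.random.default_rng(4)

def crp_assignments(n, alpha=1.0):
    """Draw cluster labels for n customers from a Chinese restaurant
    process with concentration alpha: customer i joins an existing
    cluster with probability proportional to its current size, or opens
    a new cluster with probability proportional to alpha."""
    labels = [0]          # first customer starts cluster 0
    counts = [1]          # current cluster sizes
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)  # a brand-new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)

labels = crp_assignments(1000, alpha=2.0)
print(labels.max() + 1)  # cluster count grows roughly like alpha * log(n)
```

Because the "rich get richer" weights concentrate customers into a modest number of segments, inference scales with the number of clusters rather than with the quadratic number of pairwise interactions, which is the scalability lever the abstract describes.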
- …