
    Bayesian Inference on Dynamic Models with Latent Factors

    In time series analysis, latent factors are often introduced to model the heterogeneous time evolution of the observed processes. The presence of unobserved components makes maximum likelihood estimation more difficult to apply. A Bayesian approach can be preferable, since it allows general state space models to be treated and makes simulation-based parameter estimation and latent factor filtering easier. The paper examines economic time series models from a Bayesian perspective, focusing through some examples on the extraction of business cycle components. We briefly review some general univariate Bayesian dynamic models, discuss simulation-based techniques such as Gibbs sampling and adaptive importance sampling, and finally suggest the use of the particle filter for parameter estimation and latent factor extraction.
    Keywords: Bayesian Dynamic Models, Simulation Based Inference, Particle Filters, Latent Factors, Business Cycle
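    As a concrete illustration of the kind of simulation-based filtering discussed in this abstract, the sketch below runs a bootstrap particle filter on a toy univariate linear-Gaussian state space model with an AR(1) latent factor. The model, parameter values, and code are illustrative assumptions, not taken from the paper.

```python
# Minimal bootstrap particle filter sketch for a toy latent-factor model
# (illustrative assumptions; not the paper's model or code).
import numpy as np

rng = np.random.default_rng(42)

# Latent AR(1) factor: z_t = phi * z_{t-1} + sigma_z * eps_t
# Observation:         y_t = z_t + sigma_y * eta_t
phi, sigma_z, sigma_y, T = 0.9, 0.5, 1.0, 100

# Simulate data from the model.
z = np.zeros(T)
for t in range(1, T):
    z[t] = phi * z[t - 1] + sigma_z * rng.normal()
y = z + sigma_y * rng.normal(size=T)

# Bootstrap particle filter: propagate with the transition, weight with the likelihood.
N = 2000
particles = rng.normal(scale=sigma_z / np.sqrt(1 - phi**2), size=N)  # stationary prior
filtered_mean = np.zeros(T)
for t in range(T):
    particles = phi * particles + sigma_z * rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2            # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)                     # filtered estimate of z_t
    particles = rng.choice(particles, size=N, p=w)               # multinomial resampling

print("RMSE of filtered factor:", np.sqrt(np.mean((filtered_mean - z) ** 2)))
```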

    The adaptive patched cubature filter and its implementation

    There are numerous contexts in which one wishes to describe the state of a randomly evolving system. Effective solutions combine models that quantify the underlying uncertainty with available observational data to form scientifically reasonable estimates of the uncertainty in the system state. Stochastic differential equations are often used to model the underlying system mathematically. The Kusuoka-Lyons-Victoir (KLV) approach is a higher order particle method for approximating the weak solution of a stochastic differential equation; it uses a weighted set of scenarios to approximate the evolving probability distribution to a high order of accuracy. The algorithm can be performed by integrating along a number of carefully selected bounded variation paths. Iterated application of the KLV method tends to increase the number of particles. Once this is addressed, together with local dynamic recombination, which simplifies the support of the discrete measure without harming the accuracy of the approximation, the KLV method becomes suitable for the filtering problem in contexts where one wishes to maintain an accurate description of the ever-evolving conditioned measure. In addition to the alternating application of the KLV method and recombination, we exploit the smooth nature of the likelihood function and the high order accuracy of the approximations to lead some of the particles immediately to the next observation time and to build into the algorithm a form of automatic high order adaptive importance sampling.
    Comment: to appear in Communications in Mathematical Sciences. arXiv admin note: substantial text overlap with arXiv:1311.675
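    The recombination step mentioned in this abstract can be illustrated with a generic Caratheodory-style reduction of a weighted particle set that preserves low-order moments. The sketch below is our own minimal version of that idea, not the paper's patched cubature algorithm; the function name, tolerances, and moment basis are assumptions.

```python
# Generic moment-preserving recombination sketch (illustrative; not the paper's
# patched cubature filter). Reduces the support of a weighted 1-D particle set
# while keeping the moments x^0, ..., x^degree unchanged up to numerical tolerance.
import numpy as np

def recombine(points, weights, degree=3):
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    while len(w) > degree + 1:
        # Test-function matrix whose rows are x^0, x^1, ..., x^degree.
        Phi = np.vstack([pts ** k for k in range(degree + 1)])
        # Any null vector of Phi shifts the weights without changing the moments.
        v = np.linalg.svd(Phi)[2][-1]
        if not np.any(v > 1e-12):
            v = -v                       # make sure some entry is positive
        pos = v > 1e-12
        alpha = np.min(w[pos] / v[pos])  # largest step keeping all weights >= 0
        w = w - alpha * v                # drives at least one weight to zero
        keep = w > 1e-12
        pts, w = pts[keep], w[keep]
    return pts, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w = rng.normal(size=500), np.full(500, 1.0 / 500)
    xr, wr = recombine(x, w, degree=3)
    for k in range(4):                   # moments up to degree 3 should agree
        print(k, np.sum(w * x ** k), np.sum(wr * xr ** k))
```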

    Group Importance Sampling for Particle Filtering and MCMC

    Bayesian methods, and their implementation by means of sophisticated Monte Carlo techniques, have become very popular in signal processing in recent years. Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignment of a single weighted sample that compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS within the Sequential Importance Resampling framework and show that Independent Multiple Try Metropolis schemes can be interpreted as a standard Metropolis-Hastings algorithm following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first, the Group Metropolis Sampling method, produces a Markov chain of sets of weighted samples; all these sets are then employed to obtain a single global estimator. The second is the Distributed Particle Metropolis-Hastings technique, in which different parallel particle filters are jointly used to drive an MCMC algorithm: different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments, such as learning the hyperparameters of Gaussian Processes, two localization problems in a wireless sensor network (with synthetic and real data), and the tracking of vegetation parameters given satellite observations, where they are compared with several benchmark Monte Carlo techniques. Three illustrative Matlab demos are also provided.
    Comment: To appear in Digital Signal Processing. Related Matlab demos are provided at https://github.com/lukafree/GIS.gi
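    A minimal sketch of the compression idea described in this abstract, as we read it: a group of weighted importance samples is summarized by one representative resampled within the group, carrying the group's average unnormalized weight. The target, proposal, and weighting convention below are illustrative assumptions; the exact GIS weighting is defined in the paper.

```python
# Sketch of compressing a group of weighted samples into a single weighted sample
# (our reading of the abstract; illustrative target and proposal, not the paper's).
import numpy as np

rng = np.random.default_rng(1)

def unnorm_target(x):
    # Hypothetical unnormalized posterior: a two-component Gaussian mixture.
    return 0.5 * np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def compress_group(n, proposal_scale=4.0):
    """Plain importance sampling with n samples from a wide Gaussian proposal,
    compressed into a single (representative, group weight) pair."""
    x = rng.normal(scale=proposal_scale, size=n)
    q = np.exp(-0.5 * (x / proposal_scale) ** 2) / (proposal_scale * np.sqrt(2 * np.pi))
    w = unnorm_target(x) / q                # unnormalized IS weights
    rep = rng.choice(x, p=w / w.sum())      # resample one representative in the group
    return rep, w.mean()                    # group sample and group weight

# Many groups, each compressed to one weighted sample; the compressed set is then
# used as an ordinary weighted sample set for estimation.
groups = [compress_group(500) for _ in range(200)]
xs = np.array([g[0] for g in groups])
ws = np.array([g[1] for g in groups])
print("posterior mean estimate:", np.sum(ws * xs) / np.sum(ws))
```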