
    Global consensus Monte Carlo

    To conduct Bayesian inference with large data sets, it is often convenient or necessary to distribute the data across multiple machines. We consider a likelihood function expressed as a product of terms, each associated with a subset of the data. Inspired by global variable consensus optimisation, we introduce an instrumental hierarchical model associating auxiliary statistical parameters with each term, which are conditionally independent given the top-level parameters. One of these top-level parameters controls the unconditional strength of association between the auxiliary parameters. This model leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations. A trade-off between computational tractability and fidelity to the original model can be controlled by changing the association strength in the instrumental model. We further propose the use of an SMC sampler with a sequence of association strengths, allowing both automatic determination of appropriate strengths and the application of a bias correction technique. In contrast to similar distributed Monte Carlo algorithms, this approach requires few distributional assumptions. The performance of the algorithms is illustrated with a number of simulated examples.
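    The construction above can be illustrated with a small, self-contained sketch. The code below is not the paper's general algorithm: it assumes, purely for illustration, Gaussian kernels N(z_s; theta, lambda) linking each block's auxiliary parameter z_s to the top-level parameter theta, Gaussian data shards with known variance, and a flat prior on theta, so that both conditional updates of the resulting Gibbs sweep are available in closed form. The parameter lam plays the role of the association strength: smaller values tie the auxiliary parameters more tightly to theta, trading mixing speed for fidelity to the original model.

        # Toy sketch of a consensus-style Gibbs sampler on the extended state
        # space (theta, z_1, ..., z_S). Assumed for illustration: Gaussian
        # N(z_s; theta, lam) association kernels, Gaussian shard likelihoods
        # with known variance, and a flat prior on theta.
        import numpy as np

        rng = np.random.default_rng(0)
        sigma2 = 1.0                                   # known observation variance (toy choice)
        shards = [rng.normal(2.0, np.sqrt(sigma2), size=200) for _ in range(4)]
        lam = 0.05                                     # association strength

        def gibbs_sweep(theta, shards, lam, sigma2, rng):
            # "Worker" step: z_s | theta, y_s is Gaussian by conjugacy and
            # touches only the data in shard s, so it can run locally.
            zs = []
            for y in shards:
                prec = 1.0 / lam + len(y) / sigma2
                mean = (theta / lam + y.sum() / sigma2) / prec
                zs.append(rng.normal(mean, np.sqrt(1.0 / prec)))
            zs = np.array(zs)
            # "Master" step: theta | z_1:S under a flat prior is Gaussian,
            # centred at the average of the auxiliary parameters.
            theta = rng.normal(zs.mean(), np.sqrt(lam / len(zs)))
            return theta, zs

        theta, draws = 0.0, []
        for _ in range(5000):
            theta, _ = gibbs_sweep(theta, shards, lam, sigma2, rng)
            draws.append(theta)
        print("approximate posterior mean:", np.mean(draws[1000:]))

    In a genuinely distributed implementation, each z_s update would run on the machine holding shard s, with only theta and the z_s values communicated between machines; the SMC sampler mentioned above would then move through a sequence of association strengths rather than fixing lam.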

    Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future

    Regularization and Bayesian methods for system identification have been re-popularized in recent years and have proved competitive with classical parametric approaches. In this paper we attempt to illustrate how the use of regularization in system identification has evolved over the years, starting from the early contributions in the Automatic Control as well as the Econometrics and Statistics literature. In particular, we discuss some fundamental issues, such as compound estimation problems and exchangeability, which play an important role in regularization and Bayesian approaches, as also illustrated in early publications in Statistics. The historical and foundational issues are given more emphasis (and space), at the expense of the more recent developments, which are only briefly discussed. The main reason for this choice is that, while the recent literature is readily available and surveys have already been published on the subject, in the author's opinion a clear link with past work had not yet been completely established.
    Comment: Plenary presentation at IFAC SYSID 2015. Submitted to Annual Reviews in Control.

    Backstepping PDE Design: A Convex Optimization Approach

    Backstepping design for the boundary control of linear PDEs is formulated as a convex optimization problem. Some classes of parabolic PDEs and a first-order hyperbolic PDE are studied, with particular attention to non-strict feedback structures. Based on the compactness of the Volterra and Fredholm-type operators involved, their kernels are approximated by polynomial functions. The resulting kernel-PDEs are optimized using sum-of-squares (SOS) decomposition and solved via semidefinite programming, with sufficient precision to guarantee stability of the system in the L2-norm. This formulation allows extra degrees of freedom to be optimized, with the kernel-PDEs included as constraints. Uniqueness and invertibility of the Fredholm-type transformation are proved for polynomial kernels in the space of continuous functions. The effectiveness and limitations of the proposed approach are illustrated by numerical solutions of some kernel-PDEs.
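    The computational core referred to above is the reduction of a sum-of-squares condition to a semidefinite program. As a minimal, self-contained illustration of that reduction (not of the kernel-PDE formulation itself), the sketch below certifies that a fixed univariate polynomial p(x) is SOS by searching for a positive semidefinite Gram matrix Q with p(x) = m(x)^T Q m(x), where m(x) = [1, x, x^2]; the choice of modelling library (cvxpy with its default SDP solver) is an assumption made here for illustration.

        # Certify p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1 is a sum of squares by
        # finding Q >= 0 with p(x) = m(x)^T Q m(x) for m(x) = [1, x, x^2].
        # Matching coefficients of each power of x gives linear constraints on Q.
        import cvxpy as cp

        Q = cp.Variable((3, 3), PSD=True)       # symmetric PSD Gram matrix
        constraints = [
            Q[0, 0] == 1,                       # constant term
            2 * Q[0, 1] == 2,                   # x
            2 * Q[0, 2] + Q[1, 1] == 3,         # x^2
            2 * Q[1, 2] == 2,                   # x^3
            Q[2, 2] == 1,                       # x^4
        ]
        prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility SDP
        prob.solve()
        print(prob.status)                      # "optimal" means an SOS certificate was found
        print(Q.value)

    In the setting of the abstract, the unknowns would instead be the coefficients of the polynomial kernel approximations, the kernel-PDEs would enter as equality constraints of this kind, and the SOS conditions would be used to guarantee sufficient precision for L2 stability; the feasibility problem above is only the simplest instance of the same machinery.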