
    Efficient Sequential Monte-Carlo Samplers for Bayesian Inference

    In many problems, complex non-Gaussian and/or nonlinear models are required to describe a physical system of interest accurately. In such cases, Monte Carlo algorithms are remarkably flexible and powerful approaches to solving such inference problems. However, in the presence of a high-dimensional and/or multimodal posterior distribution, standard Monte Carlo techniques are widely documented to perform poorly. In this paper, we focus on the Sequential Monte Carlo (SMC) sampler framework, a more robust and efficient Monte Carlo algorithm. Although this approach presents many advantages over traditional Monte Carlo methods, the potential of this emergent technique remains largely underexploited in signal processing. In this work, we propose novel strategies that improve the efficiency and facilitate the practical implementation of the SMC sampler, specifically for signal processing applications. First, we propose an automatic and adaptive strategy that selects the sequence of distributions within the SMC sampler so as to minimize the asymptotic variance of the estimator of the posterior normalization constant. This is critical for performing model selection in Bayesian signal-processing applications. The second contribution improves the global efficiency of the SMC sampler by introducing a novel correction mechanism that allows the use of the particles generated throughout all iterations of the algorithm, instead of only the particles from the last iteration. This is significant because it removes the need to discard a large portion of the samples, as is standard in SMC methods, and it improves estimation performance in practical settings where the computational budget matters.
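    As a rough illustration of the ideas above, the following is a minimal tempered SMC sampler in Python. It is a sketch under simplifying assumptions, not the authors' algorithm: the next temperature is chosen by a common ESS-based bisection rule (a stand-in for the paper's variance-minimizing criterion for the normalization-constant estimator), and the final "recycling" step pools particles from every iteration by importance-reweighting each set toward the final target, a crude stand-in for the correction mechanism described. The toy one-dimensional prior/likelihood and all tuning constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional problem: N(0, 1) prior, Gaussian likelihood centred at 2.
log_prior = lambda x: -0.5 * x ** 2
log_lik = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2

def ess(logw):
    """Effective sample size of a set of unnormalized log-weights."""
    w = np.exp(logw - logw.max())
    return w.sum() ** 2 / (w ** 2).sum()

def smc_sampler(n=2000, ess_target=0.9, n_mcmc=5, step=0.5):
    """Minimal tempered SMC sampler with naive particle recycling (sketch only)."""
    x = rng.normal(size=n)                    # particles drawn from the prior
    beta = 0.0                                # inverse temperature of the current target
    history = []                              # (beta, particles) from every iteration

    while beta < 1.0:
        ll = log_lik(x)
        # Adaptive tempering: bisect for the next beta so that the ESS of the
        # incremental weights stays near ess_target * n (an ESS-based rule,
        # not the paper's variance-minimizing criterion).
        lo, hi = beta, 1.0
        while hi - lo > 1e-6:
            mid = 0.5 * (lo + hi)
            if ess((mid - beta) * ll) < ess_target * n:
                hi = mid
            else:
                lo = mid
        new_beta = hi
        logw = (new_beta - beta) * ll
        w = np.exp(logw - logw.max()); w /= w.sum()

        # Multinomial resampling, then a few random-walk Metropolis moves that keep
        # the tempered target pi_beta(x) ~ prior(x) * likelihood(x)**beta invariant.
        x = x[rng.choice(n, size=n, p=w)]
        for _ in range(n_mcmc):
            prop = x + step * rng.normal(size=n)
            log_acc = (log_prior(prop) + new_beta * log_lik(prop)
                       - log_prior(x) - new_beta * log_lik(x))
            x = np.where(np.log(rng.uniform(size=n)) < log_acc, prop, x)
        beta = new_beta
        history.append((beta, x.copy()))

    # Recycling: reweight every iteration's particles toward the final target and
    # pool the per-iteration estimates, each weighted by its effective sample size.
    # A crude stand-in for the correction mechanism proposed in the paper.
    estimates, weights = [], []
    for b, xi in history:
        lw = (1.0 - b) * log_lik(xi)          # pi_1 / pi_b, up to a constant
        w = np.exp(lw - lw.max()); w /= w.sum()
        estimates.append(np.sum(w * xi))
        weights.append(1.0 / np.sum(w ** 2))
    return np.average(estimates, weights=weights)

print("recycled posterior-mean estimate:", smc_sampler())
```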

    Inference for Differential Equation Models using Relaxation via Dynamical Systems

    Statistical regression models whose mean functions are represented by ordinary differential equations (ODEs) can be used to describe dynamic phenomena, which are abundant in areas such as biology, climatology and genetics. Estimating the parameters of ODE-based models is essential for understanding their dynamics, but the lack of an analytical solution of the ODE makes parameter estimation challenging. The aim of this paper is to propose a general and fast framework for statistical inference in ODE-based models by relaxation of the underlying ODE system. Relaxation is achieved by a properly chosen numerical procedure, such as a Runge-Kutta scheme, and by introducing additive Gaussian noise with small variance. Consequently, filtering methods can be applied to obtain the posterior distribution of the parameters in the Bayesian framework. The main advantage of the proposed method is computational speed: in a simulation study, it was at least 14 times faster than the other methods. Theoretical results are presented that guarantee the convergence of the posterior of the approximated dynamical system to the posterior of the true model. Explicit expressions are given that relate the order and the mesh size of the Runge-Kutta procedure to the rate of convergence of the approximated posterior as a function of sample size.
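    The relaxation idea can be sketched concretely: replace the deterministic solver map with a noisy transition (one Runge-Kutta step plus additive Gaussian noise with small variance), which turns the regression model into a state-space model, and then apply a filtering method. The sketch below is only an illustration, not the authors' procedure: it uses a bootstrap particle filter to approximate the marginal likelihood of a toy logistic-growth ODE and a grid over the single rate parameter to obtain its posterior; the model, the noise scales (relax_sd, obs_sd) and the grid are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rk4_step(f, x, dt, theta):
    """One classical fourth-order Runge-Kutta step of dx/dt = f(x, theta)."""
    k1 = f(x, theta)
    k2 = f(x + 0.5 * dt * k1, theta)
    k3 = f(x + 0.5 * dt * k2, theta)
    k4 = f(x + dt * k3, theta)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy model: logistic growth dx/dt = r * x * (1 - x / K) with K fixed at 10.
f = lambda x, r: r * x * (1.0 - x / 10.0)

# Simulated noisy observations at unit time intervals (illustrative data).
true_r, dt, n_obs, obs_sd = 0.8, 1.0, 15, 0.3
x_true = [1.0]
for _ in range(n_obs - 1):
    x_true.append(rk4_step(f, x_true[-1], dt, true_r))
y = np.array(x_true) + obs_sd * rng.normal(size=n_obs)

def log_marginal_lik(r, n_particles=500, relax_sd=0.05):
    """Bootstrap particle filter on the relaxed (noisy RK4) state-space model."""
    x = np.full(n_particles, 1.0)             # initial state assumed known, for simplicity
    logZ = 0.0
    for t in range(n_obs):
        if t > 0:
            # Relaxation: deterministic RK4 transition plus small Gaussian noise.
            x = rk4_step(f, x, dt, r) + relax_sd * rng.normal(size=n_particles)
        logw = -0.5 * ((y[t] - x) / obs_sd) ** 2
        logZ += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]   # resample
    return logZ

# Grid approximation to the posterior of r under a flat prior on [0.1, 2].
grid = np.linspace(0.1, 2.0, 40)
logp = np.array([log_marginal_lik(r) for r in grid])
post = np.exp(logp - logp.max()); post /= post.sum()
print("posterior mean of r:", np.sum(grid * post))
```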

    Parameter inference and model selection for differential equation models

    First, we consider the problem of estimating parameters of stochastic differential equations with discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter for the case where the transition density is unknown. We embed the auxiliary importance sampler in a penalized maximum likelihood framework, which produces more accurate and computationally efficient parameter estimates. Simulation studies on three different models illustrate promising improvements of the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case when some state variables are unobserved and, moreover, the observed states are sparse over time, which commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.

    Next, we consider the problem of selecting deterministic or stochastic models for a biological, ecological, or environmental dynamical process. In most cases, one prefers either deterministic or stochastic candidate models based on experience or subjective judgment. Because the likelihood is complex or intractable in most dynamical models, likelihood-based approaches to model selection are not suitable. We use approximate Bayesian computation for parameter estimation and model selection to gain further understanding of the dynamics of two epidemics of chronic wasting disease in mule deer. The main novel contribution of this work is that, under a hierarchical model framework, we compare three types of dynamical models: ordinary differential equation, continuous-time Markov chain, and stochastic differential equation models. To our knowledge, model selection among these types of models has not appeared previously. As the practice of incorporating dynamical models into data models becomes more common, the proposed approach may be useful in a variety of applications.

    Lastly, we consider estimation of parameters in nonlinear ordinary differential equation models with measurement error, where closed-form solutions are not available. We propose a new numerical algorithm, the data-driven adaptive mesh method, which combines the Euler and fourth-order Runge-Kutta methods with different step sizes based on the observation time points. Our results show that both the accuracy of parameter estimation and the computational cost of the new algorithm improve over the most widely used numerical algorithm, the fourth-order Runge-Kutta method. Moreover, the generalized profiling procedure proposed by Ramsay et al. (2007) does not perform well for data that are sparse in time, compared to the new approach. We illustrate our approach with both simulation studies and ecological data on intestinal microbiota.
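    To make the model-selection step concrete, here is a minimal rejection-ABC sketch that compares a deterministic (ODE-type) simulator against a stochastic (SDE-type) simulator for a logistic-growth process; for brevity it uses only two of the three model classes discussed above and omits the continuous-time Markov chain. The simulators, prior ranges, distance (root-mean-square error on the raw trajectory) and tolerance are illustrative assumptions, not the hierarchical setup of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

T, dt, K, obs_sd = 20, 1.0, 100.0, 2.0        # illustrative settings

def simulate_ode(r):
    """Deterministic logistic growth (Euler discretisation) plus observation noise."""
    x, path = 5.0, []
    for _ in range(T):
        x = x + dt * r * x * (1.0 - x / K)
        path.append(x)
    return np.array(path) + obs_sd * rng.normal(size=T)

def simulate_sde(r, sigma=0.5):
    """Stochastic logistic growth via Euler-Maruyama plus observation noise."""
    x, path = 5.0, []
    for _ in range(T):
        x = x + dt * r * x * (1.0 - x / K) + sigma * np.sqrt(dt) * x * rng.normal()
        x = max(x, 1e-6)
        path.append(x)
    return np.array(path) + obs_sd * rng.normal(size=T)

# "Observed" data, generated here from the stochastic model for illustration.
y_obs = simulate_sde(0.4)

def abc_model_selection(n_sims=5000, accept_frac=0.02):
    """Rejection ABC over (model index, parameter); keep the closest simulations."""
    models = [simulate_ode, simulate_sde]
    idx = rng.integers(0, 2, size=n_sims)      # uniform prior over the two models
    r = rng.uniform(0.05, 1.0, size=n_sims)    # prior on the growth rate
    dist = np.array([np.sqrt(np.mean((models[m](ri) - y_obs) ** 2))
                     for m, ri in zip(idx, r)])
    keep = dist <= np.quantile(dist, accept_frac)
    probs = np.bincount(idx[keep], minlength=2) / keep.sum()
    return {"P(ODE | data)": probs[0], "P(SDE | data)": probs[1]}

print(abc_model_selection())
```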

    Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy

    Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model building workflow is far from complete and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask and provide answers for what we perceive as two fundamental questions of Bayesian modeling, namely (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". As an answer to the first question, we propose the PAD model taxonomy that defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). As an answer to the second question, we propose ten utility dimensions according to which we can evaluate Bayesian models holistically, namely, (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing.
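    One possible way to picture the PAD taxonomy in code is as a small container whose optional components determine the kind of model. This is a hypothetical rendering of the abstract's description (the four kinds defined in the paper itself may be delimited differently), with all names and fields assumed for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class BayesianModel:
    """Hypothetical rendering of the PAD idea: a Bayesian model combines a joint
    distribution P with, optionally, a posterior approximator A and training data D."""
    joint: Callable[..., float]                  # P: e.g. log p(y, theta)
    approximator: Optional[Callable] = None      # A: e.g. an MCMC or VI routine
    data: Optional[Any] = None                   # D: observed training data

    def kind(self) -> str:
        # One possible reading of the "four basic kinds": P, PA, PD, and PAD.
        return ("P" + ("A" if self.approximator is not None else "")
                    + ("D" if self.data is not None else ""))
```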