
    Optimal designs for random effect models with correlated errors with applications in population pharmacokinetics

    We consider the problem of constructing optimal designs for population pharmacokinetic studies that use random effect models. It is common practice in the design of experiments in such studies to assume uncorrelated errors for each subject. In the present paper a new approach is introduced to determine efficient designs for nonlinear least squares estimation which addresses the problem of correlation between observations corresponding to the same subject. We use asymptotic arguments to derive optimal design densities, and the designs for finite sample sizes are constructed from the quantiles of the corresponding optimal distribution function. It is demonstrated that, compared to the optimal exact designs, whose determination is a hard numerical problem, these designs are very efficient. Alternatively, the designs derived from asymptotic theory could be used as starting designs for the numerical computation of exact optimal designs. Several examples of linear and nonlinear models are presented in order to illustrate the methodology. In particular, it is demonstrated that naively chosen equally spaced designs may lead to less accurate estimation. Comment: Published at http://dx.doi.org/10.1214/09-AOAS324 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
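    The quantile construction described above can be sketched in a few lines. The paper's optimal design density depends on the model and the error correlation; purely for illustration, suppose the asymptotic density were the arcsine density f(t) ∝ 1/√(t(T−t)) on a sampling window (0, T), whose inverse CDF is available in closed form (the density, window, and function names here are assumptions, not the paper's result):

```python
import math

def arcsine_quantile(u, T):
    # Inverse CDF of f(t) = 1/(pi*sqrt(t*(T-t))) on (0, T):
    # F(t) = (2/pi)*asin(sqrt(t/T)), so F^{-1}(u) = T*sin(pi*u/2)**2.
    return T * math.sin(math.pi * u / 2.0) ** 2

def design_points(n, T):
    # n sampling times from the quantiles u_i = (i - 0.5)/n
    # of the (assumed) optimal design density.
    return [arcsine_quantile((i - 0.5) / n, T) for i in range(1, n + 1)]

times = design_points(5, 24.0)  # e.g. a 24-hour sampling window
```

    Note that the resulting points cluster near both ends of the window, unlike a naively chosen equally spaced design.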

    A Box Particle Filter for Stochastic and Set-theoretic Measurements with Association Uncertainty

    This work develops a novel estimation approach for nonlinear dynamic stochastic systems by combining the sequential Monte Carlo method with interval analysis. Unlike common pointwise measurements, the proposed solution is for problems with interval measurements with association uncertainty. The optimal theoretical solution can be formulated in the framework of random set theory as the Bernoulli filter for interval measurements. The straightforward particle filter implementation of the Bernoulli filter typically requires a huge number of particles, since the posterior probability density function occupies a significant portion of the state space. In order to reduce the number of particles without necessarily sacrificing estimation accuracy, the paper investigates an implementation based on box particles. A box particle occupies a small and controllable rectangular region of non-zero volume in the target state space. The numerical results demonstrate that the filter performs remarkably well: both target state and target presence are estimated reliably using a very small number of box particles.
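    The core box-particle operations, contracting each box by an interval measurement and reweighting it by the consistent fraction, can be sketched in one dimension. The representation and function names below are illustrative assumptions, not the paper's implementation:

```python
def interval_overlap(a, b):
    # Length of the intersection of intervals a = (lo, hi) and b = (lo, hi).
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0.0, hi - lo)

def box_update(boxes, z):
    # Contract each box particle (interval, weight) with the interval
    # measurement z, reweight by the fraction of the box consistent with z,
    # and renormalize the surviving weights.
    out = []
    for (lo, hi), w in boxes:
        ov = interval_overlap((lo, hi), z)
        if ov > 0.0:
            out.append(((max(lo, z[0]), min(hi, z[1])), w * ov / (hi - lo)))
    total = sum(w for _, w in out)
    return [(b, w / total) for b, w in out] if total > 0.0 else out

posterior = box_update([((0.0, 2.0), 0.5), ((3.0, 5.0), 0.5)], (1.0, 4.0))
```

    Because each box covers a region of non-zero volume, a handful of boxes can represent a posterior that would require very many point particles.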

    Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families

    We propose Kernel Hamiltonian Monte Carlo (KMC), a gradient-free adaptive MCMC algorithm based on Hamiltonian Monte Carlo (HMC). On target densities where classical HMC is not an option due to intractable gradients, KMC adaptively learns the target's gradient structure by fitting an exponential family model in a Reproducing Kernel Hilbert Space. Computational costs are reduced by two novel efficient approximations to this gradient. While being asymptotically exact, KMC mimics HMC in terms of sampling efficiency, and offers substantial mixing improvements over state-of-the-art gradient-free samplers. We support our claims with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC. Comment: 20 pages, 7 figures.
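    KMC fits an exponential family model in an RKHS to estimate the target's score (the gradient of the log density). A much cruder stand-in for that idea, shown here only to make the mechanics concrete and not the paper's estimator, is the gradient of the log of a Gaussian kernel density estimate built from past samples; the bandwidth and function names are assumptions:

```python
import math

def kde_log_grad(x, samples, bw):
    # Gradient at x of the log of a Gaussian kernel density estimate:
    # a crude surrogate for the score when exact gradients are intractable.
    ws = [math.exp(-0.5 * ((x - s) / bw) ** 2) for s in samples]
    tot = sum(ws)
    return sum(w * (s - x) for w, s in zip(ws, samples)) / (tot * bw * bw)

def leapfrog(x, p, eps, steps, samples, bw):
    # Leapfrog integration of HMC dynamics driven by the surrogate gradient.
    p += 0.5 * eps * kde_log_grad(x, samples, bw)
    for k in range(steps):
        x += eps * p
        g = kde_log_grad(x, samples, bw)
        p += eps * g if k < steps - 1 else 0.5 * eps * g
    return x, p
```

    As in KMC, the surrogate gradient only shapes the proposal; a Metropolis accept/reject step with the true (unnormalized) target would keep the chain asymptotically exact.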

    Fitting Effective Diffusion Models to Data Associated with a "Glassy Potential": Estimation, Classical Inference Procedures and Some Heuristics

    A variety of researchers have successfully obtained the parameters of low dimensional diffusion models using the data that comes out of atomistic simulations. This naturally raises a variety of questions about efficient estimation, goodness-of-fit tests, and confidence interval estimation. The first part of this article uses maximum likelihood estimation to obtain the parameters of a diffusion model from a scalar time series. I address numerical issues associated with attempting to realize asymptotic statistics results with moderate sample sizes in the presence of exact and approximated transition densities. Approximate transition densities are used because the analytic solution of a transition density associated with a parametric diffusion model is often unknown. I am primarily interested in how well the deterministic transition density expansions of Ait-Sahalia capture the curvature of the transition density in (idealized) situations that occur when one carries out simulations in the presence of a "glassy" interaction potential. Accurate approximation of the curvature of the transition density is desirable because it can be used to quantify the goodness-of-fit of the model and to calculate asymptotic confidence intervals of the estimated parameters. The second part of this paper contributes a heuristic estimation technique for approximating a nonlinear diffusion model. A "global" nonlinear model is obtained by taking a batch of time series and applying simple local models to portions of the data. I demonstrate the technique on a diffusion model with a known transition density and on data generated by the Stochastic Simulation Algorithm. Comment: 30 pages, 10 figures. Submitted to SIAM MMS (typos removed and slightly shortened).
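    The first part of the abstract, maximum likelihood with an approximate transition density, can be illustrated with the simplest member of that family, the Euler-Maruyama Gaussian density, rather than the Ait-Sahalia expansions the paper studies. The Ornstein-Uhlenbeck model, parameter values, and grid search below are assumptions for illustration:

```python
import math, random

def simulate_ou(theta, sigma, dt, n, seed=0):
    # Euler-Maruyama simulation of dX = -theta*X dt + sigma dW.
    random.seed(seed)
    xs = [0.0]
    for _ in range(n):
        xs.append(xs[-1] - theta * xs[-1] * dt
                  + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    return xs

def euler_nll(theta, xs, sigma, dt):
    # Negative log-likelihood under the Euler (Gaussian) approximate
    # transition density: X_{k+1} | X_k ~ N(X_k - theta*X_k*dt, sigma^2*dt).
    v = sigma * sigma * dt
    nll = 0.0
    for a, b in zip(xs, xs[1:]):
        m = a - theta * a * dt
        nll += 0.5 * (math.log(2.0 * math.pi * v) + (b - m) ** 2 / v)
    return nll

xs = simulate_ou(theta=1.0, sigma=0.5, dt=0.1, n=2000)
grid = [0.1 * k for k in range(1, 31)]
theta_hat = min(grid, key=lambda th: euler_nll(th, xs, 0.5, 0.1))
```

    With moderate sample sizes the Euler estimate is already close to the truth here; the paper's concern is how much better higher-order expansions capture the curvature of the likelihood surface.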

    Nonlinear State-Space Models for Microeconometric Panel Data

    In applied microeconometric panel data analyses, time-constant random effects and first-order Markov chains are the most prevalent structures to account for intertemporal correlations in limited dependent variable models. An example from health economics shows that the addition of a simple autoregressive error term leads to a more plausible and parsimonious model which also captures the dynamic features better. The computational problems encountered in the estimation of such models, and of a broader class formulated in the framework of nonlinear state space models, hamper their widespread use. This paper discusses the application of different nonlinear filtering approaches developed in the time-series literature to these models and suggests that a straightforward algorithm based on sequential Gaussian quadrature can be expected to perform well in this setting. This conjecture is impressively confirmed by an extensive analysis of the example application.
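    Sequential Gaussian quadrature replaces intractable Gaussian integrals in the filtering recursions with sums over a small set of nodes. The basic ingredient, a 3-point Gauss-Hermite rule, looks like this (the nodes and weights are the standard ones; the application to a specific panel model is not shown):

```python
import math

# Standard 3-point Gauss-Hermite nodes and weights, normalized for N(0, 1).
GH3 = [(-math.sqrt(1.5), 1.0 / 6.0),
       (0.0, 2.0 / 3.0),
       (math.sqrt(1.5), 1.0 / 6.0)]

def gauss_hermite_expect(f, mean, var):
    # E[f(X)] for X ~ N(mean, var); exact for polynomials up to degree 5.
    return sum(w * f(mean + math.sqrt(2.0 * var) * s) for s, w in GH3)
```

    In a filtering recursion, f would be the product of the measurement density and the predicted state density, evaluated at each node per period.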

    Sequential Bayesian inference for static parameters in dynamic state space models

    A method for sequential Bayesian inference of the static parameters of a dynamic state space model is proposed. The method is based on the observation that many dynamic state space models have a relatively small number of static parameters (or hyper-parameters), so that in principle the posterior can be computed and stored on a discrete grid of practical size which can be tracked dynamically. Furthermore, the approach can use any existing methodology that computes the filtering and prediction distributions of the state process; the Kalman filter and its extensions to non-linear/non-Gaussian situations are used in this paper. This is illustrated using several applications: a linear Gaussian model, a binomial model, a stochastic volatility model and the extremely non-linear univariate non-stationary growth model. Performance is compared to both existing on-line and off-line methods.
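    The grid idea can be sketched for a scalar local-level model, where the filtering recursion is the Kalman filter and the single static parameter is the state noise variance q. The model, grid, prior, and initialization below are illustrative assumptions, not the paper's examples:

```python
import math, random

def kalman_step(m, P, y, q, r):
    # One predict/update step of a scalar local-level Kalman filter.
    # Returns the updated mean/variance and the log predictive density of y.
    Pp = P + q                      # predicted state variance
    S = Pp + r                      # innovation variance
    ll = -0.5 * (math.log(2.0 * math.pi * S) + (y - m) ** 2 / S)
    K = Pp / S                      # Kalman gain
    return m + K * (y - m), (1.0 - K) * Pp, ll

def grid_posterior(ys, q_grid, r):
    # Sequential posterior over the static parameter q on a fixed grid,
    # running one Kalman filter per grid point (uniform prior on the grid).
    logw = [0.0] * len(q_grid)
    states = [(0.0, 10.0)] * len(q_grid)
    for y in ys:
        for i, q in enumerate(q_grid):
            m, P, ll = kalman_step(states[i][0], states[i][1], y, q, r)
            states[i] = (m, P)
            logw[i] += ll
    mx = max(logw)
    w = [math.exp(l - mx) for l in logw]
    tot = sum(w)
    return [wi / tot for wi in w]

random.seed(1)
x, ys = 0.0, []
for _ in range(200):
    x += random.gauss(0.0, 1.0)            # state noise, true q = 1
    ys.append(x + random.gauss(0.0, 1.0))  # observation noise, r = 1
post = grid_posterior(ys, [0.01, 1.0, 100.0], 1.0)
```

    Each observation costs one filter step per grid point, so the scheme stays practical as long as the number of static parameters, and hence the grid size, is small.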

    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
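    The paper's extended EM handles uncertain (indirect) observations; the exact-observation base case it builds on, ordinary EM for a two-component mixture, fits in a few lines. The one-dimensional setting and fixed unit variances are simplifications assumed for illustration:

```python
import math

def em_gmm_1d(xs, mu1, mu2, iters=50):
    # EM for a two-component 1D Gaussian mixture with fixed unit variances,
    # estimating the two means and the mixing weight pi.
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in xs:
            a = pi * math.exp(-0.5 * (x - mu1) ** 2)
            b = (1.0 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing weight.
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
        pi = n1 / len(xs)
    return mu1, mu2, pi

data = [-0.2, 0.1, 0.0, 4.9, 5.1, 5.0]
m1, m2, p = em_gmm_1d(data, -1.0, 6.0)
```

    The paper's extension replaces each exact point with an uncertain observation (itself a density), so the E-step responsibilities integrate over the observation model instead of evaluating it at a single x.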