
    A Bernstein-von Mises theorem in the nonparametric right-censoring model

    In the recent Bayesian nonparametric literature, many examples have been reported in which Bayesian estimators and posterior distributions do not achieve the optimal convergence rate, indicating that the Bernstein-von Mises theorem does not hold. In this article, we give a positive result in this direction by showing that the Bernstein-von Mises theorem holds in survival models for a large class of prior processes neutral to the right. We also show that, for an arbitrarily given convergence rate n^{-\alpha} with 0 < \alpha \leq 1/2, a prior process neutral to the right can be chosen so that its posterior distribution achieves the convergence rate n^{-\alpha}.
    Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000052
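    For orientation, the classical (parametric) form of the Bernstein-von Mises theorem referred to above can be stated as follows; the paper proves a nonparametric analogue of this in survival models. This is the generic textbook statement, not the paper's own theorem:

    ```latex
    % Parametric Bernstein-von Mises theorem: under regularity conditions,
    % the posterior is asymptotically normal in total variation, centered at
    % an efficient estimator \hat\theta_n, with covariance given by the
    % inverse Fisher information I(\theta_0)^{-1} scaled by 1/n:
    \[
      \bigl\| \Pi\bigl(\cdot \mid X_1,\dots,X_n\bigr)
        - N\bigl(\hat\theta_n,\, n^{-1} I(\theta_0)^{-1}\bigr) \bigr\|_{TV}
      \;\xrightarrow{P_{\theta_0}}\; 0 .
    \]
    ```

    When this holds, Bayesian credible sets are asymptotically valid frequentist confidence sets; the negative examples cited in the abstract are nonparametric settings where this equivalence breaks down.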

    Inference for Differential Equation Models using Relaxation via Dynamical Systems

    Statistical regression models whose mean functions are represented by ordinary differential equations (ODEs) can be used to describe phenomena that are dynamical in nature, which are abundant in areas such as biology, climatology and genetics. The estimation of the parameters of ODE-based models is essential for understanding their dynamics, but the lack of an analytical solution of the ODE makes parameter estimation challenging. The aim of this paper is to propose a general and fast framework of statistical inference for ODE-based models by relaxation of the underlying ODE system. Relaxation is achieved by a properly chosen numerical procedure, such as a Runge-Kutta method, and by introducing additive Gaussian noise with small variances. Consequently, filtering methods can be applied to obtain the posterior distribution of the parameters in the Bayesian framework. The main advantage of the proposed method is computational speed. In a simulation study, the proposed method was at least 14 times faster than the other methods. Theoretical results that guarantee the convergence of the posterior of the approximated dynamical system to the posterior of the true model are presented. Explicit expressions are given that relate the order and the mesh size of the Runge-Kutta procedure to the rate of convergence of the approximated posterior as a function of sample size.
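    The relaxation idea above can be sketched in a few lines: discretize the ODE with a Runge-Kutta step, add small Gaussian "relaxation" noise to turn it into a state-space model, then use a filtering method to evaluate the likelihood of a parameter value. This is a minimal illustration, not the paper's implementation; the exponential-decay ODE, the noise scales, and the bootstrap particle filter are all assumptions chosen for the sketch:

    ```python
    import numpy as np

    def rk4_step(f, x, theta, dt):
        """One 4th-order Runge-Kutta step for dx/dt = f(x, theta)."""
        k1 = f(x, theta)
        k2 = f(x + 0.5 * dt * k1, theta)
        k3 = f(x + 0.5 * dt * k2, theta)
        k4 = f(x + dt * k3, theta)
        return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def relaxed_loglik(f, theta, y, dt, tau, sigma, n_particles=200, seed=0):
        """Bootstrap-particle-filter estimate of the log-likelihood of theta
        under the relaxed model:
            x_{k+1} = RK4(x_k, theta) + N(0, tau^2)   (small relaxation noise)
            y_k     = x_k + N(0, sigma^2)             (observation noise)
        """
        rng = np.random.default_rng(seed)
        particles = np.full(n_particles, y[0]) + sigma * rng.standard_normal(n_particles)
        loglik = 0.0
        for k in range(1, len(y)):
            # propagate particles through the relaxed (noisy) RK4 dynamics
            particles = rk4_step(f, particles, theta, dt) \
                + tau * rng.standard_normal(n_particles)
            # weight by the Gaussian observation density of y[k]
            w = np.exp(-0.5 * ((y[k] - particles) / sigma) ** 2)
            loglik += np.log(w.mean() + 1e-300) - 0.5 * np.log(2 * np.pi * sigma**2)
            # multinomial resampling to avoid weight degeneracy
            particles = rng.choice(particles, size=n_particles, p=w / w.sum())
        return loglik

    # toy example: exponential decay dx/dt = -theta * x, observed with noise
    f = lambda x, theta: -theta * x
    rng = np.random.default_rng(1)
    dt, theta_true = 0.1, 1.5
    x = 2.0 * np.exp(-theta_true * dt * np.arange(50))
    y = x + 0.05 * rng.standard_normal(50)

    ll_true = relaxed_loglik(f, theta_true, y, dt, tau=0.01, sigma=0.05)
    ll_bad = relaxed_loglik(f, 0.2, y, dt, tau=0.01, sigma=0.05)
    ```

    The returned log-likelihood can then be plugged into any Bayesian sampler over theta; the filtering step is what replaces repeated exact ODE solving, which is where the speedup comes from.
    
    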

    Asymptotic Properties for Bayesian Neural Network in Besov Space

    Neural networks have shown great predictive power when dealing with various kinds of unstructured data such as images and natural language. A Bayesian neural network captures the uncertainty of its predictions by placing a prior distribution on the parameters of the model and computing the posterior distribution. In this paper, we show that a Bayesian neural network with a spike-and-slab prior is consistent with a nearly minimax convergence rate when the true regression function lies in a Besov space. Even when the smoothness of the regression function is unknown, the same posterior convergence rate holds, so the spike-and-slab prior is adaptive to the smoothness of the regression function. We also consider a shrinkage prior, which is more computationally feasible than other priors, and show that it attains the same convergence rate. In other words, we propose a practical Bayesian neural network with guaranteed asymptotic properties.
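    A spike-and-slab prior on network weights sets each weight exactly to zero with some probability (the "spike") and otherwise draws it from a continuous distribution (the "slab"), which induces the sparse networks behind the adaptivity result above. A minimal sketch of one prior draw, with the inclusion probability, slab scale, and layer sizes chosen arbitrarily for illustration:

    ```python
    import numpy as np

    def sample_spike_and_slab(shape, pi=0.2, slab_sd=1.0, rng=None):
        """Draw weights from a spike-and-slab prior: each weight is exactly 0
        with probability 1 - pi (the spike) and N(0, slab_sd^2) with
        probability pi (the slab)."""
        if rng is None:
            rng = np.random.default_rng(0)
        slab = slab_sd * rng.standard_normal(shape)
        active = rng.random(shape) < pi  # Bernoulli(pi) inclusion mask
        return np.where(active, slab, 0.0)

    def relu_net(x, weights):
        """Forward pass of a small ReLU network with the sampled weights."""
        h = x
        for W in weights[:-1]:
            h = np.maximum(h @ W, 0.0)
        return h @ weights[-1]

    rng = np.random.default_rng(42)
    sizes = [(1, 64), (64, 64), (64, 1)]
    weights = [sample_spike_and_slab(s, pi=0.2, rng=rng) for s in sizes]

    # with pi = 0.2, about 80% of the weights are exactly zero
    sparsity = np.mean(np.concatenate([W.ravel() for W in weights]) == 0.0)
    y_out = relu_net(np.linspace(-1.0, 1.0, 10).reshape(-1, 1), weights)
    ```

    Posterior inference then averages such sparse networks weighted by the data; the shrinkage prior mentioned in the abstract replaces the exact-zero spike with a continuous density concentrated near zero, which is easier to sample.
    
    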