
    Bayesian Regression of Piecewise Constant Functions

    We derive an exact and efficient Bayesian regression algorithm for piecewise constant functions of unknown segment number, boundary locations, and levels. It works for any noise and segment-level prior, e.g. a Cauchy prior, which can handle outliers. We derive simple but good estimates for the in-segment variance. We also propose a Bayesian regression curve as a better way of smoothing data without blurring boundaries. The Bayesian approach also allows straightforward determination of the evidence, break probabilities, and error estimates, useful for model selection and for significance and robustness studies. We discuss the performance on synthetic and real-world examples, and many possible extensions.
    Comment: 27 pages, 18 figures, 1 table, 3 algorithms
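    A minimal sketch of the core idea, assuming Gaussian noise with known variance, a conjugate Normal prior on segment levels, and a crude geometric segmentation prior (the paper handles general priors, e.g. Cauchy, and also derives break probabilities and regression curves): the evidence is computed exactly by a dynamic program over the position of the last segment boundary.

        import numpy as np

        def log_seg_marginal(y, sigma=1.0, tau=10.0):
            # Marginal likelihood of one segment: Normal(mu, sigma^2) noise with
            # the level mu integrated out under a Normal(0, tau^2) prior.
            n = len(y)
            s, ss = y.sum(), (y ** 2).sum()
            logdet = n * np.log(sigma ** 2) + np.log1p(n * tau ** 2 / sigma ** 2)
            quad = ss / sigma ** 2 - tau ** 2 * s ** 2 / (sigma ** 2 * (sigma ** 2 + n * tau ** 2))
            return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

        def log_evidence(y, log_p_seg=np.log(0.1)):
            # logE[j] = log evidence of y[:j], summed over all segmentations;
            # each new segment pays log_p_seg (an assumed geometric prior).
            n = len(y)
            logE = np.full(n + 1, -np.inf)
            logE[0] = 0.0
            for j in range(1, n + 1):
                terms = [logE[i] + log_p_seg + log_seg_marginal(y[i:j]) for i in range(j)]
                logE[j] = np.logaddexp.reduce(terms)
            return logE[n]

        rng = np.random.default_rng(1)
        y = np.concatenate([np.zeros(50), 3 * np.ones(50)]) + 0.5 * rng.standard_normal(100)
        print(log_evidence(y))  # compare against a single-segment model to score the break

    The same recursion run backwards yields the per-position break probabilities mentioned in the abstract; the function and parameter names here are illustrative, not the paper's.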

    Locally adaptive smoothing with Markov random fields and shrinkage priors

    We present a locally adaptive nonparametric curve fitting method that operates within a fully Bayesian framework. The method uses shrinkage priors to induce sparsity in the order-k differences of the latent trend function, providing a combination of local adaptation and global control. Using a scale-mixture-of-normals representation of the shrinkage priors, we make explicit connections between our method and kth-order Gaussian Markov random field smoothing. We call the resulting processes shrinkage prior Markov random fields (SPMRFs). We use Hamiltonian Monte Carlo to approximate the posterior distribution of the model parameters, since it performs well in the presence of the high dimensionality and strong parameter correlations exhibited by our models. We compare the performance of three prior formulations on simulated data and find that the horseshoe prior provides the best compromise between bias and precision. We apply SPMRF models to two benchmark data examples frequently used to test nonparametric methods. We find that the method is flexible enough to accommodate a variety of data-generating models and offers the adaptive properties and computational tractability to make it a useful addition to the Bayesian nonparametric toolbox.
    Comment: 38 pages, to appear in Bayesian Analysis
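    The scale-mixture connection in the abstract can be made concrete in a few lines: conditional on the local shrinkage scales, the prior on the trend is a kth-order Gaussian Markov random field, so under Gaussian noise the conditional posterior mean of the trend has a closed form. A sketch under those assumptions (names are illustrative; the authors fit the full model, with heavy-tailed priors on the scales, by HMC):

        import numpy as np

        def diff_matrix(n, k):
            # k-th order forward-difference matrix D of shape (n - k, n).
            D = np.eye(n)
            for _ in range(k):
                D = D[1:] - D[:-1]
            return D

        def conditional_trend_mean(y, lam, tau, sigma, k=2):
            # Conditional on local scales lam (length n - k) and global scale tau,
            # the shrinkage prior is a GMRF with precision D' diag(1/(tau*lam)^2) D.
            n = len(y)
            D = diff_matrix(n, k)
            Q = D.T @ np.diag(1.0 / (tau * lam) ** 2) @ D  # prior precision (rank n - k)
            A = Q + np.eye(n) / sigma ** 2                 # posterior precision
            return np.linalg.solve(A, y / sigma ** 2)      # posterior mean of the trend

        rng = np.random.default_rng(2)
        y = np.r_[np.zeros(40), np.linspace(0, 2, 20), 2 * np.ones(40)] + 0.1 * rng.standard_normal(100)
        theta_hat = conditional_trend_mean(y, lam=np.ones(98), tau=0.05, sigma=0.1, k=2)

    Heavy-tailed draws for lam (half-Cauchy, in the horseshoe case) let a few differences stay large, which is what preserves sharp features that a constant-scale GMRF would smooth away.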

    Nonparametric Bayesian hazard rate models based on penalized splines

    Extensions of the traditional Cox proportional hazards model concerning the following features are often desirable in applications: simultaneous nonparametric estimation of the baseline hazard and the usual fixed covariate effects; modelling and detection of time-varying covariate effects and nonlinear functional forms of metrical covariates; and inclusion of frailty components. In this paper, we develop Bayesian multiplicative hazard rate models for survival and event history data that deal with these issues in a flexible and unified framework. Some simpler models, such as piecewise exponential models with a smoothed baseline hazard, are covered as special cases. Embedded in the counting process approach, nonparametric estimation of unknown nonlinear functional effects of time or covariates is based on Bayesian penalized splines. Inference is fully Bayesian and uses recent MCMC sampling schemes. Smoothing parameters are an integral part of the model and are estimated automatically. We investigate the performance of our approach through simulation studies and illustrate it with a real data application.
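    As a concrete illustration of the penalized-spline ingredient, here is a sketch of the log posterior for a baseline log-hazard modelled with cubic B-splines and a second-order random-walk prior, with a fixed smoothing parameter lam (an assumption made for brevity; in the paper the smoothing parameters are estimated automatically within the MCMC, and covariate effects and frailties enter the same multiplicative structure). Function names and the quadrature shortcut are mine, not the paper's.

        import numpy as np
        from scipy.interpolate import BSpline

        def spline_basis(x, knots, degree=3):
            # Evaluate all B-spline basis functions at x: shape (len(x), m).
            m = len(knots) - degree - 1
            return BSpline(knots, np.eye(m), degree)(x)

        def log_posterior(beta, times, events, knots, lam, degree=3, ngrid=100):
            # log h(t) = B(t) @ beta; the survival log-likelihood needs the
            # cumulative hazard H(t_i) = int_0^{t_i} exp(B(s) @ beta) ds,
            # approximated here by trapezoidal quadrature.
            log_h = spline_basis(times, knots, degree) @ beta
            H = np.empty_like(times)
            for i, t in enumerate(times):
                s = np.linspace(0.0, t, ngrid)
                H[i] = np.trapz(np.exp(spline_basis(s, knots, degree) @ beta), s)
            loglik = np.sum(events * log_h - H)
            D2 = np.diff(np.eye(len(beta)), n=2, axis=0)   # second-difference matrix
            return loglik - 0.5 * lam * beta @ (D2.T @ D2) @ beta  # RW2 prior

        T, rng = 10.0, np.random.default_rng(3)
        knots = np.r_[np.zeros(3), np.linspace(0, T, 8), np.full(3, T)]  # clamped cubic knots
        times, events = rng.uniform(0.1, T, 50), rng.integers(0, 2, 50)
        print(log_posterior(np.zeros(len(knots) - 4), times, events, knots, lam=10.0))

    This log posterior is what an MCMC sampler would target; larger lam enforces a smoother baseline hazard, and lam -> infinity recovers a log-linear hazard.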

    Bayesian nonparametric multivariate convex regression

    In many applications, such as economics, operations research, and reinforcement learning, one often needs to estimate a multivariate regression function f subject to a convexity constraint. For example, in sequential decision processes the value of a state under optimal subsequent decisions may be known to be convex or concave. We propose a new Bayesian nonparametric multivariate approach based on characterizing the unknown regression function as the max of a random collection of unknown hyperplanes. This specification induces a prior with large support in a Kullback-Leibler sense on the space of convex functions, while also leading to strong posterior consistency. Although we assume that f is defined over R^p, we show that this model has a convergence rate of log(n)^{-1} n^{-1/(d+2)} under the empirical L2 norm when f actually maps a d-dimensional linear subspace to R. We design an efficient reversible jump MCMC algorithm for posterior computation and demonstrate the method through an application to value function approximation.
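    The prior itself is easy to sketch: draw a random set of hyperplanes and take their pointwise maximum, which is convex by construction (Gaussian priors on intercepts and slopes are an illustrative assumption; the paper additionally treats the number of hyperplanes as unknown):

        import numpy as np

        def draw_prior_function(p, K, rng, scale=1.0):
            # One draw from a max-of-hyperplanes prior on convex functions.
            a = rng.normal(0.0, scale, size=K)          # intercepts
            B = rng.normal(0.0, scale, size=(K, p))     # slopes
            return lambda X: np.max(a + X @ B.T, axis=-1)  # pointwise max is convex

        rng = np.random.default_rng(0)
        f = draw_prior_function(p=2, K=5, rng=rng)
        X = rng.normal(size=(4, 2))
        print(f(X))  # one random convex function evaluated at 4 points

    Posterior computation then amounts to moving, adding, and deleting hyperplanes so that the max fits the data, which is where the reversible jump sampler comes in.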