496 research outputs found

    Control Variates for Reversible MCMC Samplers

    A general methodology is introduced for the construction and effective application of control variates to estimation problems involving data from reversible MCMC samplers. We propose the use of a specific class of functions as control variates, and we introduce a new, consistent estimator for the values of the coefficients of the optimal linear combination of these functions. The form and proposed construction of the control variates is derived from our solution of the Poisson equation associated with a specific MCMC scenario. The new estimator, which can be applied to the same MCMC sample, is derived from a novel, finite-dimensional, explicit representation for the optimal coefficients. The resulting variance-reduction methodology is primarily applicable when the simulated data are generated by a conjugate random-scan Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that the corresponding reduction in the estimation variance is significant, and that in some cases it can be quite dramatic. Extensions of this methodology in several directions are given, including certain families of Metropolis-Hastings samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding simulation examples are presented illustrating the utility of the proposed methods. All methodological and asymptotic arguments are rigorously justified under easily verifiable and essentially minimal conditions. Comment: 44 pages; 6 figures; 5 tables.
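The generic variance-reduction mechanism behind control variates can be sketched outside the MCMC setting: the optimal coefficient Cov(f, g)/Var(g) is estimated from the same sample used for the main estimate, mirroring the paper's single-run estimation of the optimal coefficients. A minimal Python sketch, assuming a standard normal target, f(x) = exp(x), and the zero-mean control variate g(x) = x (an illustrative choice, not the paper's Poisson-equation construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: estimate E[exp(X)] for X ~ N(0, 1); the true value is exp(1/2).
# Control variate g(X) = X has known mean 0 under the target.
x = rng.normal(size=200_000)
f = np.exp(x)
g = x

# Estimate the optimal coefficient Cov(f, g) / Var(g) from the same sample.
theta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)

est_plain = f.mean()
est_cv = (f - theta * g).mean()  # g already has known mean 0

# Variances of the two estimators.
var_plain = f.var(ddof=1) / len(f)
var_cv = (f - theta * g).var(ddof=1) / len(f)
```

Here the control variate removes the linear component of exp(X), cutting the estimator variance by more than half; in the reversible-MCMC setting, the paper's Poisson-equation construction plays the role of supplying good functions g.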

    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.
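The down-weighting effect can be illustrated with ordinary weighted least squares rather than the full Bayesian machinery: time points with inflated noise variance receive weights 1/sigma^2 and so contribute little to the fit. A toy sketch with a known variance profile and hypothetical "spike" time points (the GLMH instead learns the variance from the data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.arange(n)
# Toy design: intercept plus a periodic "stimulus" regressor.
design = np.column_stack([np.ones(n), np.sin(2 * np.pi * t / 40)])
beta_true = np.array([1.0, 0.5])

# Motion spikes inflate the noise standard deviation at a few time points.
sigma = np.full(n, 0.3)
spikes = rng.choice(n, size=10, replace=False)
sigma[spikes] = 3.0
y = design @ beta_true + rng.normal(scale=sigma)

# Homoscedastic OLS vs. heteroscedastic weighted LS (weights = 1/sigma^2);
# the latter mimics the automatic down-weighting of spike time points.
beta_ols = np.linalg.lstsq(design, y, rcond=None)[0]
w = 1.0 / sigma**2
beta_wls = np.linalg.lstsq(design * np.sqrt(w)[:, None],
                           y * np.sqrt(w), rcond=None)[0]
```

With the weights known, the spike time points barely influence the estimate; the GLMH achieves a similar effect automatically by modelling the variance as a function of covariates.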

    Computer model calibration with large non-stationary spatial outputs: application to the calibration of a climate model

    Bayesian calibration of computer models tunes unknown input parameters by comparing outputs with observations. For model outputs that are distributed over space, this becomes computationally expensive because of the output size. To overcome this challenge, we first employ a basis representation of the model outputs and observations, matching these decompositions to carry out the calibration efficiently. In a second step, we incorporate non-stationary behaviour, in terms of spatial variations of both variance and correlations, into the calibration by inserting two parameters of an integrated nested Laplace approximation-stochastic partial differential equation (INLA-SPDE) model. A synthetic example and a climate model illustration highlight the benefits of our approach.
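The basis-representation step can be sketched with an empirical (SVD) basis: high-dimensional spatial fields are replaced by a handful of basis coefficients, and calibration then compares low-dimensional coefficient vectors instead of full fields. A minimal sketch with synthetic fields (the basis choice and the INLA-SPDE non-stationarity step are beyond this toy example):

```python
import numpy as np

rng = np.random.default_rng(2)
# Ensemble of simulated spatial outputs: 30 runs, each a field on 1000 cells.
ensemble = rng.normal(size=(30, 1000)).cumsum(axis=1)  # smooth-ish toy fields
mean_field = ensemble.mean(axis=0)
centred = ensemble - mean_field

# SVD of the centred ensemble gives an empirical orthonormal basis;
# keep only the leading components.
_, s, vt = np.linalg.svd(centred, full_matrices=False)
k = 5
basis = vt[:k]

# A noisy "observation" projected onto the same basis: the calibration
# then works with the length-k coefficient vector, not the full field.
obs = ensemble[0] + rng.normal(scale=0.1, size=1000)
coeffs = basis @ (obs - mean_field)
```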

    Connecting the Dots: Towards Continuous Time Hamiltonian Monte Carlo

    Continuous time Hamiltonian Monte Carlo is introduced as a powerful alternative to Markov chain Monte Carlo methods for continuous target distributions. The method is constructed in two steps: First, Hamiltonian dynamics are chosen as the deterministic dynamics in a continuous time piecewise deterministic Markov process. Under very mild restrictions, such a process will have the desired target distribution as an invariant distribution. Second, the numerical implementation of such processes, based on adaptive numerical integration of second order ordinary differential equations, is considered. The numerical implementation yields an approximate, yet highly robust algorithm that, unlike conventional Hamiltonian Monte Carlo, enables the exploitation of the complete Hamiltonian trajectories (hence the title). The proposed algorithm may yield large speedups and improvements in stability relative to relevant benchmarks, while incurring numerical errors that are negligible relative to the overall Monte Carlo errors.
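For contrast with the adaptive integration proposed here, conventional Hamiltonian Monte Carlo advances the dynamics with a fixed-step leapfrog integrator and uses only the trajectory endpoint. A minimal leapfrog sketch for a standard Gaussian target, where near-conservation of the Hamiltonian H = U(q) + p^2/2 can be checked directly:

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integration of Hamiltonian dynamics with H = U(q) + p^2/2."""
    p = p - 0.5 * step * grad_U(q)        # initial half-step for momentum
    for _ in range(n_steps - 1):
        q = q + step * p                  # full position step
        p = p - step * grad_U(q)          # full momentum step
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)        # final half-step for momentum
    return q, p

# Standard Gaussian target: U(q) = q^2 / 2, so grad_U(q) = q.
grad_U = lambda q: q
q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, grad_U, step=0.01, n_steps=1000)

# Energy drift is O(step^2) and should be tiny for this step size.
H0 = 0.5 * q0**2 + 0.5 * p0**2
H1 = 0.5 * q1**2 + 0.5 * p1**2
```

Conventional HMC would accept or reject the endpoint (q1, p1) with a Metropolis correction; the continuous-time method described above instead exploits the whole trajectory.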

    Analysis of State-Independent Importance-Sampling Measures for the Two-Node Tandem Queue

    We investigate the simulation of overflow of the total population of a Markovian two-node tandem queue model during a busy cycle, using importance sampling with a state-independent change of measure. We show that the only such change of measure that may possibly result in asymptotically efficient simulation for large overflow levels is exchanging the arrival rate with the smallest service rate. For this change of measure, we classify the model's parameter space into regions of asymptotic efficiency, exponential growth of the relative error, and infinite variance, using both analytical and numerical techniques.
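The rate-exchange change of measure is easiest to see in a single-node simplification: for the embedded random walk of an M/M/1 queue, swapping the arrival and service rates makes overflow likely under simulation, and every path reaching the overflow level carries the same likelihood ratio. A sketch (single node, not the two-node tandem analysed in the paper), checked against the exact gambler's-ruin probability:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu, N = 0.3, 0.7, 15     # arrival rate, service rate, overflow level

p = lam / (lam + mu)          # embedded chain: prob. of an arrival (step up)
p_is = mu / (lam + mu)        # change of measure: swap arrival/service rates

def one_cycle(prob_up):
    """One busy cycle of the embedded walk from level 1; returns
    (hit_overflow, log likelihood ratio vs. the original measure)."""
    level, log_lr = 1, 0.0
    while 0 < level < N:
        if rng.random() < prob_up:
            level += 1
            log_lr += np.log(p / prob_up)
        else:
            level -= 1
            log_lr += np.log((1 - p) / (1 - prob_up))
    return level == N, log_lr

n = 20_000
est = np.mean([np.exp(log_lr) if hit else 0.0
               for hit, log_lr in (one_cycle(p_is) for _ in range(n))])

# Exact probability of reaching N before 0 from level 1 (gambler's ruin).
r = (1 - p) / p
exact = (1 - r) / (1 - r**N)
```

Because every overflow path makes exactly N - 1 more up-steps than down-steps, the likelihood ratio on the overflow event is constant, which is what makes this state-independent measure efficient here; the paper shows that in the tandem setting efficiency holds only in part of the parameter space.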

    On the use of the Bayesian approach for the calibration, evaluation and comparison of process-based forest models

    Doctorate in Forest Engineering and Natural Resources, Instituto Superior de Agronomia. Forest ecosystems have been experiencing fast and abrupt changes in environmental conditions that can increase their vulnerability to extreme events such as drought, heat waves, storms, and fire. Process-based models can draw inferences about future environmental dynamics, but the reliability and robustness of vegetation models are conditional on their structure and parametrisation. The main objective of this PhD was to implement and apply modern computational techniques, mainly based on Bayesian statistics, in the context of forest modelling. A variety of case studies is presented, spanning from growth prediction models to soil respiration models and process-based models. The great potential of the Bayesian method for reducing uncertainty in parameters and outputs, and for model evaluation, is shown. Furthermore, a new methodology based on a combination of a Bayesian framework and global sensitivity analysis was developed, with the aim of identifying strengths and weaknesses of process-based models and of testing modifications to model structure. Finally, part of the PhD research focused on reducing the computational load in order to take full advantage of Bayesian statistics. It is shown how parameter screening impacts model performance, and a new methodology for parameter screening, based on canonical correlation analysis, is presented.

    Huber Loss Reconstruction in Gradient-Domain Path Tracing

    The focus of this thesis is to improve aspects related to the computational synthesis of photo-realistic images. Physically accurate images are generated by simulating the transport of light between an observer and the light sources in a virtual environment. Path tracing is an algorithm that uses Monte Carlo methods to solve problems in the domain of light transport simulation, generating images by sampling light paths through the virtual scene. In this thesis we focus on the recently introduced gradient-domain path tracing algorithm. In addition to estimating the ordinary primal image, gradient-domain light transport algorithms also sample the horizontal and vertical gradients and solve a screened Poisson problem to reconstruct the final image. Using the L2 loss for reconstruction produces an unbiased final image, but the results can often be visually unpleasing due to its sensitivity to extreme-value outliers in the sampled primal and gradient images. The L1 loss can be used to suppress this sensitivity at the cost of introducing bias. We investigate the use of the Huber loss function in the reconstruction step of the gradient-domain path tracing algorithm. We show that using the Huber loss function for the gradient in the Poisson solver, with a good choice of cut-off parameter, can result in reduced sensitivity to outliers and consequently lower relative mean squared error than L1 or L2 when compared to ground-truth images. The main contribution of this thesis is a predictive multiplicative model for the cut-off parameter. The model takes as input pixel statistics, which can be computed on-line during sampling, and predicts reconstruction parameters that on average outperform reconstruction using L1 and L2.
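The robustness property that motivates the Huber loss can be seen on a simple location problem: the L2 minimiser (mean) is dragged by an outlier, the L1 minimiser (median) ignores outlier magnitude entirely, and the Huber M-estimate interpolates between them, with the cut-off c controlling the transition. A sketch using iteratively reweighted least squares (the thesis instead applies the loss to gradient residuals inside a screened Poisson solve):

```python
import numpy as np

def huber_weight(r, c):
    """IRLS weights for the Huber loss: 1 inside the cut-off, c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def huber_mean(x, c, iters=50):
    """M-estimate of location under the Huber loss, via IRLS from the median."""
    m = np.median(x)
    for _ in range(iters):
        w = huber_weight(x - m, c)
        m = np.sum(w * x) / np.sum(w)
    return m

samples = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 50.0])  # one extreme outlier
mean_l2 = samples.mean()        # L2 minimiser: pulled hard by the outlier
med_l1 = np.median(samples)     # L1 minimiser: insensitive to its magnitude
m_huber = huber_mean(samples, c=1.0)
```

The outlier's influence is capped at the cut-off c, so the Huber estimate stays near the inliers while remaining quadratic (and hence smooth) for small residuals; choosing c per pixel is exactly what the thesis's predictive model is for.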