
    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments---active user modelling with preferences, and hierarchical reinforcement learning---and a discussion of the pros and cons of Bayesian optimization based on our experiences.
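
    To make the exploration/exploitation trade-off above concrete, here is a minimal sketch of Bayesian optimization on a one-dimensional toy problem, using a Gaussian process posterior with an RBF kernel and the Expected Improvement acquisition; the toy objective, kernel settings, and candidate grid are illustrative assumptions, not details taken from the tutorial.

```python
# Hedged sketch of Bayesian optimization: GP posterior + Expected Improvement.
# The objective f, kernel hyperparameters, and candidate grid are illustrative.
import numpy as np
from scipy.stats import norm


def rbf_kernel(a, b, length=0.2, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)


def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations (posterior mean and std at x_test).
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))


def expected_improvement(mu, sigma, best_y):
    # High where the mean is promising (exploitation) or the uncertainty is
    # large (exploration) -- the trade-off described in the abstract.
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)


f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x  # toy "expensive" objective
x_train = np.array([-0.9, 0.4])
y_train = f(x_train)
candidates = np.linspace(-1.0, 2.0, 500)

for _ in range(10):
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_train.max()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))

print("best x found:", x_train[y_train.argmax()], "value:", y_train.max())
```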

    Ecological non-linear state space model selection via adaptive particle Markov chain Monte Carlo (AdPMCMC)

    We develop a novel advanced Particle Markov chain Monte Carlo algorithm that is capable of sampling from the posterior distribution of non-linear state space models for both the unobserved latent states and the unknown model parameters. We apply this novel methodology to five population growth models, including models with strong and weak Allee effects, and test if it can efficiently sample from the complex likelihood surface that is often associated with these models. Utilising real and also synthetically generated data sets we examine the extent to which observation noise and process error may frustrate efforts to choose between these models. Our novel algorithm involves an Adaptive Metropolis proposal combined with an SIR Particle MCMC algorithm (AdPMCMC). We show that the AdPMCMC algorithm samples complex, high-dimensional spaces efficiently, and is therefore superior to standard Gibbs or Metropolis-Hastings algorithms that are known to converge very slowly when applied to the non-linear state space ecological models considered in this paper. Additionally, we show how the AdPMCMC algorithm can be used to recursively estimate the Bayesian Cramér-Rao Lower Bound of Tichavský (1998). We derive expressions for these Cramér-Rao Bounds and estimate them for the models considered. Our results demonstrate a number of important features of common population growth models, most notably their multi-modal posterior surfaces and dependence between the static and dynamic parameters. We conclude by sampling from the posterior distribution of each of the models, and use Bayes factors to highlight how observation noise significantly diminishes our ability to select among some of the models, particularly those that are designed to reproduce an Allee effect.
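
    As a rough illustration of the ingredients named above (a particle filter estimate of the likelihood inside an adaptively tuned Metropolis-Hastings sampler), the following sketch runs particle marginal Metropolis-Hastings on a stochastic Ricker-style growth model; the model, the log-normal observation noise, the flat positive prior, and the Haario-style covariance adaptation are stand-in assumptions, not the paper's exact AdPMCMC specification.

```python
# Hedged sketch of particle marginal Metropolis-Hastings with an adaptive
# random-walk proposal. The Ricker-style growth model, log-normal observation
# noise, flat positive prior, and Haario-style adaptation are assumptions,
# not the paper's exact AdPMCMC specification.
import numpy as np

rng = np.random.default_rng(0)


def particle_loglik(theta, y, n_particles=200):
    """Bootstrap (SIR) particle filter estimate of log p(y | theta)."""
    r, sigma_proc, sigma_obs = theta
    x = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n_particles)
    loglik = 0.0
    for yt in y:
        # Propagate particles through stochastic Ricker dynamics.
        x = x * np.exp(r * (1.0 - x / 20.0) + sigma_proc * rng.normal(size=n_particles))
        # Weight by a log-normal observation density.
        logw = -0.5 * ((np.log(yt) - np.log(x)) / sigma_obs) ** 2 - np.log(sigma_obs)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Multinomial resampling.
        x = rng.choice(x, size=n_particles, p=w / w.sum())
    return loglik


def adaptive_pmmh(y, n_iter=2000):
    theta = np.array([0.5, 0.1, 0.2])        # (growth rate, process sd, obs sd)
    ll = particle_loglik(theta, y)
    chain = [theta]
    cov = 0.01 * np.eye(3)
    for i in range(n_iter):
        if i > 200:  # adapt the proposal covariance to the chain history
            cov = (2.38 ** 2 / 3) * np.cov(np.asarray(chain).T) + 1e-6 * np.eye(3)
        prop = rng.multivariate_normal(theta, cov)
        if np.all(prop > 0):                 # flat prior on positive parameters
            ll_prop = particle_loglik(prop, y)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta)
    return np.asarray(chain)


# Purely illustrative run on synthetic "abundance" data.
y_obs = 20.0 * np.exp(0.3 * rng.normal(size=50))
samples = adaptive_pmmh(y_obs, n_iter=500)
print("posterior mean of parameters:", samples[250:].mean(axis=0))
```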

    Serial Correlations in Single-Subject fMRI with Sub-Second TR

    When performing statistical analysis of single-subject fMRI data, serial correlations need to be taken into account to allow for valid inference. Otherwise, the variability in the parameter estimates might be underestimated, resulting in increased false-positive rates. Serial correlations in fMRI data are commonly characterized in terms of a first-order autoregressive (AR) process and then removed via pre-whitening. The required noise model for the pre-whitening depends on a number of parameters, particularly the repetition time (TR). Here we investigate how the sub-second temporal resolution provided by simultaneous multislice (SMS) imaging changes the noise structure in fMRI time series. We fit a higher-order AR model and then estimate the optimal AR model order for a sequence with a TR of less than 600 ms that provides whole-brain coverage. We show that physiological noise modelling successfully reduces the required AR model order, but remaining serial correlations necessitate an advanced noise model. We conclude that commonly used noise models, such as the AR(1) model, are inadequate for modelling serial correlations in fMRI using sub-second TRs. Rather, physiological noise modelling in combination with advanced pre-whitening schemes enables valid inference in single-subject analysis using fast fMRI sequences.
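
    A minimal sketch of the order-selection and pre-whitening steps described above, run on a synthetic voxel-like time series; the Yule-Walker estimator, the AIC order criterion, and the AR(2) test signal are illustrative assumptions rather than the authors' actual pipeline.

```python
# Hedged sketch of AR order selection and pre-whitening for a noisy time
# series; Yule-Walker estimation, the AIC criterion, and the synthetic AR(2)
# signal are illustrative assumptions, not the authors' pipeline.
import numpy as np


def yule_walker(x, order):
    """Estimate AR coefficients and innovation variance via Yule-Walker."""
    x = x - x.mean()
    n = len(x)
    acov = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    phi = np.linalg.solve(R, acov[1:order + 1])
    sigma2 = acov[0] - phi @ acov[1:order + 1]
    return phi, sigma2


def select_ar_order(x, max_order=10):
    """Pick the order minimising AIC = n * log(sigma2) + 2 * p."""
    n = len(x)
    aics = [n * np.log(yule_walker(x, p)[1]) + 2 * p for p in range(1, max_order + 1)]
    return int(np.argmin(aics)) + 1


def prewhiten(x, phi):
    """Apply the inverse AR filter: e_t = x_t - sum_k phi_k * x_{t-k}."""
    p = len(phi)
    e = x[p:].astype(float).copy()
    for k, c in enumerate(phi, start=1):
        e -= c * x[p - k:len(x) - k]
    return e


# Synthetic AR(2) series standing in for a voxel's noise time course.
rng = np.random.default_rng(1)
x = np.zeros(1200)
innovations = rng.normal(size=1200)
for t in range(2, 1200):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + innovations[t]

order = select_ar_order(x)
phi, _ = yule_walker(x, order)
print("selected AR order:", order, "whitened residual sd:", prewhiten(x, phi).std())
```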

    Efficient state-space inference of periodic latent force models

    Latent force models (LFMs) are principled approaches to incorporating solutions to differential equations within non-parametric inference methods. Unfortunately, the development and application of LFMs can be inhibited by their computational cost, especially when closed-form solutions for the LFM are unavailable, as is the case in many real-world problems where these latent forces exhibit periodic behaviour. Given this, we develop a new sparse representation of LFMs which considerably improves their computational efficiency, as well as broadening their applicability, in a principled way, to domains with periodic or near-periodic latent forces. Our approach uses a linear basis model to approximate one generative model for each periodic force. We assume that the latent forces are generated from Gaussian process priors and develop a linear basis model which fully expresses these priors. We apply our approach to model the thermal dynamics of domestic buildings and show that it is effective at predicting day-ahead temperatures within the homes. We also apply our approach within queueing theory, in which quasi-periodic arrival rates are modelled as latent forces. In both cases, we demonstrate that our approach can be implemented efficiently using state-space methods which encode the linear dynamic systems via LFMs. Further, we show that state estimates obtained using periodic latent force models can reduce the root mean squared error to 17% of that from non-periodic models and 27% of that of the nearest rival approach, which is the resonator model (Särkkä et al., 2012; Hartikainen et al., 2012).
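
    To illustrate how a quasi-periodic latent force can be encoded in a linear-Gaussian state-space model and estimated with standard Kalman filtering, here is a small sketch built from harmonic (resonator-style) blocks; the two-harmonic basis, noise levels, and 24-hour period are assumptions for illustration and do not reproduce the paper's sparse linear-basis construction.

```python
# Hedged sketch: encode a quasi-periodic latent force with harmonic
# (resonator-style) state blocks and estimate it with a Kalman filter.
# The two-harmonic basis, noise levels, and 24-hour period are illustrative
# assumptions and do not reproduce the paper's sparse linear-basis LFM.
import numpy as np

period = 24.0      # hours, e.g. a daily thermal driving force
harmonics = 2
dt = 0.5           # sampling interval in hours

# Each harmonic contributes a 2-D rotation block to the transition matrix;
# the observation row sums the first ("cosine") component of every block.
blocks, h_row = [], []
for j in range(1, harmonics + 1):
    w = 2.0 * np.pi * j / period
    c, s = np.cos(w * dt), np.sin(w * dt)
    blocks.append(np.array([[c, -s], [s, c]]))
    h_row += [1.0, 0.0]

dim = 2 * harmonics
A = np.zeros((dim, dim))
for j, B in enumerate(blocks):
    A[2 * j:2 * j + 2, 2 * j:2 * j + 2] = B
H = np.array(h_row)[None, :]
Q = 1e-4 * np.eye(dim)   # small process noise -> quasi-periodic, not rigid
R = np.array([[0.1]])    # observation noise variance


def kalman_filter(y):
    """Return the filtered estimate of the latent periodic force."""
    m, P = np.zeros(dim), np.eye(dim)
    force = []
    for yt in y:
        m, P = A @ m, A @ P @ A.T + Q                # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m = m + K @ (yt - H @ m)                     # update
        P = P - K @ H @ P
        force.append((H @ m).item())
    return np.array(force)


# Illustrative run on a noisy synthetic daily signal.
t = np.arange(0.0, 96.0, dt)
y = np.sin(2.0 * np.pi * t / period) + 0.3 * np.random.default_rng(2).normal(size=t.size)
print("first filtered force values:", kalman_filter(y)[:5])
```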