72 research outputs found

    A Locally Adaptive Bayesian Cubature Method

    Bayesian cubature (BC) is a popular inferential perspective on the cubature of expensive integrands, wherein the integrand is emulated using a stochastic process model. Several approaches have been put forward to encode sequential adaptation (i.e. dependence on previous integrand evaluations) into this framework. However, these proposals have been limited to either estimating the parameters of a stationary covariance model or focusing computational resources on regions where the integrand takes large values. In contrast, many classical adaptive cubature methods focus computational resources on spatial regions in which local error estimates are largest. The contributions of this work are three-fold: first, we present a theoretical result that suggests there does not exist a direct Bayesian analogue of the classical adaptive trapezoidal method; then we put forward a novel BC method whose empirical behaviour is similar to that of the adaptive trapezoidal method; finally, we present evidence, in a detailed empirical assessment, that the novel method provides improved cubature performance relative to standard BC.
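    To make the baseline concrete, the sketch below implements standard (non-adaptive) Bayesian cubature in one dimension under illustrative assumptions that are ours rather than the paper's: a squared-exponential kernel with a fixed length-scale, the uniform measure on [0, 1], a fixed node design, and a toy integrand. The adaptive variants discussed in the abstract differ in how the evaluation nodes are chosen.

```python
# A minimal sketch of standard (non-adaptive) Bayesian cubature in 1D,
# assuming a squared-exponential (SE) kernel and the uniform measure on [0, 1].
import numpy as np
from scipy.special import erf

def se_kernel(x, y, ell=0.2):
    # k(x, y) = exp(-(x - y)^2 / (2 ell^2)), evaluated pairwise
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

def kernel_mean(x, ell=0.2):
    # z_i = int_0^1 k(t, x_i) dt, available in closed form for the SE kernel
    c = np.sqrt(2) * ell
    return ell * np.sqrt(np.pi / 2) * (erf((1 - x) / c) - erf((0 - x) / c))

def bayesian_cubature(f, nodes, ell=0.2, jitter=1e-10):
    K = se_kernel(nodes, nodes, ell) + jitter * np.eye(len(nodes))
    z = kernel_mean(nodes, ell)
    weights = np.linalg.solve(K, z)          # BC weights K^{-1} z
    mean = weights @ f(nodes)                # posterior mean of the integral
    grid = np.linspace(0, 1, 2001)
    kbar = kernel_mean(grid, ell).mean()     # approximates the double integral of k
    var = kbar - z @ weights                 # posterior variance of the integral
    return mean, max(var, 0.0)

f = lambda x: np.exp(np.sin(3 * x))          # toy integrand
nodes = np.linspace(0, 1, 8)                 # fixed, non-adaptive design
mean, var = bayesian_cubature(f, nodes)
print(f"BC estimate: {mean:.6f} +/- {np.sqrt(var):.2e}")
```

    The reported standard deviation is the filter's own uncertainty about the integral; adaptive schemes use quantities of this kind, computed locally, to decide where to evaluate next.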

    The ridgelet prior: A covariance function approach to prior specification for Bayesian neural networks

    Bayesian neural networks attempt to combine the strong predictive performance of neural networks with formal quantification of the uncertainty associated with the predictive output in the Bayesian framework. However, it remains unclear how to endow the parameters of the network with a prior distribution that is meaningful when lifted into the output space of the network. A possible solution is proposed that enables the user to posit an appropriate Gaussian process covariance function for the task at hand. Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network. In contrast to existing work on the connection between neural networks and Gaussian processes, our analysis is non-asymptotic, with finite sample-size error bounds provided. This establishes the universality property that a Bayesian neural network can approximate any Gaussian process whose covariance function is sufficiently regular. Our experimental assessment is limited to a proof-of-concept, where we demonstrate that the ridgelet prior can outperform an unstructured prior on regression problems for which a suitable Gaussian process prior can be provided.
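    As a rough illustration of what "approximating a Gaussian process in output space" means, the sketch below samples one-hidden-layer networks with a simple Gaussian weight prior, which is a stand-in for illustration only and not the ridgelet construction from the paper, and compares the empirical covariance of the network outputs at a grid of inputs against a target squared-exponential covariance.

```python
# Illustrative only: compares the output-space covariance induced by a weight
# prior on a one-hidden-layer network against a target GP covariance. The
# Gaussian weight prior used here is a stand-in, not the ridgelet prior.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)                                   # test inputs
target_cov = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2)     # SE covariance

def sample_network_outputs(n_samples=2000, width=200):
    outputs = np.empty((n_samples, x.size))
    for s in range(n_samples):
        w1 = rng.normal(0.0, 1.0, size=width)                   # input weights
        b1 = rng.normal(0.0, 1.0, size=width)                   # hidden biases
        w2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)  # output weights
        hidden = np.tanh(np.outer(x, w1) + b1)                  # shape (50, width)
        outputs[s] = hidden @ w2
    return outputs

samples = sample_network_outputs()
empirical_cov = np.cov(samples, rowvar=False)
gap = np.max(np.abs(empirical_cov - target_cov))
print(f"max |empirical - target| covariance entry: {gap:.3f}")
```

    The ridgelet prior is designed so that this kind of gap can be made small, with finite sample-size bounds; the naive prior above generally leaves a large gap.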

    A modern retrospective on probabilistic numerics

    This article attempts to place the emergence of probabilistic numerics as a mathematical–statistical research field within its historical context and to explore how its gradual development can be related both to applications and to a modern formal treatment. We highlight in particular the parallel contributions of Sul′din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future work.

    Bayesian Filtering for Dynamic Systems with Applications to Tracking

    This M.Sc. thesis evaluates various algorithms based on Bayesian statistical theory and validates them on both synthetic and experimental data. The focus is on comparing the performance of a new kind of sequential Monte Carlo filter, the cost reference particle filter, with Kalman-based filters and the standard particle filter. Filtering algorithms based on Kalman filters and on the sequential Monte Carlo technique are implemented in Matlab. For linear Gaussian system models the Kalman filter gives the optimal solution, so only cases without a linear-Gaussian probabilistic model are analysed in this thesis. The results of various simulations show that, for non-linear system models whose probability model can fairly be assumed Gaussian, either Kalman-like filters or sequential Monte Carlo based particle filters can be used. The choice among these filters depends on factors such as the degree of nonlinearity, the order of the system state, and the required accuracy; there is always a trade-off between accuracy and computational cost. It is found that whenever the probabilistic model of the system cannot be approximated as Gaussian, which is the case in many real-world applications such as econometrics and genetics, the statistical reference filters discussed above degrade in performance. To tackle this problem, the recently proposed cost reference particle filter is implemented and tested in scenarios where the system model is not Gaussian. The new filter shows good robustness in such scenarios, as it makes no assumption about the probabilistic model. The thesis also includes the implementation of these prediction algorithms in a real-world application in which the location of a moving robot is tracked using measurements from wireless sensor networks. The flexibility of the cost reference particle filter to adapt to specific applications is explored, and it is found to perform better than the other filters in tracking the robot. The results obtained from various experiments show that the cost reference particle filter is the best choice whenever there is high uncertainty about the probabilistic model and when the model is not Gaussian. It can also be concluded that, contrary to the general perception, estimation techniques based on ad-hoc references can actually be more efficient than those based on the usual statistical reference.
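    For context, the sketch below implements the standard (bootstrap) particle filter that serves as the baseline in such comparisons, applied to a toy one-dimensional nonlinear state-space model; the model, noise levels, and particle count are illustrative choices of ours and are not taken from the thesis, whose filters are implemented in Matlab.

```python
# A minimal bootstrap particle filter on a toy 1D nonlinear state-space model
# (illustrative; not the tracking model or parameters used in the thesis).
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                    # time steps, number of particles
q, r = 1.0, 1.0                   # process / measurement noise std devs

def transition(x, t):
    # classic nonlinear benchmark dynamics
    return 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t)

def observe(x):
    return x ** 2 / 20.0

# simulate ground truth and measurements
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = transition(x_true[t - 1], t) + q * rng.normal()
    y[t] = observe(x_true[t]) + r * rng.normal()

# bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = transition(particles, t) + q * rng.normal(size=N)
    log_w = -0.5 * ((y[t] - observe(particles)) / r) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                   # posterior mean
    particles = rng.choice(particles, size=N, p=w)         # resample

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(f"particle filter RMSE on toy model: {rmse:.2f}")
```

    The Gaussian likelihood in the weighting step is exactly the probabilistic assumption that the cost reference particle filter avoids, replacing it with a user-defined cost function.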