
    Sparse Bayesian polynomial chaos approximations of elasto-plastic material models

    In this paper we study uncertainty quantification for a functional approximation of elasto-plastic models parameterised by material uncertainties. The problem of estimating the polynomial chaos coefficients is recast in linear regression form, taking into account the possible sparsity of the solution. Departing from the classical optimisation point of view, we take a slightly different path and solve the problem in a Bayesian manner with the help of new spectral-based sparse Kalman filter algorithms.
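    The paper's spectral sparse Kalman filter algorithms are not spelled out in the abstract; as a rough sketch of the recast regression problem, the snippet below builds a one-dimensional Hermite polynomial chaos design matrix and fits its coefficients with a generic sparse Bayesian regressor (scikit-learn's ARD prior as a stand-in). The model_response function is a hypothetical placeholder for an elasto-plastic solver.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import ARDRegression

    # Hypothetical stand-in for an elasto-plastic solver driven by one
    # uncertain material parameter xi (a standard Gaussian germ).
    def model_response(xi):
        return np.tanh(2.0 * xi) + 0.1 * xi**3

    rng = np.random.default_rng(0)
    xi = rng.standard_normal(200)
    y = model_response(xi)

    # Design matrix of probabilists' Hermite polynomials He_0..He_10,
    # so y ~ Psi @ c is the polynomial chaos expansion in regression form.
    degree = 10
    Psi = np.column_stack([hermeval(xi, np.eye(degree + 1)[k])
                           for k in range(degree + 1)])

    # Sparse Bayesian estimate of the chaos coefficients via an ARD prior
    # (a generic substitute for the paper's sparse Kalman filter update).
    reg = ARDRegression(fit_intercept=False).fit(Psi, y)
    print(np.round(reg.coef_, 3))
    ```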

    Bayesian changepoint analysis for atomic force microscopy and soft material indentation

    Material indentation studies, in which a probe is brought into controlled physical contact with an experimental sample, have long been a primary means by which scientists characterize the mechanical properties of materials. More recently, the advent of atomic force microscopy, which operates on the same fundamental principle, has in turn revolutionized the nanoscale analysis of soft biomaterials such as cells and tissues. This paper addresses the inferential problems associated with material indentation and atomic force microscopy, through a framework for the changepoint analysis of pre- and post-contact data that is applicable to experiments across a variety of physical scales. A hierarchical Bayesian model is proposed to account for experimentally observed changepoint smoothness constraints and measurement error variability, with efficient Monte Carlo methods developed and employed to realize inference via posterior sampling for parameters such as Young's modulus, a key quantifier of material stiffness. These results are the first to provide the materials science community with rigorous inference procedures and uncertainty quantification, via optimized and fully automated high-throughput algorithms, implemented as the publicly available software package BayesCP. To demonstrate the consistent accuracy and wide applicability of this approach, results are shown for a variety of data sets from both macro- and micro-materials experiments, including silicone, neurons, and red blood cells, conducted by the authors and others.
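    The hierarchical model and the BayesCP package itself are beyond the scope of this listing; the sketch below, under purely illustrative assumptions, conveys only the core changepoint idea on synthetic indentation data: a flat pre-contact baseline, a Hertz-like 3/2-power force rise after contact, and a grid posterior over the contact point with the stiffness and noise scale profiled out.

    ```python
    import numpy as np

    # Synthetic indentation curve: noise before contact at z0, 3/2-power rise after.
    rng = np.random.default_rng(1)
    z = np.linspace(0.0, 10.0, 400)
    z0_true, stiffness = 4.0, 0.8
    force = np.where(z > z0_true, stiffness * (z - z0_true) ** 1.5, 0.0)
    force += rng.normal(scale=0.2, size=z.size)

    # Profile log-likelihood of a candidate contact point z0: the stiffness is
    # fit by least squares and the Gaussian noise scale is profiled out.
    def log_lik(z0):
        basis = np.clip(z - z0, 0.0, None) ** 1.5
        k_hat = (basis @ force) / max(basis @ basis, 1e-12)
        resid = force - k_hat * basis
        return -0.5 * z.size * np.log(resid @ resid)

    # Grid posterior over z0 under a flat prior.
    grid = np.linspace(1.0, 9.0, 161)
    ll = np.array([log_lik(g) for g in grid])
    post = np.exp(ll - ll.max())
    post /= post.sum()
    print("posterior mean contact point:", (grid * post).sum())
    ```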

    On the Computational Complexity of MCMC-based Estimators in Large Samples

    In this paper we examine the implications of statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks. Our analysis is motivated by the Laplace-Bernstein-Von Mises central limit theorem, which states that in large samples the posterior or quasi-posterior approaches a normal density. Using the conditions required for the central limit theorem to hold, we establish polynomial bounds on the computational complexity of general Metropolis random walk methods in large samples. Our analysis covers cases where the underlying log-likelihood or extremum criterion function is possibly non-concave, discontinuous, and with increasing parameter dimension. However, the central limit theorem restricts the deviations from continuity and log-concavity of the log-likelihood or extremum criterion function in a very specific manner. Under the minimal assumptions required for the central limit theorem to hold under increasing parameter dimension, we show that the Metropolis algorithm is theoretically efficient even for the canonical Gaussian walk, which is studied in detail. Specifically, we show that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension d, and, in particular, is of stochastic order d^2 in the leading cases after the burn-in period. We then give applications to exponential families, curved exponential families, and Z-estimation of increasing dimension.
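    As a toy illustration of the canonical Gaussian walk analysed in the paper, the sketch below runs random-walk Metropolis on a standard normal target in dimension d = 50, mimicking the normal large-sample limit; the d^{-1/2} proposal scaling is the standard textbook choice, not a prescription taken from the paper.

    ```python
    import numpy as np

    def metropolis_gaussian_walk(log_post, d, n_steps, rng=None):
        """Random-walk Metropolis with isotropic Gaussian proposals."""
        rng = rng if rng is not None else np.random.default_rng(0)
        step = 2.4 / np.sqrt(d)              # classic d^{-1/2} step scaling
        x, lp = np.zeros(d), log_post(np.zeros(d))
        chain = np.empty((n_steps, d))
        accepts = 0
        for t in range(n_steps):
            prop = x + step * rng.standard_normal(d)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
                x, lp = prop, lp_prop
                accepts += 1
            chain[t] = x
        return chain, accepts / n_steps

    # Standard Gaussian target: the limiting posterior shape in large samples.
    d = 50
    chain, rate = metropolis_gaussian_walk(lambda x: -0.5 * (x @ x), d, 20_000)
    print(f"acceptance rate: {rate:.2f}")
    ```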

    Bayesian Inference of Log Determinants

    The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. In particular, we combine prior knowledge in the form of bounds from matrix theory and evidence derived from stochastic trace estimation to obtain probabilistic estimates for the log-determinant and its associated uncertainty within a given computational budget. Beyond its novelty and theoretical appeal, the performance of our proposal is competitive with state-of-the-art approaches to approximating the log-determinant, while also quantifying the uncertainty due to budget-constrained evidence.
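    The Bayesian combination of matrix-theoretic prior bounds with trace evidence is specific to the paper; the sketch below shows only the stochastic trace estimation ingredient, estimating log det(A) = tr(log A) for a symmetric positive definite A with a Hutchinson estimator and a Chebyshev expansion of the logarithm over an assumed spectral interval.

    ```python
    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    def stochastic_logdet(A, lam_min, lam_max, order=40, n_probes=30, rng=None):
        """Hutchinson + Chebyshev estimate of log det(A) = tr(log A)."""
        rng = rng if rng is not None else np.random.default_rng(0)
        n = A.shape[0]
        # Chebyshev coefficients of log on the assumed spectral interval.
        c = Chebyshev.interpolate(np.log, order, domain=[lam_min, lam_max]).coef
        # Affine map so the spectrum of B lies in [-1, 1].
        B = (2.0 * A - (lam_max + lam_min) * np.eye(n)) / (lam_max - lam_min)
        est = 0.0
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe
            w_prev, w = v, B @ v                       # T_0(B) v, T_1(B) v
            acc = c[0] * (v @ w_prev) + c[1] * (v @ w)
            for k in range(2, order + 1):
                w_prev, w = w, 2.0 * (B @ w) - w_prev  # three-term recurrence
                acc += c[k] * (v @ w)
            est += acc
        return est / n_probes

    # Toy check against the exact value on a random SPD matrix.
    rng = np.random.default_rng(2)
    M = rng.standard_normal((200, 200))
    A = M @ M.T / 200 + np.eye(200)
    lam = np.linalg.eigvalsh(A)
    print(stochastic_logdet(A, lam.min(), lam.max()), "vs", np.log(lam).sum())
    ```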