
    Reliability analysis of structures by active learning enhanced sparse Bayesian regression

    Adaptive sampling near a limit state is important for metamodel-based reliability analysis of structures with an implicit limit state function. Active learning driven by the posterior mean and standard deviation of a chosen metamodel is widely used for such adaptive sampling. Most active learning-based reliability estimation methods use the Kriging approach, which provides a prediction along with its variance. Like Kriging, sparse Bayesian learning-based regression also provides a posterior mean and standard deviation, and because of the sparsity involved in learning it is expected to be computationally faster. Motivated by this, the present study explores active learning-enhanced, adaptive sampling-based sparse Bayesian regression for reliability analysis. Polynomial basis functions, which involve no free parameters, are chosen for the sparse Bayesian regression to avoid computationally expensive parameter tuning. Convergence of the proposed approach is declared once 10 consecutive failure-probability estimates stabilize. The effectiveness of the proposed adaptive sparse Bayesian regression approach is illustrated numerically with five examples.
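    The active-learning loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: a plain Bayesian linear regression with a polynomial basis stands in for the sparse Bayesian regression (no sparsity pruning), the U-function |mu|/sigma is assumed as the acquisition criterion, and the toy limit state `g(x) = 5 - x1 - x2` with standard-normal inputs is invented for the example. The stopping rule mirrors the abstract's criterion of 10 consecutive stable failure estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def limit_state(x):
        # Hypothetical implicit limit state: failure when g(x) < 0.
        return 5.0 - x[:, 0] - x[:, 1]

    def basis(x):
        # Polynomial basis (constant + linear terms); no free kernel parameters.
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1]])

    def fit_bayes_linreg(X, y, alpha=1e-4, beta=1e4):
        # Standard Bayesian linear regression posterior:
        # S = (alpha*I + beta*Phi^T Phi)^-1,  m = beta * S Phi^T y.
        Phi = basis(X)
        S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
        m = beta * S @ Phi.T @ y
        return m, S, beta

    def predict(m, S, beta, X):
        # Posterior predictive mean and standard deviation at X.
        Phi = basis(X)
        mu = Phi @ m
        var = 1.0 / beta + np.sum((Phi @ S) * Phi, axis=1)
        return mu, np.sqrt(var)

    # Monte Carlo candidate pool drawn from the input distribution.
    pool = rng.standard_normal((100_000, 2))

    # Small initial design of experiments.
    X = rng.standard_normal((6, 2))
    y = limit_state(X)

    history = []
    for _ in range(50):
        m, S, beta = fit_bayes_linreg(X, y)
        mu, sd = predict(m, S, beta, pool)
        history.append(np.mean(mu < 0.0))  # failure-probability estimate
        # Stop once 10 consecutive estimates agree (stabilization criterion).
        if len(history) >= 10 and np.ptp(history[-10:]) < 1e-6 * max(history[-1], 1e-12):
            break
        # U-function acquisition: enrich the design where misclassification
        # by the surrogate is most likely, i.e. smallest |mu|/sigma.
        k = np.argmin(np.abs(mu) / sd)
        X = np.vstack([X, pool[k]])
        y = np.append(y, limit_state(pool[k:k + 1]))

    pf = history[-1]
    print(f"estimated Pf = {pf:.2e} after {len(history)} iterations")
    ```

    Because the surrogate here fits the toy limit state almost exactly, the loop stabilizes quickly; with an expensive structural model, each `limit_state` call would be a full simulation, which is what makes the adaptive enrichment worthwhile.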

    Efficient Non-parametric Bayesian Hawkes Processes

    In this paper, we develop an efficient non-parametric Bayesian estimation of the kernel function of Hawkes processes. The non-parametric Bayesian approach is important because it provides flexible Hawkes kernels and quantifies their uncertainty. Our method is based on the cluster representation of Hawkes processes. Utilizing the stationarity of the Hawkes process, we efficiently sample random branching structures and thus split the Hawkes process into clusters of Poisson processes. We derive two algorithms -- a block Gibbs sampler and a maximum a posteriori estimator based on expectation maximization -- and show that our methods have linear time complexity, both theoretically and empirically. On synthetic data, we show that our methods can infer flexible Hawkes triggering kernels. On two large-scale Twitter diffusion datasets, we show that our methods outperform the current state of the art in goodness-of-fit, and that the time complexity is linear in the size of the dataset. We also observe that on diffusions related to online videos, the learned kernels reflect the perceived longevity of different content types, such as music or pet videos.
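    The cluster representation the abstract builds on can be sketched with a short simulation: background (immigrant) events arrive as a homogeneous Poisson process, and each event independently spawns offspring from an inhomogeneous Poisson process whose intensity is the triggering kernel, so the process decomposes into Poisson clusters. This is only an illustration of the representation, not the paper's inference method; the exponential kernel and all parameter values below are assumptions (the paper infers flexible non-parametric kernels).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_hawkes_cluster(mu, a, b, T):
        """Simulate a Hawkes process on [0, T] via its cluster representation.

        Immigrants arrive as a Poisson process with rate mu; each event spawns
        offspring as a Poisson process with intensity phi(t) = a*b*exp(-b*t),
        so the expected number of children per event (branching ratio) is a < 1.
        """
        # Generation 0: immigrants, uniform on [0, T] given their count.
        n0 = rng.poisson(mu * T)
        generation = np.sort(rng.uniform(0.0, T, size=n0))
        events = list(generation)
        # Recursively spawn offspring clusters until extinction.
        while len(generation) > 0:
            children = []
            for t in generation:
                n_child = rng.poisson(a)  # E[children] = branching ratio a
                delays = rng.exponential(1.0 / b, size=n_child)  # density b*e^{-b*t}
                for d in delays:
                    if t + d < T:
                        children.append(t + d)
            events.extend(children)
            generation = children
        return np.sort(np.array(events))

    events = simulate_hawkes_cluster(mu=0.5, a=0.5, b=1.0, T=200.0)
    # Expected count is mu * T / (1 - a) = 200 for these parameters.
    print(len(events))
    ```

    The same decomposition runs in reverse for inference: given observed events, one samples a parent (background or an earlier event) for each event, which is the random branching structure the abstract's Gibbs sampler draws to split the likelihood into independent Poisson-process terms.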