
    Gaussian process modeling for stochastic multi-fidelity simulators, with application to fire safety

    To assess whether a building can be evacuated in case of fire, a standard method is to simulate the propagation of the fire using finite difference methods, taking into account the random behavior of the fire, so that the result of a simulation is non-deterministic. The mesh fineness tunes both the quality of the numerical model and its computational cost: depending on the mesh fineness, one simulation can last anywhere from a few minutes to several weeks. In this article, we focus on predicting the behavior of the fire simulator at fine meshes using cheaper results at coarser meshes. In the literature on the design and analysis of computer experiments, such a problem is referred to as multi-fidelity prediction. Our contribution is to extend the Bayesian multi-fidelity model proposed by Picheny and Ginsbourger (2013) and Tuo et al. (2014) to the case of stochastic simulators.
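A rough flavor of the multi-fidelity idea referenced in this abstract is the classical autoregressive link, where the expensive fine-mesh response is modeled as a scaled coarse-mesh response plus a discrepancy. The sketch below is a drastic simplification (a least-squares fit rather than the authors' Bayesian Gaussian process model), and all data, coefficients, and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired runs of a simulator at two mesh levels.
x = np.linspace(0.0, 1.0, 20)                    # design points
coarse = np.sin(2.0 * np.pi * x)                 # cheap coarse-mesh output
fine = 1.3 * coarse + 0.2 + rng.normal(0.0, 0.05, x.size)  # noisy fine-mesh output

# First-order autoregressive link: fine ~ rho * coarse + delta.
A = np.column_stack([coarse, np.ones_like(coarse)])
rho, delta = np.linalg.lstsq(A, fine, rcond=None)[0]

def predict_fine(coarse_value):
    """Predict the fine-mesh response from a new coarse-mesh run."""
    return rho * coarse_value + delta
```

In the full Bayesian treatment, both the scaling and the discrepancy are random processes and the stochastic simulator's noise is modeled explicitly; here they collapse to two scalars.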

    Iterative Updating of Model Error for Bayesian Inversion

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints on computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only a limited number of full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large-particle limit. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error. Comment: 39 pages, 9 figures.
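The iteration described in this abstract can be caricatured in the linear Gaussian setting: solve the inverse problem with the cheap model, use one full-model evaluation to refresh the modeling-error estimate, and repeat. The sketch below tracks only the error mean (the paper updates the full distribution), and the matrices, noise level, and iteration count are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)) + 4.0 * np.eye(n)     # accurate (expensive) forward map
A_tilde = A + 0.05 * rng.normal(size=(n, n))      # reduced (cheap) forward map
x_true = rng.normal(size=n)
sigma = 0.01
y = A @ x_true + sigma * rng.normal(size=n)

def map_estimate(m):
    """MAP estimate with prior x ~ N(0, I), cheap model, and error mean m."""
    return np.linalg.solve(A_tilde.T @ A_tilde / sigma**2 + np.eye(n),
                           A_tilde.T @ (y - m) / sigma**2)

m = np.zeros(n)                  # start with zero model-error mean
errors = []
for _ in range(8):
    x_hat = map_estimate(m)
    errors.append(np.linalg.norm(x_hat - x_true))
    m = (A - A_tilde) @ x_hat    # one full-model evaluation per sweep
```

Each sweep costs a single evaluation of the accurate model, which is the point of avoiding rejection-based schemes; the reconstruction error typically drops sharply after the first correction.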

    Desynchronization in diluted neural networks

    The dynamical behaviour of a weakly diluted, fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from a regular to a stochastic-like regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing at the same rate and mutually phase-locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of "stable chaos", i.e. by observing that the stochastic-like behaviour is "limited" to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary. Comment: 11 pages, 13 figures, submitted to Phys. Rev.
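A minimal sketch of the kind of model studied here is a diluted network of leaky integrate-and-fire neurons with delta-pulse inhibition: each connection is present with some probability, and a spike instantly lowers the potential of the spiker's targets. The parameters below (network size, dilution, drive, coupling) are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, dt, steps = 20, 0.01, 1e-3, 5000

# Weak dilution: each inhibitory connection present with probability 0.9.
mask = rng.random((N, N)) < 0.9
np.fill_diagonal(mask, False)

v = rng.random(N)                      # membrane potentials in [0, 1)
spikes_per_neuron = np.zeros(N)
for _ in range(steps):
    v += dt * (1.2 - v)                # leaky drive toward suprathreshold input
    fired = v >= 1.0
    if fired.any():
        spikes_per_neuron += fired
        v[fired] -= 1.0                # reset by threshold subtraction
        # Each firing neuron sends an inhibitory pulse to its targets.
        v -= g * mask[fired].sum(axis=0)
```

With weak coupling the firing rates equalize across neurons, as in the abstract's phase-locked regime; increasing g is where the irregular, long-transient dynamics would appear.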

    Fitting Effective Diffusion Models to Data Associated with a "Glassy Potential": Estimation, Classical Inference Procedures and Some Heuristics

    A variety of researchers have successfully obtained the parameters of low-dimensional diffusion models using data that come out of atomistic simulations. This naturally raises a variety of questions about efficient estimation, goodness-of-fit tests, and confidence interval estimation. The first part of this article uses maximum likelihood estimation to obtain the parameters of a diffusion model from a scalar time series. I address numerical issues associated with attempting to realize asymptotic statistics results with moderate sample sizes in the presence of exact and approximated transition densities. Approximate transition densities are used because the analytic solution of a transition density associated with a parametric diffusion model is often unknown. I am primarily interested in how well the deterministic transition density expansions of Aït-Sahalia capture the curvature of the transition density in (idealized) situations that occur when one carries out simulations in the presence of a "glassy" interaction potential. Accurate approximation of the curvature of the transition density is desirable because it can be used to quantify the goodness-of-fit of the model and to calculate asymptotic confidence intervals of the estimated parameters. The second part of this paper contributes a heuristic estimation technique for approximating a nonlinear diffusion model. A "global" nonlinear model is obtained by taking a batch of time series and applying simple local models to portions of the data. I demonstrate the technique on a diffusion model with a known transition density and on data generated by the Stochastic Simulation Algorithm. Comment: 30 pages, 10 figures. Submitted to SIAM MMS (typos removed and slightly shortened).
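The simplest instance of the estimation problem in this abstract is maximum likelihood for an Ornstein-Uhlenbeck diffusion under the Euler (Gaussian) transition density, where the MLE reduces to a linear regression; the paper's expansions handle models where no such closed form exists. Parameters and sample size below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt, n = 2.0, 0.5, 0.01, 20000

# Simulate an Ornstein-Uhlenbeck path dX = -theta X dt + sigma dW (Euler-Maruyama).
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.normal()

# Euler pseudo-likelihood: X_{i+1} | X_i ~ N(X_i (1 - theta dt), sigma^2 dt).
# Maximizing it is least squares in a = 1 - theta dt.
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
theta_hat = (1.0 - a_hat) / dt
resid = x[1:] - a_hat * x[:-1]
sigma_hat = np.sqrt(resid.var() / dt)
```

Asymptotic confidence intervals follow from the observed Fisher information of this pseudo-likelihood; the article's concern is how well such recipes behave when the true transition density must be approximated.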

    Replication or exploration? Sequential design for stochastic simulation experiments

    We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology. Comment: 34 pages, 9 figures.
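The replicate-versus-explore decision can be caricatured without any Gaussian process machinery: replicate where the uncertainty of a site's sample mean still dominates, otherwise spend the run at a new input. This is a crude stand-in for the paper's lookahead criterion; the simulator, threshold `tau2`, and design sites below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stochastic simulator with input-dependent (heteroskedastic) noise.
def simulator(x):
    return np.sin(3.0 * x) + rng.normal(0.0, 0.1 + 0.4 * x)

def replicate_or_explore(sites, candidate, tau2=0.05):
    """Pick the next run: replicate a noisy existing site or explore a new one.

    Replicate wherever the variance of a site's sample mean still exceeds
    tau2, an assumed predictive variance at an unexplored input; otherwise
    explore the candidate.
    """
    sem2 = {x: np.var(ys, ddof=1) / len(ys) for x, ys in sites.items()}
    x_star = max(sem2, key=sem2.get)
    return ('replicate', x_star) if sem2[x_star] > tau2 else ('explore', candidate)

# Three replicates at each of two existing design sites, then one decision.
sites = {x: [simulator(x) for _ in range(3)] for x in (0.2, 0.8)}
decision = replicate_or_explore(sites, candidate=0.5)
```

Because the noise grows with x here, replication tends to be favored at the noisier site; the paper's scheme makes this trade-off via a proper lookahead over a heteroskedastic GP rather than a fixed threshold.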

    On the identification of non-stationary factor models and their application to atmospheric data analysis

    A numerical framework for data-based identification of non-stationary linear factor models is presented. The approach is based on an extension of the recently developed method for identification of persistent dynamical phases in multidimensional time series, permitting the identification of discontinuous temporal changes in underlying model parameters. A finite element method (FEM) discretization of the resulting variational functional is applied to reduce the dimensionality of the problem and to construct the numerical iterative algorithm. The presented method results in a sparse sequential linear minimization problem with linear constraints. The performance of the framework is demonstrated on two application examples: (i) subgrid-scale parameterization for the Lorenz model with external forcing, and (ii) an analysis of climate impact factors acting on blocking events in the upper troposphere. The importance of accounting for non-stationarity is demonstrated in the second example: modeling the 40-yr ECMWF Re-Analysis (ERA-40) geopotential time series via a single best stochastic model with time-independent coefficients leads to the conclusion that all of the considered external factors are statistically insignificant, whereas the non-stationary model (which is demonstrated to be more appropriate in the sense of information theory) identified by the presented methodology results in statistically significant external impact factor influences.
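The core phenomenon in this abstract, a factor coefficient that jumps discontinuously in time, can be illustrated with a single-breakpoint scan: fit a linear factor model separately on each side of every candidate split and keep the split with the smallest total squared error. This is far simpler than the paper's FEM-discretized variational approach, and the series, jump location, and noise level are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 400
f = rng.normal(size=T)                        # external factor time series
# Regime switch: the factor loading jumps from 0.2 to 1.5 at t = 250.
beta = np.where(np.arange(T) < 250, 0.2, 1.5)
y = beta * f + 0.1 * rng.normal(size=T)

def fit_sse(fs, ys):
    """Least-squares loading on one segment and its residual sum of squares."""
    b = (fs @ ys) / (fs @ fs)
    r = ys - b * fs
    return r @ r

# Scan candidate breakpoints; the best split recovers the regime change.
best = min(range(50, T - 50),
           key=lambda k: fit_sse(f[:k], y[:k]) + fit_sse(f[k:], y[k:]))
```

A stationary fit over the whole series would average the two loadings and wash out the factor's significance, which is exactly the failure mode the abstract reports for the time-independent model.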
