36 research outputs found

    A cutoff phenomenon in accelerated stochastic simulations of chemical kinetics via flow averaging (FLAVOR-SSA)

    We present a simple algorithm for the simulation of stiff, discrete-space, continuous-time Markov processes. The algorithm is based on the concept of flow averaging for the integration of stiff ordinary and stochastic differential equations and ultimately leads to a straightforward variation of the well-known stochastic simulation algorithm (SSA). The speedup that can be achieved by the present algorithm [flow averaging integrator SSA (FLAVOR-SSA)] over the classical SSA comes naturally at the expense of its accuracy. The error of the proposed method exhibits a cutoff phenomenon as a function of its speedup, allowing for optimal tuning. Two numerical examples from chemical kinetics are provided to illustrate the efficiency of the method.
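    The abstract does not spell out the alternation scheme, but the flow-averaging idea can be illustrated with a short sketch. Below is a minimal, hypothetical Python implementation in which the classical SSA is alternated between a short micro phase with all reactions active and a longer macro phase with the fast propensities switched off; the function names, the split into prop_fast/prop_slow, and the phase lengths delta and tau are assumptions for illustration, not the authors' exact scheme.

        import numpy as np

        def ssa_step(x, propensity_fn, stoich, rng):
            # One step of the classical SSA (Gillespie's direct method).
            a = propensity_fn(x)
            a0 = a.sum()
            if a0 <= 0.0:
                return x, np.inf              # no reaction can fire any more
            dt = rng.exponential(1.0 / a0)    # waiting time to the next reaction
            j = rng.choice(len(a), p=a / a0)  # index of the reaction that fires
            return x + stoich[j], dt

        def flavor_ssa(x0, prop_fast, prop_slow, stoich, t_end, delta, tau, seed=0):
            # Hypothetical FLAVOR-style alternation: a short micro phase of length
            # delta with all reactions active, then a macro phase of length
            # tau - delta with the fast propensities switched off.  prop_fast and
            # prop_slow are assumed to return propensity vectors (numpy arrays)
            # over the same full reaction set, with zeros for the reactions they
            # do not cover.
            rng = np.random.default_rng(seed)
            x, t = np.asarray(x0, dtype=int), 0.0
            prop_full = lambda y: prop_fast(y) + prop_slow(y)
            while t < t_end:
                for phase_len, prop in ((delta, prop_full), (tau - delta, prop_slow)):
                    t_stop = min(t + phase_len, t_end)
                    while t < t_stop:
                        x, dt = ssa_step(x, prop, stoich, rng)
                        t += dt
            return x

    The cutoff phenomenon described in the abstract would then appear when the macro-to-micro ratio (tau - delta)/delta is pushed past the point where the averaged slow dynamics are no longer resolved.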

    Flux norm approach to finite dimensional homogenization approximations with non-separated scales and high contrast

    We consider divergence-form scalar elliptic equations and vectorial equations for elasticity with rough ($L^\infty(\Omega)$, $\Omega \subset \mathbb{R}^d$) coefficients $a(x)$ that, in particular, model media with non-separated scales and high contrast in material properties. We define the flux norm as the $L^2$ norm of the potential part of the fluxes of solutions, which is equivalent to the usual $H^1$-norm. We show that in the flux norm, the error associated with approximating, in a properly defined finite-dimensional space, the set of solutions of the aforementioned PDEs with rough coefficients is equal to the error associated with approximating the set of solutions of the same type of PDEs with smooth coefficients in a standard space (e.g., piecewise polynomial). We refer to this property as the transfer property. A simple application of this property is the construction of finite-dimensional approximation spaces with errors independent of the regularity and contrast of the coefficients and with optimal and explicit convergence rates. This transfer property also provides an alternative to the global harmonic change of coordinates for the homogenization of elliptic operators that can be extended to elasticity equations. The proofs of these homogenization results are based on a new class of elliptic inequalities which play the same role in our approach as the div-curl lemma in classical homogenization. Comment: Accepted for publication in Archive for Rational Mechanics and Analysis.
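    The flux norm itself is only named in the abstract; the following display is a hedged reconstruction of the definition it suggests for the scalar problem $-\nabla\cdot(a(x)\nabla u) = f$ in $\Omega$, $u = 0$ on $\partial\Omega$, and the notation for the Helmholtz decomposition is an assumption, not the paper's.

        % Split the flux into its potential (gradient) part and a divergence-free
        % remainder via the Helmholtz-Weyl decomposition, and measure solutions
        % by the L^2 norm of the potential part:
        \[
          a\nabla u \;=\; (a\nabla u)_{\mathrm{pot}} + (a\nabla u)_{\mathrm{sol}},
          \qquad
          \|u\|_{\mathrm{flux}} \;:=\; \|(a\nabla u)_{\mathrm{pot}}\|_{L^{2}(\Omega)} .
        \]

    The abstract states that this norm is equivalent to the usual $H^1$-norm, which is what makes the transfer property meaningful for standard finite-dimensional approximation spaces.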

    Variational Multiscale Stabilization and the Exponential Decay of Fine-scale Correctors

    This paper addresses the variational multiscale stabilization of standard finite element methods for linear partial differential equations that exhibit multiscale features. The stabilization is of Petrov-Galerkin type with a standard finite element trial space and a problem-dependent test space based on pre-computed fine-scale correctors. The exponential decay of these correctors and their localisation to local cell problems are rigorously justified. The stabilization eliminates scale-dependent pre-asymptotic effects as they appear for standard finite element discretizations of highly oscillatory problems, e.g., the poor $L^2$ approximation in homogenization problems or the pollution effect in high-frequency acoustic scattering.
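    The corrector construction is not detailed in the abstract; the sketch below follows the standard variational-multiscale template that the wording suggests, and the specific operators ($I_H$, $\mathcal{C}$) are assumptions for illustration.

        % Let V_H be the standard finite element trial space, V the full space,
        % a(.,.) the problem's bilinear form, and W the fine-scale space
        %   W := { w in V : I_H w = 0 }
        % for a quasi-interpolation I_H onto V_H. The fine-scale corrector
        % C v_H in W of a coarse function v_H and the problem-dependent test
        % space are then given by
        \[
          a(\mathcal{C} v_H,\, w) \;=\; a(v_H,\, w) \quad \text{for all } w \in W,
          \qquad
          \widetilde{V}_H \;:=\; (\mathrm{Id} - \mathcal{C})\, V_H .
        \]

    The paper's contribution, as stated in the abstract, is the rigorous proof that $\mathcal{C} v_H$ decays exponentially away from the support of $v_H$, which justifies truncating the corrector problems to local cell problems while keeping $V_H$ as the trial space of the Petrov-Galerkin method.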

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways. (1) Finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables. (2) The space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information-Based Complexity. Comment: 37 pages.
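    As a hedged illustration of what "optimal statistical estimator" can mean in this Wald-style setting (the notation below is an assumption, not a quotation from the paper): given an admissible set $\mathcal{A}$ of probability measures $\mu$ consistent with the available data, constitutive knowledge and assumptions, a quantity of interest $\Phi(\mu)$, a loss $\ell$, and estimators $\theta(d)$ built from observed data $d$, one seeks the estimator minimizing the worst-case risk over $\mathcal{A}$:

        \[
          \theta^{\star} \;\in\; \arg\min_{\theta}\; \sup_{\mu \in \mathcal{A}}\;
          \mathbb{E}_{d \sim \mu}\!\left[ \ell\!\left(\theta(d),\, \Phi(\mu)\right) \right].
        \]

    The computational difficulty emphasized in the abstract is that $\mathcal{A}$ is typically infinite dimensional, so this minimax problem must be reduced to a finite-dimensional one before a computer can address it.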

    Rigorous model-based uncertainty quantification with application to terminal ballistics, part I: Systems with controllable inputs and small scatter

    This work is concerned with establishing the feasibility of a data-on-demand (DoD) uncertainty quantification (UQ) protocol based on concentration-of-measure inequalities. Specific aims are to establish the feasibility of the protocol and its basic properties, including the tightness of the predictions afforded by the protocol. The assessment is based on an application to terminal ballistics and a specific system configuration consisting of 6061-T6 aluminum plates struck by spherical S-2 tool steel projectiles at ballistic impact speeds. The system's inputs are the plate thickness and impact velocity, and the perforation area is chosen as the sole performance measure of the system. The objective of the UQ analysis is to certify the lethality of the projectile, i.e., that the projectile perforates the plate with high probability over a prespecified range of impact velocities and plate thicknesses. The net outcome of the UQ analysis is an M/U ratio, or confidence factor, of 2.93, indicative of a small probability of no perforation of the plate over its entire operating range. The high confidence (>99.9%) in the successful operation of the system afforded by the analysis, together with the small number of tests (40) required for the determination of the modeling-error diameter, establishes the feasibility of the DoD UQ protocol as a rigorous yet practical approach for model-based certification of complex systems.
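    The abstract reports the certification result only through the M/U ratio. A hedged sketch of the kind of concentration-of-measure bound commonly used in this line of work is the McDiarmid-type inequality below; the threshold $a_0$ and the exact inequality used in the paper are assumptions.

        % A = perforation area, a_0 = failure threshold on the area,
        % M = (E[A] - a_0)_+ is the margin, U is an uncertainty "diameter"
        % combining system scatter and modeling error:
        \[
          \mathbb{P}\!\left[\, A \le a_0 \,\right] \;\le\; \exp\!\left( -2\,(M/U)^{2} \right).
        \]

    With M/U = 2.93 this particular bound evaluates to roughly $3\times 10^{-8}$, comfortably below the $10^{-3}$ failure-probability level behind the quoted >99.9% confidence.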

    Rigorous model-based uncertainty quantification with application to terminal ballistics—Part II. Systems with uncontrollable inputs and large scatter

    Part II of this series is concerned with establishing the feasibility of an extended data-on-demand (XDoD) uncertainty quantification (UQ) protocol based on concentration-of-measure inequalities and martingale theory. Specific aims are to establish the feasibility of the protocol and its basic properties, including the tightness of the predictions afforded by the protocol. The assessment is based on an application to terminal ballistics and a specific system configuration consisting of 6061-T6 aluminum plates struck by spherical 440c stainless steel projectiles at ballistic impact speeds in the range of 2.4–2.8 km/s. The system's inputs are the plate thickness, plate obliquity and impact velocity. The perforation area is chosen as the sole performance measure of the system. The objective of the UQ analysis is to certify the lethality of the projectile, i.e., that the projectile perforates the plate with high probability over a prespecified range of impact velocities, plate thicknesses and plate obliquities. All tests were conducted at Caltech's Small Particle Hypervelocity Range (SPHIR), which houses a two-stage gas gun. A feature of this facility is that the impact velocity, while amenable to precise measurement, cannot be controlled precisely but varies randomly according to a known probability density function. In addition, due to a competition between petalling and plugging mechanisms for the material system under consideration, the measured perforation area exhibits considerable scatter. The analysis establishes the feasibility of the XDoD UQ protocol as a rigorous yet practical approach for model-based certification of complex systems characterized by uncontrollable inputs and noisy experimental data.
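    The martingale extension is described only qualitatively in the abstract; as a hedged pointer, Azuma-Hoeffding-type concentration is the standard martingale tool for this situation, and whether this exact inequality is the one used in the paper is an assumption. If $(M_k)_{k=0}^{n}$ is a martingale with bounded differences $|M_k - M_{k-1}| \le c_k$, then

        \[
          \mathbb{P}\!\left[\, M_n - M_0 \ge t \,\right]
          \;\le\;
          \exp\!\left( -\frac{t^{2}}{2 \sum_{k=1}^{n} c_k^{2}} \right).
        \]

    The appeal of a bound of this form is that it tolerates inputs that can only be observed rather than controlled (here the impact velocity) and performance measures with large scatter, which is exactly the regime the abstract describes.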