
    Truncation preconditioners for stochastic Galerkin finite element discretizations

    The stochastic Galerkin finite element method (SGFEM) provides an efficient alternative to traditional sampling methods for the numerical solution of linear elliptic partial differential equations with parametric or random inputs. However, computing stochastic Galerkin approximations for a given problem requires the solution of large coupled systems of linear equations. An effective, bespoke iterative solver is therefore a key ingredient of any SGFEM implementation. In this paper, we analyze a class of truncation preconditioners for SGFEM. Extending the idea of the mean-based preconditioner, these preconditioners capture additional significant components of the stochastic Galerkin matrix. Focusing on the parametric diffusion equation as a model problem and assuming an affine-parametric representation of the diffusion coefficient, we perform a spectral analysis of the preconditioned matrices and establish optimality of truncation preconditioners with respect to the SGFEM discretization parameters. Furthermore, we report the results of numerical experiments for model diffusion problems with affine and non-affine parametric representations of the coefficient. In particular, we look at the efficiency of the solver (in terms of iteration counts for solving the underlying linear systems) and compare truncation preconditioners with other existing preconditioners for stochastic Galerkin matrices, such as the mean-based and Kronecker product preconditioners.
    Comment: 27 pages, 6 tables
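    To make the structure concrete: with an affine coefficient, the stochastic Galerkin matrix has the Kronecker form A = Σ_k G_k ⊗ K_k; the mean-based preconditioner keeps only the mean term G_0 ⊗ K_0, and a truncation preconditioner keeps the first few terms. The sketch below illustrates this on a toy 1D diffusion problem; the matrices, sizes, and helper names (stiffness, truncation_preconditioner) are invented for illustration and do not reproduce the paper's implementation or test problems.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n_x, n_xi, M = 40, 10, 4     # spatial dofs, stochastic dofs, number of affine terms

def stiffness(w):
    # 1D FE stiffness matrix with piecewise-constant coefficient w (one value per element)
    return sp.diags([-w[1:-1], w[:-1] + w[1:], -w[1:-1]], [-1, 0, 1], format="csr")

# affine diffusion coefficient: mean part a_0 = 1 plus M decaying random perturbations
K_terms = [stiffness(np.ones(n_x + 1))]
K_terms += [stiffness(0.2 / (k + 1) * rng.random(n_x + 1)) for k in range(M)]

# G_0 = I; the other G_k are tridiagonal stand-ins for the parametric Gram matrices
G_terms = [sp.identity(n_xi, format="csr")]
G_terms += [sp.diags([0.5, 0.0, 0.5], [-1, 0, 1], shape=(n_xi, n_xi)) for _ in range(M)]

A = sum(sp.kron(G, K) for G, K in zip(G_terms, K_terms)).tocsr()
b = rng.random(A.shape[0])

def truncation_preconditioner(n_terms):
    """Keep the first n_terms Kronecker terms; n_terms = 1 is the mean-based case."""
    P = sum(sp.kron(G, K) for G, K in zip(G_terms[:n_terms], K_terms[:n_terms])).tocsc()
    solve = spla.factorized(P)   # sparse LU of the truncated matrix, applied each iteration
    return spla.LinearOperator(A.shape, matvec=solve)

for n_terms, label in [(1, "mean-based (1 term)"), (3, "truncation (3 terms)")]:
    iters = [0]
    def count(_xk):
        iters[0] += 1
    _, info = spla.cg(A, b, M=truncation_preconditioner(n_terms), callback=count)
    print(f"{label}: {iters[0]} preconditioned CG iterations (info={info})")
```

    Keeping more Kronecker terms makes each preconditioner application more expensive (here a sparse LU solve of the truncated matrix), but it captures more of A and so tends to reduce the iteration count.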

    Factorizing the Stochastic Galerkin System

    Recent work has explored solver strategies for the linear system of equations arising from a spectral Galerkin approximation of the solution of PDEs with parameterized (or stochastic) inputs. We consider the related problem of a matrix equation whose matrix and right-hand side depend on a set of parameters (e.g., a PDE with stochastic inputs semidiscretized in space) and examine the linear system arising from a similar Galerkin approximation of the solution. We derive a useful factorization of this system of equations, which yields bounds on the eigenvalues, clues to preconditioning, and a flexible implementation method for a wide array of problems. We complement this analysis with (i) a numerical study of preconditioners on a standard elliptic PDE test problem and (ii) a fluids application using existing CFD codes; the MATLAB codes used in the numerical studies are available online.
    Comment: 13 pages, 4 figures, 2 tables
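    One practical consequence of this Kronecker structure is that the coupled Galerkin system never has to be assembled: it can be applied term by term via the identity (G ⊗ A) vec(X) = vec(A X Gᵀ), which is all a Krylov solver needs. The sketch below uses arbitrary stand-in matrices (not the paper's factorization, eigenvalue bounds, or test problems) to check that identity and to feed the resulting matrix-free operator to GMRES.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n, p, M = 60, 8, 3     # spatial dofs, parametric (Galerkin) dofs, number of terms

def random_tridiag(p, rng):
    off = rng.random(p - 1)
    return np.diag(off, 1) + np.diag(off, -1)

# A(y) ~ sum_k phi_k(y) A_k; G_k is the Galerkin projection of multiplication by phi_k
A_terms = [np.eye(n)] + [0.05 / np.sqrt(n) * rng.standard_normal((n, n)) for _ in range(M - 1)]
G_terms = [np.eye(p)] + [random_tridiag(p, rng) for _ in range(M - 1)]

def apply_galerkin(x):
    # (sum_k G_k kron A_k) vec(X) = vec(sum_k A_k X G_k^T), with vec = column stacking,
    # so the coupled system is applied using only the per-term operators
    X = x.reshape((n, p), order="F")
    Y = sum(A @ X @ G.T for A, G in zip(A_terms, G_terms))
    return Y.reshape(-1, order="F")

# sanity check against the explicitly assembled Kronecker-structured matrix
A_full = sum(np.kron(G, A) for G, A in zip(G_terms, A_terms))
x = rng.random(n * p)
print("term-by-term apply matches assembled matrix:", np.allclose(A_full @ x, apply_galerkin(x)))

# a matrix-free operator is all an iterative solver needs
op = LinearOperator((n * p, n * p), matvec=apply_galerkin)
b = rng.random(n * p)
sol, info = gmres(op, b)
print("GMRES residual:", np.linalg.norm(apply_galerkin(sol) - b), " info:", info)
```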

    A High Order Stochastic Asymptotic Preserving Scheme for Chemotaxis Kinetic Models with Random Inputs

    In this paper, we develop a stochastic Asymptotic-Preserving (sAP) scheme for the kinetic chemotaxis system with random inputs, which converges to the modified Keller-Segel model with random inputs in the diffusive regime. Based on the generalized polynomial chaos (gPC) approach, we design a high-order stochastic Galerkin method using implicit-explicit (IMEX) Runge-Kutta (RK) time discretization with a macroscopic penalty term. The new schemes relax the parabolic CFL condition to a hyperbolic one when the mean free path is small, which yields significant efficiency gains, especially for uncertainty quantification (UQ) in multi-scale problems. The stochastic Asymptotic-Preserving property is shown asymptotically and verified numerically in several tests. Further numerical tests explore the effect of randomness in the kinetic system, with the aim of providing more intuition for the theoretical study of chemotaxis models.
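    As a much-simplified illustration of the two ingredients combined here, a gPC-Galerkin projection in the random variable and implicit (IMEX) treatment of the stiff term, the sketch below applies them to a scalar relaxation model du/dt = f(t) - a(z)/ε (u - g) with a random rate a(z) = 1 + 0.5 z, z uniform on [-1, 1]. It is not the kinetic chemotaxis scheme of the paper, and all names and parameter values are invented; the point is only that the implicit step lets the time step be chosen independently of ε, and that as ε → 0 the gPC coefficients relax to the deterministic equilibrium.

```python
import numpy as np

K = 8                                       # number of gPC modes (orthonormal Legendre)
j = np.arange(1, K)
beta = j / np.sqrt(4.0 * j**2 - 1.0)        # three-term recurrence coefficients
Mz = np.diag(beta, 1) + np.diag(beta, -1)   # Galerkin matrix of multiplication by z
A = np.eye(K) + 0.5 * Mz                    # Galerkin projection of a(z) = 1 + 0.5 z

eps, g = 1e-6, 1.0                          # stiffness parameter and equilibrium value
f = lambda t: np.cos(t)                     # nonstiff forcing, treated explicitly
dt, T = 0.1, 2.0                            # time step chosen independently of eps

U = np.zeros(K)                             # gPC coefficients of u; u(0, z) = 0
e0 = np.eye(K)[0]                           # coefficient vector of the constant 1
lhs = np.eye(K) + (dt / eps) * A            # implicit (stiff) relaxation term
for t in np.arange(0.0, T, dt):
    rhs = U + dt * f(t) * e0 + (dt / eps) * (A @ (g * e0))
    U = np.linalg.solve(lhs, rhs)

print("mean of u(T, z):", U[0])                      # ~ g: relaxed to equilibrium
print("std  of u(T, z):", np.linalg.norm(U[1:]))     # ~ 0: fluctuations damped with u - g
```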

    Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations

    Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and in memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of ordinary differential equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by partial differential equations (PDEs).
    Comment: Submitted to Philosophical Transactions of the Royal Society
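    A minimal sketch of the rounding modes being compared (not the arithmetic library or solvers used in the paper): stochastic rounding rounds up with probability equal to the fractional part of the scaled value, so increments far below the least significant bit still accumulate on average, whereas round-to-nearest discards them entirely.

```python
import math
import numpy as np

def round_nearest(x, frac_bits):
    """Round-to-nearest on a fixed-point grid with frac_bits fractional bits."""
    scale = 2.0 ** frac_bits
    return round(x * scale) / scale

def round_stochastic(x, frac_bits, rng):
    """Round up with probability equal to the fractional part, down otherwise (zero-mean error)."""
    scale = 2.0 ** frac_bits
    y = x * scale
    lo = math.floor(y)
    return (lo + (rng.random() < y - lo)) / scale

rng = np.random.default_rng(42)
inc = 2.0 ** -12               # increment far below the LSB (2**-8) of the target format
acc_rn = acc_sr = 0.0
for _ in range(4096):          # exact sum of the increments is 4096 * 2**-12 = 1.0
    acc_rn = round_nearest(acc_rn + inc, 8)          # every addition is rounded away
    acc_sr = round_stochastic(acc_sr + inc, 8, rng)  # rounds up roughly once in 16 steps

print("exact sum          :", 4096 * inc)
print("round-to-nearest   :", acc_rn)   # stuck at 0.0: the small increments are lost
print("stochastic rounding:", acc_sr)   # close to 1.0 on average
```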

    High-order collocation methods for differential equations with random inputs

    Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial, and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations in practical applications with large-dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
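    A minimal sketch of the collocation idea on a one-dimensional random input (not the sparse-grid constructions discussed in the paper): the deterministic solver is evaluated at a handful of quadrature nodes, and statistics are recovered from the quadrature weights. The "solver" here is just the exact solution of u' = -k u with a uniformly distributed decay rate, chosen so the result can be checked against the exact mean; everything in the snippet is invented for illustration.

```python
import numpy as np

def deterministic_solver(k):
    """Stand-in for an existing black-box solver: u(1) for u' = -k u, u(0) = 1."""
    return np.exp(-k)

# Gauss-Legendre nodes/weights on [-1, 1], mapped to the random rate k ~ U(0.5, 1.5)
nodes, weights = np.polynomial.legendre.leggauss(5)
k_nodes = 1.0 + 0.5 * nodes
weights = weights / 2.0                       # quadrature weights of the U(0.5, 1.5) density

samples = np.array([deterministic_solver(k) for k in k_nodes])   # repetitive solver runs
mean_colloc = weights @ samples

exact_mean = np.exp(-0.5) - np.exp(-1.5)      # exact E[u(1)] for this toy problem
rng = np.random.default_rng(0)
mean_mc = deterministic_solver(rng.uniform(0.5, 1.5, 10_000)).mean()

print(f"collocation, 5 solver runs : {mean_colloc:.10f}  error {abs(mean_colloc - exact_mean):.1e}")
print(f"Monte Carlo, 10^4 runs     : {mean_mc:.10f}  error {abs(mean_mc - exact_mean):.1e}")
```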

    Pricing options and computing implied volatilities using neural networks

    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities, with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as an agent of the original solver in a fast and efficient way. We test this approach on three different types of solvers: the analytic solution of the Black-Scholes equation, the COS method for the Heston stochastic volatility model, and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.
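    A minimal sketch of this workflow for the simplest of the three solvers, assuming nothing about the authors' network architecture or training setup: generate (parameters → price) pairs from the Black-Scholes formula, fit a small feed-forward network as a surrogate, and check it against the closed form on held-out inputs. The sample sizes, parameter ranges, and scikit-learn model below are arbitrary choices, and the implied-volatility (Brent's method) part is omitted.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form European call price: the 'solver' that generates the training data."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20_000
X = np.column_stack([
    rng.uniform(0.8, 1.2, n),    # moneyness S/K (strike normalized to 1)
    rng.uniform(0.1, 2.0, n),    # time to maturity T
    rng.uniform(0.05, 0.5, n),   # volatility sigma
])
y = black_scholes_call(X[:, 0], 1.0, X[:, 1], 0.02, X[:, 2])   # fixed rate r = 2%

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ann.fit(X[:15_000], y[:15_000])              # train the surrogate on the generated data

pred = ann.predict(X[15_000:])               # fast evaluation in place of the original solver
print("max abs pricing error on held-out inputs:", np.abs(pred - y[15_000:]).max())
```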