
    Numerical Analysis

    Acknowledgements: This article will appear in the forthcoming Princeton Companion to Mathematics, edited by Timothy Gowers with June Barrow-Green, to be published by Princeton University Press. In preparing this essay I have benefitted from the advice of many colleagues who corrected a number of errors of fact and emphasis. I have not always followed their advice, however, preferring, as one friend put it, to "put my head above the parapet". So I must take full responsibility for errors and omissions here. With thanks to: Aurelio Arranz, Alexander Barnett, Carl de Boor, David Bindel, Jean-Marc Blanc, Mike Bochev, Folkmar Bornemann, Richard Brent, Martin Campbell-Kelly, Sam Clark, Tim Davis, Iain Duff, Stan Eisenstat, Don Estep, Janice Giudice, Gene Golub, Nick Gould, Tim Gowers, Anne Greenbaum, Leslie Greengard, Martin Gutknecht, Raphael Hauser, Des Higham, Nick Higham, Ilse Ipsen, Arieh Iserles, David Kincaid, Louis Komzsik, David Knezevic, Dirk Laurie, Randy LeVeque, Bill Morton, John C Nash, Michael Overton, Yoshio Oyanagi, Beresford Parlett, Linda Petzold, Bill Phillips, Mike Powell, Alex Prideaux, Siegfried Rump, Thomas Schmelzer, Thomas Sonar, Hans Stetter, Gil Strang, Endre Süli, Defeng Sun, Mike Sussman, Daniel Szyld, Garry Tee, Dmitry Vasilyev, Andy Wathen, Margaret Wright and Steve Wright.

    Automatic linearity detection

    Given a function, or more generally an operator, the question "Is it linear?" seems simple to answer. In many applications of scientific computing it might be worth determining the answer to this question in an automated way; some functionality, such as operator exponentiation, is only defined for linear operators, and in other problems time can be saved if the problem being solved is known to be linear. Linearity detection is closely connected to sparsity detection of Hessians, so for large-scale applications, memory savings can be made if linearity information is known. However, implementing such automated detection is not as straightforward as one might expect. This paper describes how automatic linearity detection can be implemented in combination with automatic differentiation, both for standard scientific computing software and within the Chebfun software system. The key ingredients of the method are the observation that linear operators have constant derivatives, and the propagation of two logical vectors, ℓ and c, as computations are carried out. The values of ℓ and c are determined by whether output variables have constant derivatives and constant values with respect to each input variable. The propagation of their values through an evaluation trace of an operator yields the desired information about the linearity of that operator.
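    To illustrate the flag-propagation idea, the following is a minimal plain-Python sketch, not the Chebfun/AD implementation (the class and function names are illustrative): each quantity carries, per input variable, a "linear" flag (constant derivative) and a "const" flag (constant value), and overloaded operations propagate these flags through the evaluation trace, playing the role of the logical vectors ℓ and c.

```python
# Minimal sketch of per-variable linearity/constancy flag propagation (assumed names).
class Tracked:
    def __init__(self, linear, const):
        self.linear = tuple(linear)   # linear[i]: derivative w.r.t. input i is constant
        self.const = tuple(const)     # const[i]:  value does not depend on input i

    def __add__(self, other):
        # Sums preserve linearity; the result is constant in i only if both terms are.
        return Tracked([a and b for a, b in zip(self.linear, other.linear)],
                       [a and b for a, b in zip(self.const, other.const)])

    def __mul__(self, other):
        # A product is linear in input i only if one factor is constant in i
        # and the other is linear in i.
        lin = [(sc and ol) or (oc and sl)
               for sl, sc, ol, oc in zip(self.linear, self.const,
                                         other.linear, other.const)]
        con = [sc and oc for sc, oc in zip(self.const, other.const)]
        return Tracked(lin, con)

def sin(f):
    # A nonlinear elementary function: the result is linear in i only where
    # its argument does not depend on input i at all.
    return Tracked(f.const, f.const)

def variable(i, n):
    # The i-th of n inputs: linear in itself, constant in all other inputs.
    return Tracked([True] * n, [j != i for j in range(n)])

u, v = variable(0, 2), variable(1, 2)
expr = u * sin(v)                     # linear in u, nonlinear in v
print(expr.linear)                    # (True, False)
```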

    Automatic Fréchet differentiation for the numerical solution of boundary-value problems

    A new solver for nonlinear boundary-value problems (BVPs) in Matlab is presented, based on the Chebfun software system for representing functions and operators automatically as numerical objects. The solver implements Newton's method in function space, where instead of the usual Jacobian matrices, the derivatives involved are Fréchet derivatives. A major novelty of this approach is the application of automatic differentiation (AD) techniques to compute the operator-valued Fréchet derivatives in the continuous context. Other novelties include the use of anonymous functions and numbering of each variable to enable a recursive, delayed evaluation of derivatives with forward mode AD. The AD techniques are applied within a new Chebfun class called chebop which allows users to set up and solve nonlinear BVPs in a few lines of code, using the "nonlinear backslash" operator (\). This framework enables one to study the behaviour of Newton's method in function space.
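    As a rough illustration of the Newton iteration itself, the sketch below is an ordinary finite-difference discretization in numpy, not the chebop machinery: the matrix J plays the role that the Fréchet derivative plays in the continuous setting. The model problem u'' = exp(u), u(0) = u(1) = 0, is an assumption chosen for brevity.

```python
import numpy as np

n = 200                               # number of interior grid points (assumption)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# Standard second-difference matrix for u'' with homogeneous Dirichlet conditions.
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2

u = np.zeros(n)                       # initial guess
for _ in range(20):
    F = D2 @ u - np.exp(u)            # discretized residual of u'' - exp(u) = 0
    J = D2 - np.diag(np.exp(u))       # discrete analogue of the Fréchet derivative
    du = np.linalg.solve(J, -F)       # Newton correction
    u += du
    if np.linalg.norm(du, np.inf) < 1e-10:
        break
```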

    On solving trust-region and other regularised subproblems in optimization

    The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases, including in the notorious hard case. Numerical experiments validate the effectiveness of our approach. The resulting software is available as packages TRS and RQS as part of the GALAHAD optimization library, and is especially designed for large-scale problems.
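    The following is a minimal sketch (simplified and assumed; not the GALAHAD TRS/RQS code) of the classical Gay/Moré–Sorensen approach the abstract builds on: solve a sequence of factorized systems (H + λI)s = −g and apply a Newton-type update to the secular equation ‖s(λ)‖ = Δ. The hard case and the high-order enhancements described in the paper are omitted.

```python
import numpy as np

def trust_region_subproblem(H, g, Delta, max_iter=50, tol=1e-8):
    """Approximately minimize g.T s + 0.5 s.T H s subject to ||s|| <= Delta."""
    n = len(g)
    lam = 0.0
    s = np.zeros(n)
    for _ in range(max_iter):
        try:
            L = np.linalg.cholesky(H + lam * np.eye(n))
        except np.linalg.LinAlgError:
            lam = 2.0 * lam + 1e-4        # crude safeguard until H + lam*I is positive definite
            continue
        s = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        norm_s = np.linalg.norm(s)
        if lam == 0.0 and norm_s <= Delta:
            return s, lam                 # interior (unconstrained) minimizer
        if abs(norm_s - Delta) <= tol * Delta:
            return s, lam                 # boundary solution found
        w = np.linalg.solve(L, s)         # reuse the Cholesky factor
        # Newton step on the secular equation 1/Delta - 1/||s(lam)|| = 0
        lam = max(lam + (norm_s / np.linalg.norm(w))**2 * (norm_s - Delta) / Delta, 0.0)
    return s, lam

H = np.array([[4.0, 1.0], [1.0, 2.0]])    # toy data (assumption)
g = np.array([1.0, -1.0])
s, lam = trust_region_subproblem(H, g, Delta=0.1)
print(np.linalg.norm(s), lam)             # the step should lie on the trust-region boundary
```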

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
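    The core idea can be caricatured outside C++. The sketch below is a Python analogue constructed for illustration (not the Trilinos templates): the "physics" is written once, generically over the scalar type, and an operator-overloading dual-number type is swapped in so the same code path also yields derivatives.

```python
# Dual numbers via operator overloading: the generic residual below runs unchanged
# on floats (values only) or on Dual objects (values plus derivatives).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def residual(u, k):
    """Toy 'physics' written once for any scalar type (float or Dual)."""
    return k * u * u + 2.0 * u + 1.0

print(residual(3.0, 0.5))                 # plain float path: 11.5
r = residual(Dual(3.0, 1.0), 0.5)         # seed du/du = 1 to extract d(residual)/du
print(r.val, r.der)                       # 11.5 and 2*k*u + 2 = 5.0
```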

    The chebop system for automatic solution of differential equations

    In MATLAB, it would be good to be able to solve a linear differential equation by typing u = L\f, where f, u, and L are representations of the right-hand side, the solution, and the differential operator with boundary conditions. Similarly it would be good to be able to exponentiate an operator with expm(L) or determine eigenvalues and eigenfunctions with eigs(L). A system is described in which such calculations are indeed possible, based on the previously developed chebfun system in object-oriented MATLAB. The algorithms involved amount to spectral collocation methods on Chebyshev grids of automatically determined resolution.
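    A minimal numpy sketch of the underlying spectral collocation step follows, assuming a fixed grid size rather than the automatically determined resolution of the chebfun system; the cheb construction follows Trefethen's Spectral Methods in MATLAB, and the model problem is an assumption for illustration.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))           # diagonal entries via negative row sums
    return D, x

# Solve u'' = exp(4x), u(-1) = u(1) = 0, by "backslash" on the interior rows.
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                    # second-derivative matrix, boundary rows/cols removed
f = np.exp(4.0 * x[1:N])
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, f)           # the discrete analogue of u = L \ f
```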

    Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method

    The partial Schur factorization can be used to represent several eigenpairs of a matrix in a numerically robust way. Different adaptations of the Arnoldi method are often used to compute partial Schur factorizations. We propose here a technique to compute a partial Schur factorization of a nonlinear eigenvalue problem (NEP). The technique is inspired by the algorithm in [8], now called the infinite Arnoldi method. The infinite Arnoldi method is a method designed for NEPs, and can be interpreted as Arnoldi's method applied to a linear infinite-dimensional operator, whose reciprocal eigenvalues are the solutions to the NEP. As a first result we show that the invariant pairs of the operator are equivalent to invariant pairs of the NEP. We characterize the structure of the invariant pairs of the operator and show how one can carry out a modification of the infinite Arnoldi method by respecting the structure. This also allows us to naturally add the feature known as locking. We nest this algorithm with an outer iteration, where the infinite Arnoldi method for a particular type of structured functions is appropriately restarted. The restarting exploits the structure and is inspired by the well-known implicitly restarted Arnoldi method for standard eigenvalue problems. The final algorithm is applied to examples from a benchmark collection, showing that both processing time and memory consumption can be considerably reduced with the restarting technique.
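    For orientation only, here is the standard Arnoldi iteration on a matrix, the linear-algebra building block that the infinite Arnoldi method adapts to an infinite-dimensional linearization of the NEP; this sketch is not the paper's algorithm and omits restarting, locking and the structured functions.

```python
import numpy as np

def arnoldi(A, b, k):
    """k Arnoldi steps: A @ Q[:, :k] = Q @ H with Q orthonormal, H Hessenberg."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# Ritz values (eigenvalues of the square part of H) approximate outer eigenvalues of A;
# a partial Schur factorization can be assembled from converged Ritz pairs.
A = np.diag(np.arange(1.0, 101.0)) + 0.01 * np.random.randn(100, 100)   # toy matrix (assumption)
Q, H = arnoldi(A, np.random.randn(100), 30)
print(np.sort(np.linalg.eigvals(H[:30, :30]).real)[-5:])
```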

    Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids

    Numerical continuation methods for deterministic dynamical systems have been one of the most successful tools in applied dynamical systems theory. Continuation techniques have been employed in all branches of the natural sciences as well as in engineering to analyze ordinary, partial and delay differential equations. Here we show that the deterministic continuation algorithm for equilibrium points can be extended to track information about metastable equilibrium points of stochastic differential equations (SDEs). We stress that we do not develop a new technical tool but that we combine results and methods from probability theory, dynamical systems, numerical analysis, optimization and control theory into an algorithm that augments classical equilibrium continuation methods. In particular, we use ellipsoids defining regions of high concentration of sample paths. It is shown that these ellipsoids and the distances between them can be efficiently calculated using iterative methods that take advantage of the numerical continuation framework. We apply our method to a bistable neural competition model and a classical predator-prey system. Furthermore, we show how global assumptions on the flow can be incorporated - if they are available - by relating numerical continuation, Kramers' formula and Rayleigh iteration.
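    A minimal sketch of the Lyapunov-equation step the abstract alludes to (the toy Jacobian and diffusion matrix are assumptions): near a stable equilibrium x* of dX = f(X) dt + sigma dW, the linearized stationary covariance C solves A C + C A^T + sigma sigma^T = 0 with A = Df(x*), and the concentration ellipsoids are level sets of (x - x*)^T C^{-1} (x - x*). In a continuation setting this solve would be repeated at each point along the equilibrium branch.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])                  # Jacobian Df(x*) at a stable equilibrium (toy data)
sigma = 0.1 * np.eye(2)                      # diffusion matrix (toy data)

# solve_continuous_lyapunov(A, Q) solves A C + C A^T = Q, so pass Q = -sigma sigma^T.
C = solve_continuous_lyapunov(A, -sigma @ sigma.T)

evals, evecs = np.linalg.eigh(C)
# Semi-axes of the one-standard-deviation ellipsoid are sqrt(eigenvalues of C),
# oriented along the corresponding eigenvectors.
print(np.sqrt(evals))
```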