Ten Digit Algorithms
This paper was presented as the A R Mitchell Lecture at the 2005 Dundee Biennial Conference on Numerical Analysis, 27 June 2005
Predictions for Scientific Computing Fifty Years from Now
This essay is adapted from a talk given on June 17, 1998, at the conference "Numerical Analysis and Computers - 50 Years of Progress", held at the University of Manchester, England, in commemoration of the 50th anniversary of the Mark 1 computer
Householder triangularization of a quasimatrix
A standard algorithm for computing the QR factorization of a matrix A is Householder triangularization. Here this idea is generalized to the situation in which A is a quasimatrix, that is, a “matrix” whose “columns” are functions defined on an interval [a,b]. Applications are mentioned to quasimatrix least-squares fitting, singular value decomposition, and determination of ranks, norms, and condition numbers, and numerical illustrations are presented using the chebfun system
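As an illustration of the familiar matrix case that the quasimatrix algorithm generalizes, here is a minimal Householder triangularization in Python. This is an illustrative sketch only, not the chebfun implementation; the function name `householder_qr` is ours.

```python
import numpy as np

def householder_qr(A):
    # Householder triangularization of A (m x n, m >= n): A = Q R,
    # Q with orthonormal columns (m x n), R upper triangular (n x n).
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        # Reflect column k below the diagonal onto a multiple of e1;
        # copysign picks the sign that avoids cancellation.
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0:          # column already triangularized
            continue
        v /= norm_v
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # apply reflector H_k
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)    # accumulate Q = H_1 ... H_n
    return Q[:, :n], R[:n, :]
```

In the quasimatrix setting of the paper, the columns of A are functions, inner products become integrals over [a,b], and R remains an ordinary n x n triangular matrix.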
Is Gauss quadrature better than Clenshaw-Curtis?
We consider the question of whether Gauss quadrature, which is very famous, is more powerful than the much simpler Clenshaw-Curtis quadrature, which is less well-known. Seven-line MATLAB codes are presented that implement both methods, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized. Theorems are given to explain this effect. First, following Elliott and O'Hara and Smith in the 1960s, the phenomenon is explained as a consequence of aliasing of coefficients in Chebyshev expansions. Then another explanation is offered based on the interpretation of a quadrature formula as a rational approximation of log((z+1)/(z-1)) in the complex plane. Gauss quadrature corresponds to Padé approximation at z = ∞. Clenshaw-Curtis quadrature corresponds to an approximation whose order of accuracy at z = ∞ is only half as high, but which is nevertheless equally accurate near [-1,1]
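The seven-line MATLAB codes are not reproduced here, but the comparison is easy to sketch in Python. The `clencurt` routine below follows the standard weight formula; the Gauss rule comes from NumPy's `leggauss`, and the integrand |x|^3 (our choice, not from the abstract) has the limited smoothness for which the factor-of-2 advantage tends to disappear.

```python
import numpy as np

def clencurt(n):
    # Clenshaw-Curtis nodes and weights on [-1, 1] (n+1 Chebyshev points).
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:n]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

f = lambda x: np.abs(x) ** 3           # exact integral over [-1,1] is 1/2
x, w = clencurt(30)
err_cc = abs(w @ f(x) - 0.5)           # Clenshaw-Curtis, 31 points
xg, wg = np.polynomial.legendre.leggauss(31)
err_gauss = abs(wg @ f(xg) - 0.5)      # Gauss, 31 points
```

For such integrands the two errors come out at a comparable order of magnitude, which is the phenomenon the paper's theorems explain.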
Ten Digit Problems
Most quantitative mathematical problems cannot be solved exactly, but there are powerful algorithms for solving many of them numerically to a specified degree of precision like ten digits or ten thousand. In this article three difficult problems of this kind are presented, and the story is told of the SIAM 100-Dollar, 100-Digit Challenge. The twists and turns along the way illustrate some of the flavor of algorithmic continuous mathematics
Numerical Analysis
Acknowledgements: This article will appear in the forthcoming Princeton Companion to Mathematics, edited by Timothy Gowers with June Barrow-Green, to be published by Princeton University Press.

In preparing this essay I have benefitted from the advice of many colleagues who corrected a number of errors of fact and emphasis. I have not always followed their advice, however, preferring as one friend put it, to "put my head above the parapet". So I must take full responsibility for errors and omissions here.

With thanks to: Aurelio Arranz, Alexander Barnett, Carl de Boor, David Bindel, Jean-Marc Blanc, Mike Bochev, Folkmar Bornemann, Richard Brent, Martin Campbell-Kelly, Sam Clark, Tim Davis, Iain Duff, Stan Eisenstat, Don Estep, Janice Giudice, Gene Golub, Nick Gould, Tim Gowers, Anne Greenbaum, Leslie Greengard, Martin Gutknecht, Raphael Hauser, Des Higham, Nick Higham, Ilse Ipsen, Arieh Iserles, David Kincaid, Louis Komzsik, David Knezevic, Dirk Laurie, Randy LeVeque, Bill Morton, John C Nash, Michael Overton, Yoshio Oyanagi, Beresford Parlett, Linda Petzold, Bill Phillips, Mike Powell, Alex Prideaux, Siegfried Rump, Thomas Schmelzer, Thomas Sonar, Hans Stetter, Gil Strang, Endre Süli, Defeng Sun, Mike Sussman, Daniel Szyld, Garry Tee, Dmitry Vasilyev, Andy Wathen, Margaret Wright and Steve Wright
Gaussian elimination as an iterative algorithm
Gaussian elimination (GE) for solving an n x n linear system of equations is the archetypical direct method of numerical linear algebra, as opposed to iterative. In this note we want to point out that GE has an iterative side too
Evaluating matrix functions for exponential integrators via Carathéodory-Fejér approximation and contour integrals
Among the fastest methods for solving stiff PDEs are exponential integrators, which require the evaluation of f(A), where A is a negative definite matrix and f is the exponential function or one of the related "phi functions" such as phi_1(z) = (e^z - 1)/z. Building on previous work by Trefethen and Gutknecht, Gonchar and Rakhmanov, and Lu, we propose two methods for the fast evaluation of f(A) that are especially useful when shifted systems can be solved efficiently, e.g. by a sparse direct solver. The first method is based on best rational approximations to f on the negative real axis computed via the Carathéodory-Fejér procedure, and we conjecture that the accuracy scales as (9.28903...)^(-2n), where n is the number of complex matrix solves. In particular, three matrix solves suffice to evaluate f(A) to approximately six digits of accuracy. The second method is an application of the trapezoid rule on a Talbot-type contour
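The trapezoid-rule idea behind the second method can be sketched for the plain exponential. The sketch below is ours: it uses a circular contour enclosing the spectrum rather than a true Talbot contour, and dense solves in place of a sparse direct solver, but each quadrature node costs exactly one shifted "matrix solve" as in the paper.

```python
import numpy as np
from scipy.linalg import expm, solve

def expm_b_contour(A, b, center=-2.0, radius=2.5, N=32):
    # e^A b = (1/2*pi*i) \oint e^z (zI - A)^{-1} b dz, approximated by the
    # trapezoid rule on the circle center + radius*e^{i*theta}, which must
    # enclose the spectrum of A (contour parameters here are ad hoc).
    n = len(b)
    theta = 2 * np.pi * (np.arange(N) + 0.5) / N
    acc = np.zeros(n, dtype=complex)
    for t in theta:
        z = center + radius * np.exp(1j * t)
        x = solve(z * np.eye(n) - A, b.astype(complex))  # one shifted solve per node
        acc += np.exp(z) * np.exp(1j * t) * x            # e^z * (dz/dtheta)/(i*r)
    return (radius / N) * acc.real   # imaginary parts cancel for real A, b
```

Because the integrand is periodic and analytic in a neighborhood of the contour, the trapezoid rule converges geometrically in N, which is what makes such contour methods competitive.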
Representation of conformal maps by rational functions
The traditional view in numerical conformal mapping is that once the boundary
correspondence function has been found, the map and its inverse can be
evaluated by contour integrals. We propose that it is much simpler, and 10-1000
times faster, to represent the maps by rational functions computed by the AAA
algorithm. To justify this claim, first we prove a theorem establishing
root-exponential convergence of rational approximations near corners in a
conformal map, generalizing a result of D. J. Newman in 1964. This leads to the
new algorithm for approximating conformal maps of polygons. Then we turn to
smooth domains and prove a sequence of four theorems establishing that in any
conformal map of the unit circle onto a region with a long and slender part,
there must be a singularity or loss of univalence exponentially close to the
boundary, and polynomial approximations cannot be accurate unless of
exponentially high degree. This motivates the application of the new algorithm
to smooth domains, where it is again found to be highly effective
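The AAA algorithm itself is short enough to sketch. The minimal Python version below follows the greedy barycentric formulation of Nakatsukasa, Sète and Trefethen (2018) for scalar data; it omits the pole-cleanup step of the full algorithm, and the function name `aaa` and defaults are ours.

```python
import numpy as np

def aaa(F, Z, tol=1e-13, mmax=50):
    # Greedy AAA rational approximation of data F on sample points Z.
    # Returns a callable evaluating the barycentric rational approximant.
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    J = np.arange(len(Z))                    # indices not yet chosen as support points
    zj = np.empty(0, dtype=complex)          # support points
    fj = np.empty(0, dtype=complex)          # data at support points
    wj = np.empty(0, dtype=complex)          # barycentric weights
    R = np.full(len(Z), F.mean())            # current approximant values on Z
    for _ in range(mmax):
        j = J[np.argmax(np.abs(F[J] - R[J]))]        # greedy: worst residual
        zj = np.append(zj, Z[j]); fj = np.append(fj, F[j])
        J = J[J != j]
        C = 1.0 / (Z[J, None] - zj[None, :])          # Cauchy matrix
        A = (F[J, None] - fj[None, :]) * C            # Loewner matrix
        wj = np.linalg.svd(A)[2].conj().T[:, -1]      # weights: min. singular vector
        R = F.copy()
        R[J] = (C @ (wj * fj)) / (C @ wj)             # barycentric values off support
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    def r(z):
        z = np.asarray(z, dtype=complex)
        C = 1.0 / (z[..., None] - zj)
        with np.errstate(invalid="ignore", divide="ignore"):
            vals = (C @ (wj * fj)) / (C @ wj)
        for k, zk in enumerate(zj):                   # patch 0/0 at support points
            vals = np.where(z == zk, fj[k], vals)
        return vals
    return r
```

In the conformal-mapping application, F holds boundary-correspondence values and the resulting rational function represents the map or its inverse, replacing contour-integral evaluation.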