Compact Finite Differences and Cubic Splines
In this paper I uncover and explain---using contour integrals and
residues---a connection between cubic splines and a popular compact finite
difference formula. The connection is that on a uniform mesh the simplest
Pad\'e scheme for generating fourth-order accurate compact finite differences
gives \textsl{exactly} the derivatives at the interior nodes needed to
guarantee twice-continuous differentiability for cubic splines. I found this
connection surprising, because the two problems being solved are different. I
also introduce an apparently new spline-like interpolant that I call a compact
cubic interpolant; this is similar to one introduced in 1972 by Swartz and
Varga, but has higher order accuracy at the edges. I argue that for mildly
nonuniform meshes the compact cubic approach offers some potential advantages,
and even for uniform meshes offers a simple way to treat the edge conditions,
relieving the user of the burden of deciding to use one of the three standard
options: free (natural), complete (clamped), or "not-a-knot" conditions.
Finally, I establish that the matrices defining the compact cubic splines
(equivalently, the fourth-order compact finite difference formul\ae) are
positive definite, and in fact totally nonnegative, if all mesh widths have the
same sign.
Comment: Revised and corrected version. 25 pages, 4 figures; keywords: compact
finite differences; cubic splines; barycentric form; compact cubic splines;
contour integral methods; totally nonnegative matrices
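A minimal sketch of the identity described in the abstract, on a uniform mesh with clamped (complete) end data: the fourth-order Padé compact finite difference system $s'_{i-1} + 4s'_i + s'_{i+1} = 3(f_{i+1} - f_{i-1})/h$ is the same tridiagonal system that fixes the interior derivatives of a $C^2$ cubic spline. The test function and mesh size here are arbitrary choices for illustration.

```python
import numpy as np

# The 4th-order Pade compact scheme on a uniform mesh,
#     s'_{i-1} + 4 s'_i + s'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h,
# is exactly the tridiagonal system for the interior derivatives of a
# C^2 cubic spline.  Endpoint derivatives are supplied exactly here
# (complete/clamped data) to keep the sketch short.
n, a, b = 41, 0.0, np.pi
x = np.linspace(a, b, n)
h = x[1] - x[0]
f, df = np.sin(x), np.cos(x)

# tridiagonal system for the n-2 interior derivatives
A = (np.diag(4.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
rhs = 3.0 * (f[2:] - f[:-2]) / h
rhs[0] -= df[0]      # move the known endpoint derivatives to the right side
rhs[-1] -= df[-1]
s = np.linalg.solve(A, rhs)

err = np.max(np.abs(s - df[1:-1]))
print(err)           # fourth-order small
```

Note that the system matrix is strictly diagonally dominant (4 on the diagonal versus off-diagonal sums of at most 2), consistent with the positive definiteness claim in the abstract.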
Optimal Solution of Linear Ordinary Differential Equations by Conjugate Gradient Method
Solving initial value problems and boundary value problems of linear ordinary
differential equations (ODEs) plays an important role in many applications.
There are various numerical methods and solvers that produce approximate
solutions represented by values at mesh points. However, little work on optimal
solutions that minimize the residual can be found in the literature. In this
paper, we first use Hermite cubic spline interpolation at mesh points to
represent the solution, then we define the residual error as the square of the
L2 norm of the residual obtained by substituting the interpolation solution
back into the ODE. Thus, solving the ODE is reduced to an optimization problem
over a certain solution space, which can be solved by the conjugate gradient
method, taking advantage of the sparsity of the corresponding matrix. The
examples of IVPs and BVPs in the paper show that this method can find a
solution with smaller global error without additional mesh points.
Comment: 9 pages, 6 figures
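A minimal sketch of the idea, not the paper's Hermite-cubic formulation: discretize a model IVP, write the collocation equations as $Ay = b$, and minimize the residual $\|Ay - b\|^2$ by running conjugate gradient on the sparse normal equations. The model problem, mesh, and trapezoidal discretization here are illustrative assumptions.

```python
import numpy as np

# Model problem: y' = -y, y(0) = 1 on [0, 1], trapezoidal collocation.
# Equation i:  (y_i - y_{i-1})/h + (y_i + y_{i-1})/2 = 0.
n = 50
h = 1.0 / n
A = np.zeros((n, n))
b = np.zeros(n)
lo, hi = -1.0 / h + 0.5, 1.0 / h + 0.5
for i in range(n):
    A[i, i] = hi
    if i == 0:
        b[0] = -lo * 1.0          # y_0 = 1 moved to the right-hand side
    else:
        A[i, i - 1] = lo

M, c = A.T @ A, A.T @ b            # normal equations (SPD, sparse structure)

def conjugate_gradient(M, c, tol=1e-10, maxit=500):
    """Textbook CG for the SPD system M x = c."""
    x = np.zeros_like(c)
    r = c - M @ x
    p = r.copy()
    rs = r @ r
    nc = np.linalg.norm(c)
    for _ in range(maxit):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * nc:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

y = conjugate_gradient(M, c)
t = np.linspace(h, 1.0, n)
err = np.max(np.abs(y - np.exp(-t)))
print(err)   # O(h^2) for this trapezoidal sketch
```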
Narayana, Mandelbrot, and A New Kind of Companion Matrix
We demonstrate a new kind of companion matrix, for polynomials of the form
$c(\lambda) = \lambda a(\lambda)b(\lambda) + c_0$, where upper Hessenberg
companions are known for the polynomials $a(\lambda)$ and $b(\lambda)$. This
construction can generate companion matrices with smaller entries than the
Fiedler or Frobenius forms. This generalizes Piers Lawrence's Mandelbrot
companion matrix. We motivate the construction by use of Narayana-Mandelbrot
polynomials, which are also new to this paper.
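For orientation, a hedged sketch of the Mandelbrot polynomials behind Lawrence's construction, taken here in the common normalization $p_0(c) = 0$, $p_{n+1}(c) = p_n(c)^2 + c$, whose roots are centers of hyperbolic components of the Mandelbrot set. The point of the companion-matrix work is a structured matrix with small entries; this sketch instead just expands coefficients and calls `np.roots` (the ordinary Frobenius companion) for illustration.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def mandelbrot_poly(n):
    """Coefficients (increasing degree) of p_n, with p_0 = 0, p_{k+1} = p_k^2 + c."""
    p = np.array([0.0])
    for _ in range(n):
        p = P.polyadd(P.polymul(p, p), np.array([0.0, 1.0]))  # p^2 + c
    return p

p3 = mandelbrot_poly(3)        # degree 4: c^4 + 2c^3 + c^2 + c
roots = np.roots(p3[::-1])     # np.roots expects decreasing-degree order
print(sorted(roots.real[np.abs(roots.imag) < 1e-9]))
```

The real roots of $p_3$ are $0$ and the period-3 component center near $-1.7549$.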
Differentiation Matrices for Univariate Polynomials
We collect here elementary properties of differentiation matrices for
univariate polynomials expressed in various bases, including orthogonal
polynomial bases and non-degree-graded bases such as Bernstein bases and
Lagrange \& Hermite interpolational bases.
Comment: 14 pages, two figures
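A short sketch of one such object: the differentiation matrix in a Lagrange interpolational basis, built from the barycentric weights $w_j = 1/\prod_{k\neq j}(x_j - x_k)$. Applied to the values of any polynomial of degree less than the number of nodes, it returns the exact derivative values at the nodes.

```python
import numpy as np

def lagrange_diff_matrix(x):
    """Differentiation matrix in the Lagrange basis at nodes x,
    via barycentric weights: D[i,j] = (w_j/w_i)/(x_i - x_j) for i != j,
    with diagonal entries chosen so each row annihilates constants."""
    n = len(x)
    w = np.array([1.0 / np.prod([x[j] - x[k] for k in range(n) if k != j])
                  for j in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])
    return D

x = np.array([0.0, 0.3, 0.7, 1.0])
D = lagrange_diff_matrix(x)
dp = D @ x**2          # differentiate p(x) = x^2 from its nodal values
print(dp)              # equals 2x at the nodes, up to rounding
```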
Minimal height companion matrices for Euclid polynomials
We define Euclid polynomials $E_{k+1}(\lambda) = E_k(\lambda)\left(E_k(\lambda)
- 1\right) + 1$ with $E_1(\lambda) = \lambda + 1$, in analogy to the Euclid
numbers $e_{k+1} = e_k(e_k - 1) + 1$, $e_1 = 2$. We show how to construct
companion matrices $\boldsymbol{E}_k$, so $E_k(\lambda) = \det(\lambda
\boldsymbol{I} - \boldsymbol{E}_k)$, of height 1 (and thus of minimal height
over all integer companion matrices for $E_k(\lambda)$). We prove various
properties of these objects, and give experimental confirmation of some
unproved properties.
Comment: 15 pages, 7 figures
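A hedged sketch of the recurrences only, not the companion construction: the Euclid numbers (Sylvester's sequence) satisfy $e_{k+1} = e_k(e_k-1)+1$ with $e_1 = 2$, and the polynomial analogue assumed here is $E_{k+1} = E_k(E_k - 1) + 1$ with $E_1 = z + 1$, so that $E_k(1) = e_k$.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def euclid_numbers(n):
    """First n Euclid numbers: 2, 3, 7, 43, ..."""
    e = [2]
    for _ in range(n - 1):
        e.append(e[-1] * (e[-1] - 1) + 1)
    return e

def euclid_poly(k):
    """Coefficients (increasing degree) of E_k under the assumed recurrence."""
    E = np.array([1.0, 1.0])                       # E_1 = 1 + z
    for _ in range(k - 1):
        E = P.polyadd(P.polymul(E, P.polysub(E, [1.0])), [1.0])
    return E

print(euclid_numbers(4))       # [2, 3, 7, 43]
print(euclid_poly(3))          # 1 + z + 2z^2 + 2z^3 + z^4
```

Note the integer coefficients, and that evaluating $E_3$ at $z = 1$ recovers $e_3 = 7$.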
Pure tone modes for a 5:3 elliptic drum
The paper exhibits several standing modes of a 5:3 elliptic drum computed
using Mathieu functions. To match the boundary conditions, I used Newton's
method on the appropriate modified Mathieu equation using the Squire-Trapp
formula for computing derivatives. I tabulate the requisite values of the
parameter $q$ for these low-frequency modes.
Comment: 12 pages; 56 figures; two tables
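The Squire-Trapp formula mentioned above is the complex-step derivative: $f'(x) \approx \operatorname{Im} f(x + \mathrm{i}h)/h$, which has no subtractive cancellation and so tolerates extremely small $h$. A minimal demonstration on an analytic test function (not on the modified Mathieu equation itself):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Squire-Trapp complex-step derivative of an analytic function f."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
approx = complex_step(f, x0)
print(approx - exact)    # near machine epsilon
```

Because there is no subtraction of nearly equal quantities, the step $h = 10^{-30}$ is perfectly safe, unlike a finite-difference quotient.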
Inverse Cubic Iteration
There are thousands of papers on rootfinding for nonlinear scalar equations.
Here is one more, to
talk about an apparently new method, which I call ``Inverse Cubic Iteration''
(ICI) in analogy to the Inverse Quadratic Iteration in Richard Brent's zeroin
method. The possibly new method is based on a cubic blend of tangent-line
approximations for the inverse function. We rewrite this iteration for
numerical stability as an average of two Newton steps and a secant step: only
one new function evaluation and derivative evaluation is needed for each step.
The total cost of the method is therefore only trivially more than Newton's
method, and we will see that it has order $1 + \sqrt{3} \approx 2.73$, thus
ensuring that to achieve a given accuracy it usually takes fewer steps than
Newton's method while using essentially the same effort per step.
Comment: 12 pages, 4 figures
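A sketch of the idea behind ICI: fit a cubic Hermite interpolant to the *inverse* function through the two most recent iterates (a blend of their tangent lines) and evaluate it at $y = 0$. This is the direct form; the paper's numerically stabilized rewrite as an average of Newton and secant steps is not reproduced here, and the test problem is an arbitrary choice.

```python
import numpy as np

def ici_step(x0, x1, f, df):
    """One inverse-cubic step: Hermite cubic for the inverse function
    through (f(x0), x0) and (f(x1), x1), evaluated at y = 0."""
    f0, f1 = f(x0), f(x1)
    t = (0.0 - f0) / (f1 - f0)
    h00 = 2*t**3 - 3*t**2 + 1          # standard cubic Hermite basis
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    dy = f1 - f0
    return h00*x0 + h10*dy/df(x0) + h01*x1 + h11*dy/df(x1)

f = lambda x: x*x - 2.0
df = lambda x: 2.0*x
x0, x1 = 1.0, 2.0
for _ in range(20):
    if abs(f(x1)) < 1e-13:
        break
    x0, x1 = x1, ici_step(x0, x1, f, df)
print(x1 - np.sqrt(2.0))
```

Each step reuses the previous point's function and derivative values, so the cost per step matches Newton's method, as the abstract notes.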
Stieltjes, Poisson and other integral representations for functions of Lambert $W$
We show that many functions containing the Lambert $W$ function are Stieltjes
functions. Explicit Stieltjes integrals are given for several such functions. We
also prove a generalization of a conjecture of Jackson, Procacci & Sokal.
Integral representations of $W$ and related functions are also given which are
associated with the properties of their being Pick or Bernstein functions.
Representations based on Poisson and Burniston--Siewert integrals are given as
well.
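For orientation, one common normalization of a Stieltjes function (variants exist, and the paper may use a different one): nonnegative constants $a$, $b$ and a positive measure $\mu$ on $(0,\infty)$ with $\int_0^\infty \mathrm{d}\mu(t)/(1+t) < \infty$, such that

```latex
f(z) \;=\; \frac{a}{z} \;+\; b \;+\; \int_0^{\infty} \frac{\mathrm{d}\mu(t)}{z + t},
\qquad z \in \mathbb{C} \setminus (-\infty, 0].
```

Such functions are automatically Pick (Nevanlinna-Herglotz) on the upper half-plane, which is the connection to the Pick and Bernstein properties mentioned above.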
Symbolic-Numeric Integration of Rational Functions
We consider the problem of symbolic-numeric integration of symbolic
functions, focusing on rational functions. Using a hybrid method allows the
stable yet efficient computation of symbolic antiderivatives while avoiding
issues of ill-conditioning to which numerical methods are susceptible. We
propose two alternative methods for exact input that compute the rational part
of the integral using Hermite reduction and then compute the transcendental
part two different ways using a combination of exact integration and efficient
numerical computation of roots. The symbolic computation is done within BPAS,
or Basic Polynomial Algebra Subprograms, which is a highly optimized
environment for polynomial computation on parallel architectures, while the
numerical computation is done using the highly optimized multiprecision
rootfinding package MPSolve. We show that both methods are forward and backward
stable in a structured sense, and that, away from singularities, tolerance
proportionality is achieved by adjusting the precision of the rootfinding
tasks.
Comment: 25 pages, 4 figures; added a footnote and page numbers
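A sketch of the numerical half of such a method, on a toy example: for a proper rational function $p/q$ with only simple poles, the transcendental part of the integral is $\sum_i r_i \log(x - a_i)$, where the $a_i$ are the roots of $q$ and $r_i = p(a_i)/q'(a_i)$ are the residues. The paper pairs Hermite reduction with MPSolve; here `np.roots` stands in purely for illustration.

```python
import numpy as np

p = np.array([1.0])            # numerator 1          (decreasing degree)
q = np.array([1.0, 3.0, 2.0])  # q = x^2 + 3x + 2 = (x+1)(x+2)
a = np.roots(q)                # poles
dq = np.polyder(q)
r = np.polyval(p, a) / np.polyval(dq, a)   # residues at the simple poles

# Check by differentiation: sum_i r_i/(x - a_i) must reproduce p/q,
# since d/dx [sum_i r_i log(x - a_i)] = sum_i r_i/(x - a_i).
x = 1.0
deriv = np.sum(r / (x - a))
print(deriv, np.polyval(p, x) / np.polyval(q, x))   # both 1/6
```

Here the antiderivative is $\log(x+1) - \log(x+2)$; in the hybrid setting the same residue computation is done at adjustable precision to control the backward error.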
Revisiting Gilbert Strang's "A Chaotic Search for $i$"
In the paper "A Chaotic Search for $i$"~(\cite{strang1991chaotic}), Strang
completely explained the behaviour of Newton's method when using real initial
guesses on $x^2 + 1$, which has only a pair of complex roots, $\pm i$. He
explored an exact symbolic formula for the iteration, namely $x_n =
\cot\left(2^n \theta_0\right)$ with $x_0 = \cot\theta_0$, which is valid in
exact arithmetic. In this paper, we extend this to higher-order Householder
methods, which include Halley's method, and to the secant method. Two formulae
of the form $x_n = \cot\theta_n$ are provided: one for the order-$m$
Householder methods, with $\theta_n = m^n \theta_0$, and one for the secant
method, with $\theta_n = \theta_{n-1} + \theta_{n-2}$. The asymptotic behaviour
and periodic character are illustrated by experimental computation. We show
that other methods (Schr\"{o}der iterations of the first kind) are generally
not so simple. We also explain an old method that can be used to allow Maple's
\textsl{Fractals[Newton]} package to visualize general one-step iterations by
disguising them as Newton iterations.
Comment: 22 pages, 11 figures
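A numerical check of Strang's closed form: Newton's method on $x^2 + 1 = 0$ reads $x_{n+1} = (x_n - 1/x_n)/2$, and with $x_0 = \cot\theta_0$ this is exactly the cotangent double-angle formula, so $x_n = \cot(2^n\theta_0)$ in exact arithmetic (here, up to rounding error). The starting angle is arbitrary.

```python
import numpy as np

theta0 = 0.377             # arbitrary angle, avoiding multiples of pi
x = 1.0 / np.tan(theta0)   # x_0 = cot(theta_0)
for n in range(1, 6):
    x = 0.5 * (x - 1.0 / x)               # one Newton step for x^2 + 1
    closed = 1.0 / np.tan(2**n * theta0)  # Strang's closed form cot(2^n theta_0)
    print(n, x - closed)                  # ~ rounding error, growing chaotically
```

The angle doubling is precisely why the real iteration is chaotic: binary digits of $\theta_0/\pi$ are shifted out one per step.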