310 research outputs found
Analysis of Nonlinear Systems via Bernstein Expansions
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/106482/1/AIAA2013-4557.pd
Precision analysis for hardware acceleration of numerical algorithms
The precision used in an algorithm affects the error and performance of individual computations, the
memory usage, and the potential parallelism for a fixed hardware budget. However, when migrating
an algorithm onto hardware, the potential improvements that can be obtained by tuning the precision
throughout an algorithm to meet a range or error specification are often overlooked; the major reason
is that it is hard to choose a number system which can guarantee any such specification can be met.
Instead, the problem is mitigated by opting to use IEEE standard double-precision arithmetic so as to be
‘no worse’ than a software implementation. However, flexibility in the number representation is one
of the key factors that can be exploited on reconfigurable hardware such as FPGAs, and ignoring
this potential significantly limits the achievable performance.
In order to optimise the performance of hardware reliably, we require a method that can tractably
calculate tight bounds for the error or range of any variable within an algorithm, but currently only a
handful of methods to calculate such bounds exist, and these either sacrifice tightness or tractability,
whilst simulation-based methods cannot guarantee the given error estimate. This thesis presents a new
method to calculate these bounds, taking into account both input ranges and finite precision effects,
which we show to be, in general, tighter in comparison to existing methods; this in turn can be used to
tune the hardware to the algorithm specifications.
We demonstrate the use of this software to optimise hardware for various algorithms to accelerate
the solution of a system of linear equations, which forms the basis of many problems in engineering
and science, and show that significant performance gains can be obtained by using this new approach in
conjunction with more traditional hardware optimisations.
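To illustrate why precision choice matters, the following sketch (not the thesis's method; `round_to_precision`, `dot_reduced`, and `dot_error_bound` are hypothetical helpers) emulates a dot product in which every operation is rounded to p significand bits, and checks the result against a standard-model a priori error bound with unit roundoff u = 2^-p:

```python
import math

def round_to_precision(x, p):
    """Round x to p significand bits (round-to-nearest), emulating reduced precision."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** p
    return math.ldexp(round(m * scale) / scale, e)

def dot_reduced(a, b, p):
    """Dot product with every multiply and add rounded to p bits."""
    s = 0.0
    for x, y in zip(a, b):
        s = round_to_precision(s + round_to_precision(x * y, p), p)
    return s

def dot_error_bound(a, b, p):
    """A priori bound from the standard model: each rounded operation has
    relative error at most u = 2**-p; a generous first-order bound is
    2 * n * u * sum(|a_i * b_i|)."""
    u = 2.0 ** (-p)
    return 2 * len(a) * u * sum(abs(x * y) for x, y in zip(a, b))
```

Tightening such bounds per variable, rather than assuming double precision everywhere, is exactly the lever the abstract describes for reconfigurable hardware.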
Localization theorems for nonlinear eigenvalue problems
Let $T : \Omega \rightarrow \mathbb{C}^{n \times n}$ be a matrix-valued function
that is analytic on some simply-connected domain $\Omega \subset \mathbb{C}$. A point
$\lambda \in \Omega$ is an eigenvalue if the matrix $T(\lambda)$ is singular.
In this paper, we describe new localization results for nonlinear eigenvalue
problems that generalize Gershgorin's theorem, pseudospectral inclusion
theorems, and the Bauer-Fike theorem. We use our results to analyze three
nonlinear eigenvalue problems: an example from delay differential equations, a
problem due to Hadeler, and a quantum resonance computation.
Comment: Submitted to SIMAX. 22 pages, 11 figures
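To make the setting concrete, here is a minimal sketch (an assumed 2x2 example, not one from the paper) of a delay-type nonlinear eigenvalue problem $T(\lambda) = \lambda I - A - e^{-\lambda} B$: a real eigenvalue is located by bisection on $\det T(\lambda)$, and the Gershgorin-style observation that at an eigenvalue some diagonal entry of $T(\lambda)$ is dominated by its off-diagonal row entry is then checked:

```python
import math

# Hypothetical 2x2 delay example: T(lam) = lam*I - A - exp(-lam)*B
A = [[-2.0, 0.5], [0.5, -1.0]]
B = [[0.3, 0.0], [0.0, 0.2]]

def T(lam):
    e = math.exp(-lam)
    return [[lam - A[0][0] - e * B[0][0], -A[0][1] - e * B[0][1]],
            [-A[1][0] - e * B[1][0], lam - A[1][1] - e * B[1][1]]]

def detT(lam):
    m = T(lam)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def bisect(f, lo, hi, tol=1e-12):
    """Bisection on a sign change of f over [lo, hi]."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With these (assumed) matrices, $\det T$ changes sign on $[-1, 0]$, so an eigenvalue lies there; the paper's localization results bound where such eigenvalues can sit without computing them.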
Multiplicity Estimates: a Morse-theoretic approach
The problem of estimating the multiplicity of the zero of a polynomial when
restricted to the trajectory of a non-singular polynomial vector field, at one
or several points, has been considered by authors in several different fields.
The two best (incomparable) estimates are due to Gabrielov and Nesterenko.
In this paper we present a refinement of Gabrielov's method which
simultaneously improves these two estimates. Moreover, we give a geometric
description of the multiplicity function in terms of certain naturally associated
polar varieties, giving a topological explanation for an asymptotic phenomenon
that was previously obtained by elimination theoretic methods in the works of
Brownawell, Masser and Nesterenko. We also give estimates in terms of Newton
polytopes, strongly generalizing the classical estimates.
Comment: Minor revision; to appear in Duke Math. Journal
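A minimal sketch of what "multiplicity along a trajectory" means, for the special case where the trajectory is itself polynomial (all helper names are hypothetical, and this is far simpler than the general analytic setting the paper treats): restrict the polynomial $p$ to $\gamma(t)$ and read off the order of the first nonvanishing coefficient.

```python
def pmul(a, b):
    """Multiply two univariate polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def ppow(a, k):
    out = [1.0]
    for _ in range(k):
        out = pmul(out, a)
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) +
            (b[i] if i < len(b) else 0.0) for i in range(n)]

def restrict(p, traj):
    """Compose p (a dict {(i, j): coeff} in x, y) with a polynomial trajectory
    traj = (x(t), y(t)), each a coefficient list; returns coefficients in t."""
    out = [0.0]
    for (i, j), c in p.items():
        term = pmul(ppow(traj[0], i), ppow(traj[1], j))
        out = padd(out, [c * v for v in term])
    return out

def multiplicity(coeffs, tol=1e-12):
    """Order of vanishing at t = 0: index of the first nonzero coefficient."""
    for k, c in enumerate(coeffs):
        if abs(c) > tol:
            return k
    return float('inf')
```

For instance, the trajectory of the vector field $(\dot x, \dot y) = (1, 2x)$ through the origin is $\gamma(t) = (t, t^2)$; the polynomial $y^2 - x^5$ restricts to $t^4 - t^5$, giving multiplicity 4, while $y - x^2$ vanishes identically along $\gamma$.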
Rational Krylov for Stieltjes matrix functions: convergence and pole selection
Evaluating the action of a matrix function on a vector, that is computing $f(A)v$, is a
ubiquitous task in applications. When $A$ is large, one
usually relies on Krylov projection methods. In this paper, we provide
effective choices for the poles of the rational Krylov method for approximating
$f(A)v$ when $f$ is either Cauchy-Stieltjes or Laplace-Stieltjes (or, which is
equivalent, completely monotonic) and $A$ is a positive definite
matrix. Relying on the same tools used to analyze the generic situation, we
then focus on the case where $A$ has Kronecker sum structure and $v$ is
obtained by vectorizing a low-rank matrix; this finds application, for instance,
in solving fractional diffusion equations on two-dimensional tensor grids. We
see how to leverage tensorized Krylov subspaces to exploit the Kronecker
structure, and we introduce an error analysis for the numerical approximation of
$f(A)v$. Pole selection strategies with explicit convergence bounds are given also
in this case.
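The Stieltjes structure is what makes shifted linear solves, and hence rational Krylov poles, natural. A naive quadrature sketch, emphatically not the paper's rational Krylov method (helper names are hypothetical, 2x2 case for self-containment): using the Cauchy-Stieltjes representation $z^{-1/2} = \frac{2}{\pi}\int_0^\infty \frac{ds}{z + s^2}$, each quadrature node costs one shifted solve $(A + s^2 I)^{-1} v$.

```python
import math

def solve2(M, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def inv_sqrt_apply(A, v, nodes=4000, L=400.0):
    """Approximate A^{-1/2} v via the Cauchy-Stieltjes representation
    z^{-1/2} = (2/pi) * int_0^inf ds / (z + s^2), discretized with the
    trapezoidal rule on [0, L]; each node is one shifted solve."""
    h = L / nodes
    acc = [0.0, 0.0]
    for k in range(nodes + 1):
        s = k * h
        w = h * (0.5 if k in (0, nodes) else 1.0)
        M = [[A[0][0] + s * s, A[0][1]],
             [A[1][0], A[1][1] + s * s]]
        x = solve2(M, v)
        acc[0] += w * x[0]
        acc[1] += w * x[1]
    return [(2.0 / math.pi) * a for a in acc]
```

This brute-force quadrature needs thousands of solves; the point of carefully chosen rational Krylov poles is to reach comparable accuracy with a handful of shifted solves.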
Computing hypergeometric functions rigorously
We present an efficient implementation of hypergeometric functions in
arbitrary-precision interval arithmetic. The functions ${}_0F_1$, ${}_1F_1$, ${}_2F_1$,
and ${}_2F_0$ (or the Kummer $U$-function) are supported for
unrestricted complex parameters and argument, and by extension, we cover
exponential and trigonometric integrals, error functions, Fresnel integrals,
incomplete gamma and beta functions, Bessel functions, Airy functions, Legendre
functions, Jacobi polynomials, complete elliptic integrals, and other special
functions. The output can be used directly for interval computations or to
generate provably correct floating-point approximations in any format.
Performance is competitive with earlier arbitrary-precision software, and
sometimes orders of magnitude faster. We also partially cover the generalized
hypergeometric function ${}_pF_q$ and computation of high-order parameter
derivatives.
Comment: v2: corrected example in section 3.1; corrected timing data for case
E-G in section 8.5 (table 6, figure 2); adjusted paper size
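A toy version of the idea, far simpler than Arb's rigorous error-bound machinery: sum the ${}_1F_1$ series and stop only once a crude geometric tail estimate falls below tolerance. This `hyp1f1` is a hypothetical sketch valid for modest arguments and generic parameters, not the paper's implementation.

```python
import math

def hyp1f1(a, b, z, tol=1e-15):
    """Truncated series for the confluent hypergeometric function 1F1(a; b; z).
    Term ratio: t_{k+1}/t_k = (a+k) z / ((b+k)(k+1)).  Once |ratio| < 1/2 and
    stays decreasing (true for modest |z|), the tail is bounded by a geometric
    series, crudely estimated here as 2*|term|."""
    term = 1.0
    s = 1.0
    for k in range(10000):
        ratio = (a + k) * z / ((b + k) * (k + 1))
        term *= ratio
        s += term
        if abs(ratio) < 0.5 and 2.0 * abs(term) < tol:
            return s
    raise RuntimeError("series did not converge")
```

As a sanity check, $ {}_1F_1(1; 1; z) = e^z$, and the classical identity $\operatorname{erf}(x) = \frac{2x}{\sqrt{\pi}}\, {}_1F_1(\tfrac12; \tfrac32; -x^2)$ ties the sketch back to one of the special functions the abstract says are covered by extension.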