SWATI: Synthesizing Wordlengths Automatically Using Testing and Induction
In this paper, we present an automated technique SWATI: Synthesizing
Wordlengths Automatically Using Testing and Induction, which uses a combination
of Nelder-Mead-optimization-based testing and induction from examples to
automatically synthesize optimal fixed-point implementations of numerical
routines. The design of numerical software is commonly done using
floating-point arithmetic in design environments such as MATLAB. However, these
designs are often implemented using fixed-point arithmetic for reasons of speed and
efficiency, especially in embedded systems. The fixed-point
implementation reduces implementation cost, provides better performance, and
reduces power consumption. The conversion from floating-point designs to
fixed-point code is subject to two opposing constraints: (i) the word-width of
fixed-point types must be minimized, and (ii) the outputs of the fixed-point
program must be accurate. In this paper, we propose a new solution to this
problem. Our technique takes the floating-point program, a specified accuracy, and
an implementation cost model, and produces a fixed-point program that meets the
specified accuracy at optimal implementation cost. We demonstrate the
effectiveness of our approach on a set of examples from the domains of automated
control, robotics, and digital signal processing.
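To make the word-length trade-off concrete, here is a small illustrative sketch in Python (not SWATI itself, and using hypothetical example coefficients): it quantizes a dot-product routine to a candidate number of fractional bits and estimates the worst output error over random test inputs, which is the kind of testing-based objective a Nelder-Mead-style search could minimize while shrinking the word length.

```python
import random

def to_fixed(x, frac_bits):
    """Quantize a real value to a signed fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def dot_float(coeffs, xs):
    """Reference floating-point evaluation."""
    return sum(c * x for c, x in zip(coeffs, xs))

def dot_fixed(coeffs, xs, frac_bits):
    # Quantize coefficients, inputs, and the accumulator after each step,
    # mimicking a fixed-point datapath with a single fractional word length.
    acc = 0.0
    for c, x in zip(coeffs, xs):
        acc = to_fixed(acc + to_fixed(c, frac_bits) * to_fixed(x, frac_bits), frac_bits)
    return acc

def worst_error(coeffs, frac_bits, trials=1000, seed=0):
    """Largest observed |float - fixed| over random test inputs (testing-based error estimate)."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(trials):
        xs = [rng.uniform(-1.0, 1.0) for _ in coeffs]
        err = max(err, abs(dot_float(coeffs, xs) - dot_fixed(coeffs, xs, frac_bits)))
    return err

coeffs = [0.3125, -0.1171875, 0.046875]  # hypothetical FIR-style coefficients
for bits in (4, 8, 12, 16):
    print(bits, worst_error(coeffs, bits))
```

The smallest bit width whose worst observed error stays below the specified accuracy would be the candidate implementation; a real tool additionally weighs an implementation cost model rather than just counting bits.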
Computing hypergeometric functions rigorously
We present an efficient implementation of hypergeometric functions in
arbitrary-precision interval arithmetic. The functions 0F1, 1F1, 2F1,
and 2F0 (or the Kummer U-function) are supported for
unrestricted complex parameters and argument, and by extension, we cover
exponential and trigonometric integrals, error functions, Fresnel integrals,
incomplete gamma and beta functions, Bessel functions, Airy functions, Legendre
functions, Jacobi polynomials, complete elliptic integrals, and other special
functions. The output can be used directly for interval computations or to
generate provably correct floating-point approximations in any format.
Performance is competitive with earlier arbitrary-precision software, and
sometimes orders of magnitude faster. We also partially cover the generalized
hypergeometric function pFq and the computation of high-order parameter
derivatives.
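For readers who want to experiment with the same class of functions in Python, the following snippet uses mpmath, a different arbitrary-precision library that does not provide the interval certification described in the paper; the parameter values are arbitrary illustrations.

```python
from mpmath import mp, hyp1f1, hyp2f1, mpc

mp.dps = 50  # work with 50 significant decimal digits

# Confluent hypergeometric function 1F1(a; b; z) at complex parameters and argument.
print(hyp1f1(mpc(2, 3), mpc(-1, 4), mpc(0.5, -0.25)))

# Gauss hypergeometric function 2F1(a, b; c; z), here evaluated outside the unit disk,
# where the library continues the series analytically.
print(hyp2f1(0.25, 0.75, 1.5, -2.0))
```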
Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms
for evaluating multivariate polynomials, and more generally for accurate
numerical linear algebra with structured matrices. By "accurate" we mean that
the computed answer has relative error less than 1, i.e., has some correct
leading digits. We also address efficiency, by which we mean algorithms that
run in polynomial time in the size of the input. Our results will depend
strongly on the model of arithmetic: Most of our results will use the so-called
Traditional Model (TM). We give a set of necessary and sufficient conditions to
decide whether a high accuracy algorithm exists in the TM, and describe
progress toward a decision procedure that will take any problem and provide
either a high accuracy algorithm or a proof that none exists. When no accurate
algorithm exists in the TM, it is natural to extend the set of available
accurate operations by a library of additional operations, such as x+y+z, dot
products, or indeed any enumerable set, which could then be used to build
further accurate algorithms. We show how our accurate algorithms and decision
procedure for finding them extend to this case. Finally, we address other
models of arithmetic, and the relationship between (im)possibility in the TM
and (in)efficient algorithms operating on numbers represented as bit strings.
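The role of an extra "accurate operation" can be illustrated with a small Python example, which is not from the paper: naive rounded additions can cancel away every correct digit, while an exactly rounded sum, here the standard library's math.fsum standing in for a library primitive such as an accurate multi-term sum or dot product, recovers a relative error below 1.

```python
import math

# Terms engineered so the true sum is 1.0 but naive floating-point
# summation cancels catastrophically.
terms = [1e16, 1.0, -1e16]

naive = 0.0
for t in terms:
    naive += t               # ordinary rounded additions: the 1.0 is absorbed and lost

accurate = math.fsum(terms)  # exactly rounded sum of the whole list

true_value = 1.0
print("naive:", naive, "relative error:", abs(naive - true_value) / true_value)
print("fsum :", accurate, "relative error:", abs(accurate - true_value) / true_value)
```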
Cartographic Algorithms: Problems of Implementation and Evaluation and the Impact of Digitising Errors
Cartographic generalisation remains one of the outstanding challenges in digital cartography and Geographical Information Systems (GIS). It is generally assumed that computerisation will lead to the removal of spurious variability introduced by the subjective decisions of individual cartographers. This paper demonstrates through an in‐depth study of a line simplification algorithm that computerisation introduces its own sources of variability. The algorithm, referred to as the Douglas‐Peucker algorithm in cartographic literature, has been widely used in image processing, pattern recognition and GIS for some 20 years. An analysis of this algorithm and study of some implementations in wide use identify the presence of variability resulting from the subjective decisions of software implementors. Spurious variability in software complicates the processes of evaluation and comparison of alternative algorithms for cartographic tasks. No doubt, variability in implementation could be removed by rigorous study and specification of algorithms. Such future work must address the presence of digitising error in cartographic data. Our analysis suggests that it would be difficult to adapt the Douglas‐Peucker algorithm to cope with digitising error without altering the method.
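For reference, here is a minimal recursive rendering of the Douglas‐Peucker method in Python, written as a generic sketch rather than any of the implementations the paper surveys; details such as the distance measure and whether the comparison is strict are exactly the implementor decisions identified above as sources of variability.

```python
import math

def _perp_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:               # degenerate chord: fall back to point distance
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping vertices farther than `tolerance` from the current chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the two endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_distance(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax > tolerance:                  # strict '>' here; '>=' is a legitimate variant
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right          # drop the duplicated split point
    return [points[0], points[-1]]        # everything else lies within tolerance of the chord

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(douglas_peucker(line, 1.0))
```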