
    Computation of the inverse Laplace Transform based on a Collocation method which uses only real values

    We develop a numerical algorithm for inverting a Laplace transform (LT), based on a Laguerre polynomial series expansion of the inverse function, under the assumption that the LT is known on the real axis only. The method belongs to the class of Collocation methods (C-methods) and is applicable when the LT function is regular at infinity. Difficulties associated with these problems are due to their intrinsic ill-posedness. The main contribution of this paper is to provide computable estimates of the truncation, discretization, conditioning and roundoff errors introduced by the numerical computation. Moreover, we introduce the pseudoaccuracy, which the numerical algorithm uses to provide uniform scaled accuracy of the computed approximation for any x with respect to e^x. These estimates are then employed to dynamically truncate the series expansion; in other words, the number of terms of the series acts as the regularization parameter that provides the trade-off between errors. To validate the reliability and usability of the algorithm, experiments were carried out on several test functions.
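
    A minimal sketch of the collocation idea (not the authors' algorithm; the basis, collocation abscissae and function names below are illustrative choices): the inverse is expanded in weighted Laguerre functions, whose Laplace transforms are known in closed form, and the coefficients are obtained by matching the given transform at real points only. Truncating the series early, as the abstract notes, acts as regularization, since the collocation system becomes ill-conditioned as the number of terms grows.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

# Sketch only: expand f(t) ~ sum_k c_k * exp(-t/2) * L_k(t).
# The Laplace transform of exp(-t/2) * L_k(t) is (s - 1/2)^k / (s + 1/2)^(k+1),
# so the c_k can be found by matching F(s) at N+1 *real* collocation points.
def invert_laplace_collocation(F, N=12, s_points=None):
    if s_points is None:
        s_points = np.linspace(1.0, 1.0 + N, N + 1)   # real abscissae (illustrative choice)
    k = np.arange(N + 1)
    A = (s_points[:, None] - 0.5) ** k / (s_points[:, None] + 0.5) ** (k + 1)
    c = np.linalg.solve(A, F(s_points))               # ill-conditioned for large N
    return lambda t: np.exp(-t / 2.0) * lagval(t, c)  # truncated Laguerre series

# Example: F(s) = 1/(s+1), whose inverse is f(t) = exp(-t).
f_approx = invert_laplace_collocation(lambda s: 1.0 / (s + 1.0))
t = np.linspace(0.0, 5.0, 6)
print(np.max(np.abs(f_approx(t) - np.exp(-t))))       # small residual for this smooth F
```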

    A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4

    Being able to soundly estimate roundoff errors of finite-precision computations is important for many applications in embedded systems and scientific computing. Due to the discrepancy between continuous reals and discrete finite-precision values, automated static analysis tools are highly valuable for estimating roundoff errors. The results, however, are only as correct as the implementations of the static analysis tools. This paper presents a formally verified and modular tool which fully automatically checks the correctness of finite-precision roundoff error bounds encoded in a certificate. We present implementations of certificate generation and checking for both Coq and HOL4 and evaluate them on a number of examples from the literature. The experiments use both in-logic evaluation in Coq and HOL4 and execution of extracted code outside of the logics: we benchmark unverified OCaml code extracted from Coq and a verified binary generated by CakeML.
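
    As a rough illustration of the certificate idea (the record format and function names here are invented for the example, not the paper's encoding), a certificate can state, for each subexpression, an enclosure of its real value and a claimed absolute roundoff bound; the checker then re-derives a sound bound from the standard (1 + eps) rounding model and only accepts claims it can reproduce. The paper's contribution is performing this check inside Coq/HOL4 so the checker itself is verified; the sketch below is plain, unverified Python.

```python
# Hypothetical certificate: for each node, a real-valued interval [lo, hi] and a
# claimed absolute roundoff bound `err`. The checker recomputes a sound bound for the
# result from the children, using |fl(x + y) - (x + y)| <= eps * |x + y|, and
# rejects any claim smaller than what it can justify.
EPS64 = 2.0 ** -53  # unit roundoff, double precision

def check_add(cert_x, cert_y, cert_z):
    """cert_* = (lo, hi, err); checks the certificate for z = x + y."""
    lox, hix, ex = cert_x
    loy, hiy, ey = cert_y
    loz, hiz, ez = cert_z
    # The interval of the real sum must be contained in the claimed interval.
    if not (loz <= lox + loy and hix + hiy <= hiz):
        return False
    # Propagated error: incoming errors plus one new rounding on the result.
    new_round = EPS64 * max(abs(loz), abs(hiz))
    return ez >= ex + ey + new_round

# Example: x in [0, 1] exact, y in [2, 3] exact, claim |error of x + y| <= 1e-15.
print(check_add((0.0, 1.0, 0.0), (2.0, 3.0, 0.0), (2.0, 4.0, 1e-15)))  # True
```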

    Certified Roundoff Error Bounds using Bernstein Expansions and Sparse Krivine-Stengle Representations

    Floating-point error is an inevitable drawback of implementations on embedded systems. Computing rigorous upper bounds on roundoff errors is absolutely necessary for the validation of critical software. This problem is even more challenging when addressing non-linear programs. In this paper, we propose and compare two new methods, based on Bernstein expansions and sparse Krivine-Stengle representations, adapted from the field of global optimization, to compute upper bounds on roundoff errors for programs implementing polynomial functions. We release two related software packages, FPBern and FPKiSten, and compare them with state-of-the-art tools. We show that these two methods achieve competitive performance while computing accurate upper bounds in comparison with other tools.
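
    The Bernstein-expansion ingredient can be illustrated in isolation (a sketch under simplifying assumptions: univariate polynomial, domain [0, 1], function names invented here): rewriting a polynomial in the Bernstein basis gives an enclosure of its range from the minimum and maximum Bernstein coefficients, and it is range bounds of this kind on the (polynomial) error expression that yield rigorous roundoff bounds.

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum_i a[i] * x**i on [0, 1].
    b_k = sum_{i<=k} (C(k, i) / C(n, i)) * a[i]; then min(b) <= p(x) <= max(b)."""
    n = len(a) - 1
    return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
            for k in range(n + 1)]

# Example: p(x) = 1 - 2x + 2x^2 on [0, 1]; its exact range is [1/2, 1].
b = bernstein_coeffs([1.0, -2.0, 2.0])
print(min(b), max(b))   # a sound (possibly loose) enclosure of the range, here [0, 1]
```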

    Certified Roundoff Error Bounds Using Semidefinite Programming

    Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many interesting computational problems in real-world applications. Existing approaches to such reasoning are limited in the presence of nonlinear correlations between variables, leading to either imprecise bounds or high analysis time. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods which output certificates that can be formally validated inside a proof assistant. We present a framework to provide upper bounds on absolute roundoff errors. This framework is based on optimization techniques employing semidefinite programming and sums-of-squares certificates, which can be formally checked inside the Coq theorem prover. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization and space control. Our tool produces more precise error bounds for 37 percent of all programs and yields better performance on 73 percent of all programs.
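
    The certificate-checking side of this approach is simple to illustrate (a toy sketch, not the tool's actual certificate format): the hard work of finding a sums-of-squares decomposition is done by a semidefinite solver, but once a decomposition is claimed, verifying it, and hence the nonnegativity it certifies, reduces to expanding polynomials and comparing coefficients, which is exactly the kind of check that can be replayed inside a proof assistant such as Coq.

```python
import sympy as sp

x = sp.symbols('x')

def check_sos_certificate(p, sos_terms):
    """Accept the claim p >= 0 only if p is exactly the sum of the squared terms."""
    return sp.expand(p - sum(q ** 2 for q in sos_terms)) == 0

# Example: p(x) = x^4 - 2x^2 + 1 is certified nonnegative by the single square (x^2 - 1)^2.
p = x ** 4 - 2 * x ** 2 + 1
print(check_sos_certificate(p, [x ** 2 - 1]))   # True
print(check_sos_certificate(p, [x ** 2]))       # False: certificate does not match p
```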

    On Sound Relative Error Bounds for Floating-Point Arithmetic

    State-of-the-art static analysis tools for verifying finite-precision code compute worst-case absolute bounds on numerical errors. These are, however, often not a good estimate of accuracy, as they do not take into account the magnitude of the computed values. Relative errors, which measure errors relative to the value's magnitude, are thus preferable. While today's tools do report relative error bounds, these are merely computed via absolute errors and are thus not necessarily tight or more informative. Furthermore, whenever the computed value is close to zero on part of the domain, the tools do not report any relative error estimate at all. Surprisingly, the quality of relative error bounds computed by today's tools has not been systematically studied or reported to date. In this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be used, extended and combined for the computation of relative errors. Our experiments on a standard benchmark set show that computing relative errors directly, as opposed to via absolute errors, is often beneficial and can provide error estimates up to six orders of magnitude tighter, i.e. more accurate. We also show that interval subdivision, another commonly used technique to reduce over-approximations, has less benefit when computing relative errors directly, but it can help to alleviate the inherent issue of relative error estimates close to zero.
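
    The difference between the two routes to a relative error bound can be shown with a small numeric sketch (the function, domain and error model below are illustrative, not the paper's benchmarks): dividing one global absolute error bound by the smallest magnitude of the result is sound but can be very loose, whereas maximizing the pointwise ratio of local absolute error to local magnitude, here crudely over a grid standing in for interval subdivision, tracks how the error and the value scale together.

```python
import numpy as np

EPS = 2.0 ** -53                      # unit roundoff, double precision

# Toy model: computing f(x) = x * x commits one rounding, so the pointwise
# absolute error is bounded by EPS * x**2 on the domain [1, 100].
xs = np.linspace(1.0, 100.0, 10_000)  # crude stand-in for interval subdivision
abs_err = EPS * xs ** 2               # pointwise absolute error bound

# Route 1: relative bound via a single global absolute bound.
rel_via_abs = abs_err.max() / np.abs(xs ** 2).min()

# Route 2: relative bound computed directly, point by point.
rel_direct = np.max(abs_err / np.abs(xs ** 2))

print(rel_via_abs, rel_direct)        # the direct bound is ~4 orders of magnitude tighter
```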

    Accuracy and speed in computing the Chebyshev collocation derivative

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
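
    One widely used remedy for inaccurate matrix entries, sketched below for the standard Chebyshev-Gauss-Lobatto setup (this is the common "negative sum trick", not necessarily the exact construction studied in the paper), is to build the off-diagonal entries from the usual formula and then set each diagonal entry so that every row sums to zero, as it must, since the derivative of a constant vanishes.

```python
import numpy as np

def cheb_diff_matrix(N):
    """First-order Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    x_j = cos(pi * j / N), with diagonal entries fixed by the negative sum trick."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** j   # endpoint weights
    X = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1))            # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                               # force rows to sum to zero
    return D, x

# Quick check: differentiate f(x) = x**3 (derivative 3*x**2) on 17 points.
D, x = cheb_diff_matrix(16)
print(np.max(np.abs(D @ x**3 - 3 * x**2)))                    # near machine precision
```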

    Optimal design of linear phase FIR digital filters with very flat passbands and equiripple stopbands

    A new technique is presented for the design of digital FIR filters with a prescribed degree of flatness in the passband and a prescribed (equiripple) attenuation in the stopband. The design is based entirely on an appropriate use of the well-known Remez exchange algorithm for the design of weighted Chebyshev FIR filters. The extreme versatility of this algorithm is combined with certain "maximally flat" FIR filter building blocks in order to generate a wide family of filters. The design technique directly leads to structures that have low passband-sensitivity properties.
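
    The Remez-exchange ingredient is available off the shelf; a minimal sketch (an ordinary equiripple lowpass via scipy, not the paper's flat-passband construction, which additionally combines the equiripple design with maximally flat building blocks) looks like this.

```python
import numpy as np
from scipy import signal

# Weighted-Chebyshev (equiripple) lowpass FIR design via the Remez exchange algorithm.
# Band edges are in normalized frequency with fs = 1; the weight vector trades
# passband ripple against stopband attenuation.
numtaps = 41
bands = [0.0, 0.2, 0.3, 0.5]          # passband [0, 0.2], stopband [0.3, 0.5]
desired = [1.0, 0.0]                  # target gain per band
weights = [1.0, 10.0]                 # emphasize stopband attenuation
h = signal.remez(numtaps, bands, desired, weight=weights, fs=1.0)

w, H = signal.freqz(h, worN=2048, fs=1.0)
print("max stopband gain (dB):", 20 * np.log10(np.max(np.abs(H[w >= 0.3]))))
```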