17 research outputs found

    Numerical Verification of Industrial Numerical Codes

    Several approximations occur during a numerical simulation: physical effects may be discarded, continuous functions replaced by discretized ones, and real numbers replaced by finite-precision representations. The use of floating-point arithmetic generates round-off errors at each arithmetic operation, and some mathematical properties are lost. The aim of the numerical verification activity at EDF R&D is to study the effect of round-off error propagation on the results of a numerical simulation. It is crucial to numerically verify industrial codes such as those developed at EDF R&D, all the more so for codes running in HPC environments. This paper presents some recent studies on numerical verification at EDF R&D.
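
    To make concrete the loss of mathematical properties mentioned above, a minimal Python sketch (an illustration, not EDF code) shows that floating-point addition is not associative, so two algebraically equivalent expressions can give different results:

        # Floating-point addition is not associative: the same three values summed
        # in two mathematically equivalent orders give different results in IEEE double.
        a, b, c = 1e16, -1e16, 1.0

        left = (a + b) + c    # (a + b) is exactly 0.0, so the sum is 1.0
        right = a + (b + c)   # (b + c) rounds back to -1e16, so the sum is 0.0

        print(left, right, left == right)  # 1.0 0.0 False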

    First steps towards more numerical reproducibility

    Questions about whether numerical simulations are reproducible have been raised in several sensitive applications. Numerical reproducibility failures mainly come from the finite precision of computer arithmetic. The results of a floating-point computation depend on the computer arithmetic precision and on the order of the arithmetic operations. Massively parallel HPC, which merges, for instance, many-core CPUs and GPUs, clearly modifies these two parameters, even from run to run on a given computing platform. How to trust such computed results? This paper presents how three classic approaches in computer arithmetic may provide some first steps towards more numerical reproducibility.
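
    As a hedged illustration of this order dependence (not taken from the paper), the following Python sketch sums the same data in two different orders, the shuffle standing in for the differing reduction orders of sequential and parallel runs:

        import random

        random.seed(0)
        # values of widely varying magnitude, as in many physical simulations
        data = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(0, 8)
                for _ in range(100000)]

        sequential = sum(data)     # one fixed left-to-right summation order
        shuffled = list(data)
        random.shuffle(shuffled)   # a different (e.g. parallel) reduction order
        reordered = sum(shuffled)

        print(sequential, reordered, sequential - reordered)  # last digits typically differ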

    Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems

    Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most if not all digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented, and of their possible drawbacks, is essential in order to design and implement modern high-performance systems. At present the most widely implemented system for computer arithmetic is IEEE 754 floating point; while it is deemed the best available implementation, it has several features that can result in serious errors of computation if not handled correctly. Lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for detecting these errors are therefore highly important, and fast, efficient and easy-to-use implementations of such detection systems are a high priority. Detecting floating-point rounding errors normally requires run-time analysis in order to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including Interval Arithmetic, Affine Arithmetic and Monte Carlo Arithmetic. While these systems have been well studied using theoretical and software-based approaches, implementations that can be applied to real-world situations have been limited by issues of implementation, performance and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic systems. This is especially problematic given that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
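
    The following Python sketch conveys the idea behind Monte Carlo Arithmetic mentioned above (a software toy, not the hardware-accelerated implementation this thesis targets): each operation result is randomly perturbed in its last bits, and the spread across repeated runs indicates how many digits of the final answer can be trusted.

        import random
        import statistics

        def mc_round(x, t=53):
            # randomly perturb x at the level of its t-th significant bit
            if x == 0.0:
                return 0.0
            return x * (1.0 + (random.random() - 0.5) * 2.0 ** (1 - t))

        def unstable(x):
            # (x + 1) - x is exactly 1 over the reals, but for large x the
            # addition absorbs the 1 and the subtraction cancels catastrophically
            return mc_round(mc_round(x + 1.0) - x)

        samples = [unstable(1e16) for _ in range(1000)]
        print(statistics.mean(samples), statistics.stdev(samples))
        # a spread on the order of the result itself signals that few, if any,
        # digits of the ordinary IEEE 754 answer are significant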

    Reliable Data Processing Enabled By Program Analysis

    Errors pose a serious threat to the output validity of modern data processing, which is often performed by computer programs. In scientific computation, data are collected through instruments or sensors that may be exposed to rough environmental conditions, leading to errors. Furthermore, during the computation process data may not be precisely represented due to the limited precision of the underlying machine, leading to representation errors. Computational processing of these data may hence produce unreliable output results or even faulty conclusions. We call them reliability problems. We consider the reliability problems that are caused by two kinds of errors. The first kind of errors includes input and parameter errors, which originate from the external physical environment. We call these external errors. The other kind of errors is due to the limited representation of floating-point values. They occur when values cannot be precisely represented by machines. We call them internal representation errors, or internal errors. They are usually at a much smaller scale compared to external errors. Nonetheless, such tiny errors may still lead to unreliable results and serious problems. In this dissertation, we develop program analysis techniques to enable reliable data processing. For external errors, we propose techniques to improve the sampling efficiency of Monte Carlo methods, namely execution coalescing and white-box sampling. For internal errors, we develop efficient monitoring techniques to detect instability problems at runtime in floating-point program executions.
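
    As a generic, hedged sketch of the Monte Carlo treatment of external errors mentioned above (not the dissertation's execution coalescing or white-box sampling; all names and values are hypothetical), one can rerun the computation with inputs perturbed within their assumed error bounds and inspect the spread of the outputs:

        import math
        import random
        import statistics

        def process(pressure, temperature):
            # hypothetical data-processing kernel
            return pressure * math.exp(-1.0 / temperature)

        measured_p, measured_t = 101.3, 3.2   # hypothetical sensor readings
        p_err, t_err = 0.5, 0.1               # assumed measurement uncertainties

        outputs = [process(random.gauss(measured_p, p_err),
                           random.gauss(measured_t, t_err))
                   for _ in range(10000)]
        # the spread of the outputs estimates how external errors propagate
        print(statistics.mean(outputs), statistics.stdev(outputs))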

    Programming with Numerical Uncertainties

    Numerical software, common in scientific computing or embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. In many domains, roundoff errors are not the only source of inaccuracy, and measurement as well as truncation errors further increase the uncertainty of the computed results. Adequate tools are needed to help users select suitable approximations (data types and algorithms) which satisfy their accuracy requirements, especially for safety-critical applications. Determining that a computation produces accurate results is challenging. Roundoff errors and error propagation depend on the ranges of variables in complex and non-obvious ways; even determining these ranges accurately for nonlinear programs poses a significant challenge. In numerical loops, roundoff errors grow, in general, unboundedly. Finally, due to numerical errors, the control flow in the finite-precision implementation may diverge from the ideal real-valued one by taking a different branch and produce a result that is far off from the expected one. In this thesis, we present techniques and tools for automated and sound analysis, verification and synthesis of numerical programs. We focus on numerical errors due to roundoff from floating-point and fixed-point arithmetic, external input uncertainties or truncation errors. Our work uses interval or affine arithmetic together with Satisfiability Modulo Theories (SMT) technology as well as analytical properties of the underlying mathematical problems. This combination of techniques enables us to compute sound and yet accurate error bounds for nonlinear computations, determine closed-form symbolic invariants for unbounded loops and quantify the effects of discontinuities on numerical errors. We can furthermore certify the results of self-correcting iterative algorithms. Accuracy usually comes at the expense of resource efficiency: more precise data types need more time, space and energy. We propose a programming model where the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations. It is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type which guarantees the needed accuracy. Sometimes accuracy can be gained by simply re-arranging the non-associative finite-precision computation. We present a scalable technique that searches for a better evaluation order and show that the gains can be substantial. We have implemented all our techniques and evaluated them on a number of benchmarks from scientific computing and embedded systems, with promising results.
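
    One building block named above, interval arithmetic, can be sketched in a few lines of Python (an idealized illustration; the thesis combines it with affine arithmetic, SMT solving and properly directed rounding): every value carries a [lo, hi] enclosure, and every operation widens its result so that the true real-valued result stays inside.

        ULP_1 = 2.0 ** -52  # one unit in the last place at 1.0 in double precision

        def widen(lo, hi):
            # crude over-approximation of one rounding error on each bound
            return lo - abs(lo) * ULP_1, hi + abs(hi) * ULP_1

        def iadd(x, y):
            return widen(x[0] + y[0], x[1] + y[1])

        def imul(x, y):
            products = [a * b for a in x for b in y]
            return widen(min(products), max(products))

        x = (1.999, 2.001)                     # input with external uncertainty
        result = imul(iadd(x, (1.0, 1.0)), x)  # encloses (x + 1) * x over the range
        print(result)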

    Towards a Compiler for Reals

    Numerical software, common in scientific computing or embedded systems, inevitably uses a finite-precision approximation of the real arithmetic in which most algorithms are designed. In many applications, the roundoff errors introduced by finite-precision arithmetic are not the only source of inaccuracy, and measurement and other input errors further increase the uncertainty of the computed results. Adequate tools are needed to help users select suitable data types and evaluate the provided accuracy, especially for safety-critical applications. We present a source-to-source compiler called Rosa that takes as input a real-valued program with error specifications and synthesizes code over an appropriate floating-point or fixed-point data type. The main challenge of such a compiler is a fully automated, sound, and yet accurate-enough numerical error estimation. We introduce a unified technique for bounding roundoff errors from floating-point and fixed-point arithmetic of various precisions. The technique can handle nonlinear arithmetic, determine closed-form symbolic invariants for unbounded loops, and quantify the effects of discontinuities on numerical errors. We evaluate Rosa on a number of benchmarks from scientific computing and embedded systems and, comparing it to the state of the art in automated error estimation, show that it presents an interesting tradeoff between accuracy and performance
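
    The data-type selection that Rosa automates can be illustrated with a hedged, sampling-based Python sketch (illustration only: Rosa derives sound error bounds rather than sampled estimates, and the expression, range and error threshold below are hypothetical): candidate precisions are compared against a high-precision reference and the cheapest one meeting the requirement is kept.

        from decimal import Decimal, getcontext

        import numpy as np

        getcontext().prec = 50                 # high-precision reference arithmetic

        def kernel(x):
            return x * x * x - 3 * x + 1       # example nonlinear expression

        def reference(x):
            x = Decimal(x)
            return x * x * x - 3 * x + 1

        xs = np.linspace(-2.0, 2.0, 10001)
        target_error = 1e-6                    # hypothetical accuracy requirement

        for dtype in (np.float32, np.float64):
            worst = max(abs(float(reference(float(v))) - float(kernel(dtype(v))))
                        for v in xs)
            print(dtype.__name__, worst, "ok" if worst <= target_error else "too coarse")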