7 research outputs found

    Expectiles for subordinated Gaussian processes with applications

    In this paper, we introduce a new class of estimators of the Hurst exponent of the fractional Brownian motion (fBm) process. These estimators are based on sample expectiles of discrete variations of a sample path of the fBm process. In order to derive the statistical properties of the proposed estimators, we establish asymptotic results for sample expectiles of subordinated stationary Gaussian processes with unit variance and correlation function satisfying ρ(i) ~ κ|i|^(−α) (κ ∈ ℝ) with α > 0. Via a simulation study, we demonstrate the relevance of the expectile-based estimation method and show that the suggested estimators are more robust to data rounding than their sample quantile-based counterparts.

    Dépliage de Boucles Versus Précision Numérique (Loop Unrolling Versus Numerical Accuracy)

    Floating-point computations are used intensively in many domains, in particular in critical embedded systems. In general, the results of these computations are perturbed by round-off errors. In a critical scenario, these errors can accumulate and propagate, causing more or less serious human, material, or financial damage. It is therefore desirable to obtain the most accurate results possible when using floating-point arithmetic. To address this problem, the Salsa tool [7] improves the accuracy of computations by partially correcting these round-off errors through an automatic source-to-source program transformation. The main contribution of this work is to analyze and study whether loop unrolling further improves the numerical accuracy of the computations in the initial program. In the short term, we wish to define a loop unrolling factor, that is, to determine when it is relevant to unroll a loop in a program.
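    A minimal illustration of why unrolling can interact with accuracy (a sketch of mine, not the transformation studied in the paper): unrolling a summation loop by a factor of 2 with two independent accumulators reassociates the sum, so the rounded result can change.

```python
def sum_rolled(xs):
    s = 0.0
    for x in xs:
        s += x          # single accumulator, one rounding per iteration
    return s

def sum_unrolled_by_2(xs):
    # unrolling with two independent accumulators: the same mathematical
    # sum, but a different association, hence possibly different rounding
    s0 = s1 = 0.0
    n = len(xs) - len(xs) % 2
    for i in range(0, n, 2):
        s0 += xs[i]
        s1 += xs[i + 1]
    for i in range(n, len(xs)):   # leftover iteration
        s0 += xs[i]
    return s0 + s1
```

    On data such as one large term followed by many terms below half an ulp of the accumulator, the rolled loop absorbs every small term while the second accumulator of the unrolled loop preserves their contribution.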

    Improving the Numerical Accuracy of High Performance Computing Programs by Process Specialization

    In high performance computing, nearly all the implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, the accumulated errors may result in hazards. These round-off errors may cause damage whose gravity varies depending on the criticality of the application. To deal with this issue, we have developed a tool which improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, loops, functions and arrays. In this article, we apply our techniques to optimize a parallel program representative of the high performance computing domain. Parallelism introduces new numerical accuracy problems due to the order of operations in this kind of system. We are also interested in studying the compromise between execution time and numerical accuracy.

    A New Abstract Domain for the Representation of Mathematically Equivalent Expressions

    Exact computations being in general not tractable for computers, they are approximated by floating-point computations. This is the source of many errors in numerical programs. Because floating-point arithmetic is not intuitive, these errors are very difficult to detect and to correct by hand, and we consider the problem of automatically synthesizing accurate formulas. We consider that a program would return an exact result if the computations were carried out using real numbers. In practice, round-off errors arise during the execution, and these errors are closely related to the way formulas are written. Our approach is based on abstract interpretation. We introduce Abstract Program Equivalence Graphs (APEGs) to represent in polynomial size an exponential number of mathematically equivalent expressions. The concretization of an APEG yields expressions of very different shapes and accuracies. Then, we extract optimized expressions from APEGs by searching for the most accurate concrete expressions among the set of represented expressions.
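    A toy sketch of the idea (my own simplification: a single commutative-sum "box", whereas real APEGs also cover products, distributivity, and other rewritings): one node stores n operands once yet stands for a factorial family of equivalent expressions, and the extraction step picks the concretization closest to the exact value.

```python
import itertools
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class SumBox:
    """Toy analogue of an APEG abstraction box: the node stores its n
    operands once but represents every order in which they can be summed."""
    operands: tuple

    def concretize(self):
        # enumerate the float result of each left-to-right evaluation order
        for perm in itertools.permutations(self.operands):
            acc = 0.0
            for t in perm:
                acc += t
            yield perm, acc

    def most_accurate(self):
        # extraction: pick the concrete expression whose float value is
        # closest to the correctly rounded exact sum
        exact = math.fsum(self.operands)
        return min(self.concretize(), key=lambda pv: abs(pv[1] - exact))
```

    The concretizations of a single box already differ in accuracy: for (1.5, 1e16, -1e16), summing left to right yields 2.0, while cancelling the large terms first recovers the exact 1.5.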

    An Abstract Interpretation Framework for the Round-Off Error Analysis of Floating-Point Programs

    This paper presents an abstract interpretation framework for the round-off error analysis of floating-point programs. This framework defines a parametric abstract analysis that computes, for each combination of ideal and floating-point execution path of the program, a sound over-approximation of the accumulated floating-point round-off error that may occur. In addition, a Boolean expression that characterizes the input values leading to the computed error approximation is also computed. An abstraction on the control flow of the program is proposed to mitigate the explosion of the number of elements generated by the analysis. Additionally, a widening operator is defined to ensure the convergence of recursive functions and loops. An instantiation of this framework is implemented in the prototype tool PRECiSA that generates formal proof certificates stating the correctness of the computed round-off errors.
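    The kind of judgement such an analysis computes can be sketched with a drastically simplified abstract domain (my own reduction, not PRECiSA's: no path conditions, no outward rounding of interval endpoints, first-order error terms only): each abstract value is a real-valued interval together with an accumulated round-off bound, and every operation propagates the old bounds and adds a fresh rounding term.

```python
ULP = 2.0 ** -53   # unit roundoff u for IEEE 754 binary64, round-to-nearest

def const(lo, hi=None, err=0.0):
    """An abstract value: a real interval plus an accumulated error bound."""
    return (lo, lo if hi is None else hi), err

def aadd(a, b):
    """Abstract addition: interval sum, propagated errors, fresh rounding."""
    (alo, ahi), ae = a
    (blo, bhi), be = b
    lo, hi = alo + blo, ahi + bhi
    fresh = ULP * max(abs(lo), abs(hi))    # |fl(x+y) - (x+y)| <= u*|x+y|
    return (lo, hi), ae + be + fresh

def amul(a, b):
    """Abstract multiplication with first-order error propagation."""
    (alo, ahi), ae = a
    (blo, bhi), be = b
    corners = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
    lo, hi = min(corners), max(corners)
    amax, bmax = max(abs(alo), abs(ahi)), max(abs(blo), abs(bhi))
    fresh = ULP * max(abs(lo), abs(hi))
    return (lo, hi), ae * bmax + be * amax + ae * be + fresh
```

    A real analysis would additionally round interval endpoints outward to stay sound, split per execution path, and emit the Boolean path conditions and proof certificates the abstract describes.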

    Numerical program optimisation by automatic improvement of the accuracy of computations

    Over the last decade, guaranteeing the accuracy of computations relying on the IEEE754 floating-point arithmetic has become increasingly complex. Failures, caused by small or large perturbations due to round-off errors, have been registered. To cope with this issue, we have developed a tool which corrects these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, operates on pieces of code with assignments, conditionals and loops. By transforming programs, we can significantly optimize the numerical accuracy of computations by minimizing the error relative to the exact result. In this article, we present two important desirable side-effects of our transformation. Firstly, we show that our transformed programs, executed in single precision, may compete with untransformed code executed in double precision. Secondly, we show that optimizing the numerical accuracy of programs accelerates the convergence of numerical iterative methods. Both of these properties of our transformation are of great interest for numerical software.
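    The single-versus-double precision claim can be illustrated with a classic accuracy-improving rewriting (compensated summation, used here as a stand-in for the authors' transformation, which the abstract does not detail): with every operation rounded back to binary32, the rewritten loop recovers most of the accuracy the naive binary32 loop loses.

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def naive_sum32(xs):
    s = 0.0
    for x in xs:
        s = f32(s + f32(x))   # every operation rounded to single precision
    return s

def kahan_sum32(xs):
    # compensated (Kahan) summation, still entirely in single precision:
    # c tracks the low-order bits that each addition into s discards
    s = c = 0.0
    for x in xs:
        y = f32(f32(x) - c)
        t = f32(s + y)
        c = f32(f32(t - s) - y)
        s = t
    return s
```

    Summing 0.1 a hundred thousand times, the naive single-precision loop drifts visibly away from 10000 while the compensated single-precision loop stays within single-precision representation error of it.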