
    Data-types optimization for floating-point formats by program transformation

    In floating-point arithmetic, a desirable property of computations is accuracy, since in many industrial contexts small or large perturbations due to round-off errors may cause considerable damage. To cope with this, we have developed a tool which reduces these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, concerns pieces of code with assignments, conditionals and loops. By transforming programs, we can significantly improve the numerical accuracy of computations by minimizing the error relative to the exact result. An interesting side effect of our technique is that more accurate computations may make it possible to use smaller data types. In this article, we show that our transformed programs, executed in single precision, may compete with non-transformed codes executed in double precision.
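    A minimal illustration (not the authors' tool output, and simplified to a toy case) of the kind of source-to-source rewrite that improves accuracy: re-associating a sum so that many small terms are accumulated before being added to a large one, which keeps more significant bits when the whole computation stays in single precision.

    #include <iomanip>
    #include <iostream>

    int main()
    {
        const float big = 1.0e8f;      // large starting value
        const float small = 0.5f;      // below half an ulp of big in binary32
        const int n = 1000;

        // original association: every small term is absorbed and lost
        float s1 = big;
        for (int i = 0; i < n; ++i) s1 += small;

        // transformed association: accumulate the small terms first
        float t = 0.0f;
        for (int i = 0; i < n; ++i) t += small;
        float s2 = big + t;

        std::cout << std::fixed << std::setprecision(1);
        std::cout << "original    : " << s1 << "\n";   // 100000000.0
        std::cout << "transformed : " << s2 << "\n";   // 100000496.0 (exact sum: 100000500)
        return 0;
    }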

    Automatic floating-point to fixed-point conversion for DSP code generation

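    Since no abstract is available for this entry, the following is only a generic sketch of the kind of code such a float-to-fixed conversion targets on a DSP: coefficients and samples quantised to Q1.15, a 16x16 -> 32-bit multiply, then a rescaling shift. The names and formats here are illustrative assumptions, not taken from the paper.

    #include <cstdint>
    #include <iomanip>
    #include <iostream>

    constexpr int Q = 15;                          // Q1.15: 1 sign bit, 15 fractional bits

    int16_t float_to_q15(float x)                  // assumes |x| well below 1, no saturation
    {
        return static_cast<int16_t>(x * (1 << Q) + (x >= 0 ? 0.5f : -0.5f));
    }

    int main()
    {
        float cf = 0.70710678f, xf = 0.25f;        // floating-point source values
        int16_t c = float_to_q15(cf), x = float_to_q15(xf);

        // fixed-point multiply: widen to 32 bits, then drop Q fractional bits
        int32_t acc = (static_cast<int32_t>(c) * x) >> Q;

        std::cout << std::setprecision(7)
                  << "float : " << cf * xf << "\n"              // 0.1767767
                  << "fixed : " << acc / float(1 << Q) << "\n"; // 0.1767578
        return 0;
    }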

    User-defined data types and operators in occam

    This paper describes the addition of user-defined monadic and dyadic operators to occam* [1], together with some libraries that demonstrate their use. It also discusses some techniques used in their implementation in KRoC [2] for a variety of target machines.
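    The occam syntax from [1] is not reproduced here; as a loose analogy only, the sketch below uses C++ operator overloading to show what user-defined monadic and dyadic operators buy: writing x + (-y) over a user-defined data type instead of calling named functions. The Complex16 type is a hypothetical example.

    #include <cstdint>
    #include <iostream>

    struct Complex16 {                 // hypothetical user-defined data type
        int16_t re, im;
    };

    // user-defined dyadic operator on that type
    Complex16 operator+(Complex16 a, Complex16 b)
    {
        return Complex16{static_cast<int16_t>(a.re + b.re),
                         static_cast<int16_t>(a.im + b.im)};
    }

    // user-defined monadic operator (negation)
    Complex16 operator-(Complex16 a)
    {
        return Complex16{static_cast<int16_t>(-a.re), static_cast<int16_t>(-a.im)};
    }

    int main()
    {
        Complex16 x{3, 4}, y{1, -2};
        Complex16 z = x + (-y);
        std::cout << z.re << " " << z.im << "\n";   // prints "2 6"
        return 0;
    }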

    Optimisations arithmétiques et synthèse de haut niveau (Arithmetic optimizations and high-level synthesis)

    High-level synthesis (HLS) tools offer increased productivity for FPGA programming. However, due to their relatively young nature, they still lack many arithmetic optimizations. This thesis proposes safe arithmetic optimizations that should always be applied. These optimizations are simple operator specializations, following the C semantics. Others require lifting the semantics embedded in high-level input languages, inherited from software programming, for an improved accuracy/cost/performance ratio. To demonstrate this claim, the sum of products of floating-point numbers is used as a case study. The sum is performed in a fixed-point format tailored to the application, according to the context in which the operator is instantiated. In some cases, there is not enough information about the input data to tailor the fixed-point accumulator. The fall-back strategy used in this thesis is to generate an accumulator covering the entire floating-point range. This thesis explores different strategies for implementing such a large accumulator, including new ones. The use of a two's complement representation instead of sign+magnitude is shown to save resources and to reduce the delay of the accumulation loop. Based on a tapered precision scheme and an exact accumulator, the posit number system claims to be a candidate to replace the IEEE floating-point format. A thorough analysis of posit operators is performed, using the same level of hardware optimization as state-of-the-art floating-point operators. Their cost remains much higher than that of their floating-point counterparts in terms of resource usage and performance. Finally, this thesis presents a compatibility layer for HLS tools that allows a single code to be deployed on multiple tools. This library implements a strongly typed custom-size integer type alongside a set of optimized custom operators.
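    A minimal software sketch (not the thesis' generated HLS code, and with hypothetical parameters) of the central idea: perform a floating-point sum of products in a fixed-point format tailored to the application, with one rounding per product and an exact two's-complement accumulation. It assumes products lie in [-1, 1) and that 30 fractional bits suffice, so a 64-bit accumulator cannot overflow for the vector lengths used here.

    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    constexpr int FRAC_BITS = 30;                          // Q33.30 inside an int64_t
    constexpr double SCALE = double(1LL << FRAC_BITS);

    double fixed_point_dot(const std::vector<float>& a, const std::vector<float>& b)
    {
        int64_t acc = 0;                                   // two's complement accumulator
        for (std::size_t i = 0; i < a.size(); ++i) {
            double p = double(a[i]) * double(b[i]);        // exact: a float product fits in a double
            acc += std::llround(p * SCALE);                // one rounding onto the fixed-point grid
        }
        return double(acc) / SCALE;                        // single final conversion back to binary
    }

    int main()
    {
        std::vector<float> a{0.5f, -0.25f, 0.125f, 0.75f};
        std::vector<float> b{0.5f,  0.5f,  0.5f,   0.5f};
        std::cout << fixed_point_dot(a, b) << "\n";        // prints 0.5625
        return 0;
    }

    Replacing the int64_t with a wider two's complement register generalises this to an accumulator covering the entire floating-point range, the fall-back strategy discussed in the abstract.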
