90 research outputs found

    Contribution au calcul sur GPU: considérations arithmétiques et architecturales

Optimizing computation requires managing hardware and software jointly. This rule becomes even more important in the domain of multicore architectures, where the parameters to consider are more numerous than on a classical superscalar architecture. These architectures offer a wide variety of compute units, representation formats, memory hierarchies, and data-transfer mechanisms. In this thesis, we describe some of our results obtained between 2004 and 2013 within the DALI team of the University of Perpignan on improving the efficiency of computation as a whole, that is, of the sequence of operations described at the algorithmic level and executed by the architectural elements, with a focus on graphics processors. We begin with a description of how this type of architecture works, paying particular attention to floating-point computation. We then present efficient implementations of arithmetic operators using non-conventional representations such as multiple-precision, interval, fuzzy, and logarithmic arithmetic. We continue with our contributions on the architectural elements involved in computation, through functional simulation, register files, branch handling, and specialized hardware operators. Finally, we conclude with an analysis of the behavior of computation on GPUs with respect to regularity, power consumption, reliability of computations, and predictability.

    An Efficient Representation Format for Fuzzy Intervals Based on Symmetric Membership Functions

This paper proposes a novel implementation of fuzzy arithmetic that exploits both fuzzy intervals and hardware specificities. First, we propose and evaluate the benefit of an alternative representation format to the traditional lower-upper and midpoint-radius representation formats for intervals. Thanks to the proposed format, we show that it is possible to halve the number of operations and the memory requirements compared to conventional methods. Then, we show that operations on fuzzy intervals are sensitive to hardware specificities of accelerators such as GPUs, including static rounding, memory usage, instruction-level parallelism (ILP) and thread-level parallelism (TLP). We develop a library of fuzzy arithmetic operations in CUDA and C++ over several formats. The proposed library is evaluated using compute-bound and memory-bound benchmarks on Nvidia GPUs, and shows a performance gain of 2 to 20 times over traditional approaches.
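The abstract does not spell out the proposed format itself, but the trade-off it exploits can be illustrated with the midpoint-radius layout it compares against. Below is a minimal C++ sketch (type and function names are ours, not the paper's library): a symmetric α-cut is stored as a midpoint plus a radius; midpoints add in round-to-nearest, and the radius is accumulated with upward rounding so the result stays a valid enclosure. On a GPU, CUDA's static-rounding intrinsics such as __dadd_ru would replace the fesetround calls, which is exactly the kind of hardware specificity the paper measures.

```cpp
// Minimal midpoint-radius addition for a symmetric interval (illustrative
// names, not the paper's library). Underflow and error handling are ignored.
#include <cfenv>
#include <cmath>

#pragma STDC FENV_ACCESS ON

struct mr_interval {
    double mid;  // shared midpoint of the symmetric membership function
    double rad;  // radius of one alpha-cut
};

mr_interval mr_add(mr_interval a, mr_interval b) {
    double m = a.mid + b.mid;                   // midpoint in round-to-nearest
    std::fesetround(FE_UPWARD);                 // directed rounding for the radius
    double err = std::ldexp(std::fabs(m), -53); // bound on the midpoint's rounding error
    double r = a.rad + b.rad + err;             // radius absorbs both radii + error
    std::fesetround(FE_TONEAREST);
    return {m, r};
}
```

Storing one shared midpoint rather than independent bounds per cut is the kind of redundancy a symmetric-membership format can remove, which is where savings in operations and memory come from.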

    Proposal for a Standardization of Mathematical Function Implementation in Floating-Point Arithmetic

This paper presents some aspects of what a standard for the implementation of mathematical functions could be. First, the need for such a standard is motivated. Then the proposed standard is given. The question of rounding constitutes an important part of this paper: three levels are proposed, ranging from a level that is relatively easy to attain (with a fixed maximal relative error) up to the best-quality one, with correct rounding on the whole domain of every function. We do not claim that we always suggest the right choices, or that we have thought about all relevant issues. The goal of this paper is merely to raise questions and to launch the discussion towards a standard.
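To make the two endpoint levels concrete (the notation below is ours, not the proposal's): writing f for the exact function and f̂ for its implementation, the easiest level fixes a documented bound on the relative error, while the strongest requires correct rounding on the whole domain.

```latex
% Illustrative statement of the endpoint quality levels (our notation).
\[
\text{fixed maximal relative error:}\qquad
\left|\frac{\hat f(x) - f(x)}{f(x)}\right| \le \varepsilon
\quad \text{for all } x \text{ in the domain,}
\]
\[
\text{correct rounding:}\qquad
\hat f(x) = \circ\bigl(f(x)\bigr)
\quad \text{for all } x,
\]
% where \circ denotes the rounding function of the working format and
% \varepsilon is a fixed, documented constant.
```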

    Correctly Rounded Exponential Function in Double Precision Arithmetic

We present an algorithm for implementing a correctly rounded exponential in double-precision floating-point arithmetic. This algorithm relies only on floating-point operations from the widespread IEEE-754 standard, and is therefore more efficient than those using multiprecision arithmetic, while remaining fully portable. It requires a table of reasonable size and IEEE-754 double-precision multiplications and additions. In a preliminary implementation, the overhead due to correct rounding is a 2.3-times slowdown compared to the standard library function.
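The key mechanism here is the classic two-phase strategy with a rounding test. The sketch below is ours, not the paper's implementation: std::exp and std::expl stand in for the table-plus-polynomial fast phase and the slower accurate phase, and the 2^-60 error bound is an assumption for illustration. It shows how the fast result is accepted only when its error interval cannot straddle a rounding boundary.

```cpp
// Two-phase ("Ziv") scheme behind correctly rounded functions: try a fast
// approximation with a proven error bound; fall back only when rounding is
// not yet determined. Stand-in phases; not the paper's implementation.
#include <cmath>

struct approx { double value; double rel_err; };

approx quick_exp(double x) {
    // Assumed proven bound of 2^-60 for illustration; a real fast phase is
    // a table lookup plus a short polynomial with a certified bound.
    return { std::exp(x), 0x1p-60 };
}

double accurate_exp(double x) {
    // Stand-in for the multiword fallback; long double is not a real
    // substitute (double rounding), it only marks where the slow path goes.
    return static_cast<double>(std::expl(static_cast<long double>(x)));
}

double cr_exp(double x) {
    approx a = quick_exp(x);
    double lo = a.value * (1.0 - a.rel_err);  // lower bound on exp(x)
    double hi = a.value * (1.0 + a.rel_err);  // upper bound on exp(x)
    // If both bounds collapse to the same binary64 value, round-to-nearest
    // of the true result is already determined. (A production test also
    // accounts for the rounding of these two multiplications.)
    if (lo == hi) return a.value;
    return accurate_exp(x);                   // rare, slower fallback
}
```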

    A Reproducible Accurate Summation Algorithm for High-Performance Computing

Floating-point (FP) addition is non-associative, and parallel reductions involving this operation are a serious issue, as noted in the DARPA Exascale Report [1]. Such large summations typically appear within fundamental numerical blocks such as dot products or numerical integrations. Hence, the result may vary from one parallel machine to another, or even from one run to another. These discrepancies worsen on heterogeneous architectures, such as clusters with GPUs or Intel Xeon Phi processors, which combine programming environments that may obey various floating-point models and offer different intermediate precisions or different operators [2,3]. Such non-determinism of floating-point calculations in parallel programs causes validation and debugging issues, and may even lead to deadlocks [4]. The increasing power of current computers enables one to solve more and more complex problems, which leads to a higher number of floating-point operations to be performed, each of them potentially causing a round-off error. Because of round-off error propagation, some problems must be solved with a wider floating-point format.

Two approaches exist to perform floating-point addition without incurring round-off errors. The first approach computes the error that occurs during rounding using FP expansions, which are based on error-free transformations. FP expansions represent the result as an unevaluated sum of a fixed number of FP numbers, whose components are ordered in magnitude with minimal overlap to cover a wide range of exponents. FP expansions of sizes 2 and 4 are presented in [5] and [6], respectively. The main advantage of this solution is that the expansion can stay in registers during the computation. However, the accuracy is insufficient for the summation of numerous FP numbers or for sums with a huge dynamic range. Moreover, their complexity grows linearly with the size of the expansion.

An alternative approach exploits the finite range of representable floating-point numbers by storing every bit of information in a very long vector of bits (an accumulator). The length of the accumulator is chosen such that every bit of information of the input format can be represented; this covers the range from the minimum to the maximum representable floating-point value, independently of the sign. For instance, Kulisch [7] proposed an accumulator of 4288 bits to handle the accumulation of products of 64-bit IEEE floating-point values. The Kulisch accumulator produces the exact result of a very large sum of floating-point numbers of arbitrary magnitude. However, this approach was long considered impractical, as it induces a very large memory overhead; furthermore, without dedicated hardware support, its performance is limited by indirect memory accesses that make vectorization challenging.

We aim at addressing both issues of accuracy and reproducibility in the context of summation. We advocate computing the correctly rounded result of the exact sum. Besides offering strict reproducibility through an unambiguous definition of the expected result, our approach guarantees that the result has the best possible accuracy.
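The error-free transformation underlying FP expansions can be stated in a few lines. The sketch below is the standard Knuth TwoSum, not the multi-level algorithm of this work; it shows how the rounding error of each addition is recovered exactly and folded back in at the end.

```cpp
// TwoSum error-free transformation: s = fl(a+b) and e such that a+b = s+e
// exactly. This is the basis of FP-expansion summation (standard algorithm,
// not the paper's multi-level scheme).
#include <cstdio>

static void two_sum(double a, double b, double &s, double &e) {
    s = a + b;
    double bv = s - a;             // part of b actually absorbed into s
    e = (a - (s - bv)) + (b - bv); // exact rounding error: a + b == s + e
}

int main() {
    // Catastrophic case for plain summation: the exact sum is 2.
    const double data[] = {1e16, 1.0, -1e16, 1.0};
    double sum = 0.0, err = 0.0;
    for (double x : data) {
        double e;
        two_sum(sum, x, sum, e);   // running sum plus exact error of this step
        err += e;                  // size-2 expansion: (sum, err)
    }
    std::printf("plain sum = %g, compensated = %g\n", sum, sum + err);
    // Prints: plain sum = 1, compensated = 2
    return 0;
}
```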

    Reproducible Triangular Solvers for High-Performance Computing

On modern parallel architectures, floating-point computations may become non-deterministic and, therefore, non-reproducible, mainly due to the non-associativity of floating-point operations. We propose an algorithm to solve dense triangular systems by leveraging the standard parallel triangular solver and our recently introduced multi-level exact summation approach. Finally, we present implementations of the proposed fast reproducible triangular solver along with results on recent NVIDIA GPUs.
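As a sequential illustration only (the paper's solver is blocked and parallel, and its multi-level exact summation also captures the products exactly, which the plain compensation below does not), here is where a reproducible accumulator slots into forward substitution:

```cpp
// Forward substitution for lower-triangular L x = b where each row's dot
// product uses compensated (TwoSum-based) accumulation, shrinking the
// dependence of the result on evaluation order. Illustrative sketch only.
#include <vector>

static double comp_dot(const std::vector<double>& a,
                       const std::vector<double>& x, int n) {
    double s = 0.0, err = 0.0;
    for (int j = 0; j < n; ++j) {
        double p = a[j] * x[j];            // product error ignored in this sketch
        double t = s + p;
        double bv = t - s;
        err += (s - (t - bv)) + (p - bv);  // exact error of this addition
        s = t;
    }
    return s + err;
}

std::vector<double> forward_subst(const std::vector<std::vector<double>>& L,
                                  const std::vector<double>& b) {
    int n = static_cast<int>(b.size());
    std::vector<double> x(n);
    for (int i = 0; i < n; ++i) {
        double dot = comp_dot(L[i], x, i);  // sum of L[i][j]*x[j], j < i
        x[i] = (b[i] - dot) / L[i][i];      // diagonal assumed nonzero
    }
    return x;
}
```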