37 research outputs found

    Fonctions élémentaires en virgule flottante pour les accélérateurs reconfigurables

    Get PDF
    FPGA reconfigurable circuits now have enough capacity to be used for floating-point computation acceleration tasks. The literature (and, more recently, the vendors) offers operators for the four basic operations. The next step is to offer operators for the most widely used elementary functions. Among these, we propose dedicated architectures for the evaluation of the exponential, logarithm, sine and cosine functions, and we study the possible trade-offs. For each of these functions, a single such operator outperforms general-purpose processors by a factor of ten in terms of throughput, while occupying only a fraction of the FPGA's hardware resources. All these operators are freely available at http://www.ens-lyon.fr/LIP/Arenaire/
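
    The operators above are hardware designs, but the underlying scheme is the classic one of range reduction followed by a small polynomial on the reduced argument. As a rough, purely illustrative software sketch of that general scheme (not the paper's architecture), for the exponential:

```python
import math

def exp_approx(x: float) -> float:
    """Toy illustration of exp(x) via range reduction plus a short
    polynomial, in the spirit of table/polynomial hardware evaluators
    (not the paper's architecture). Write x = k*ln(2) + r with
    |r| <= ln(2)/2, so exp(x) = 2**k * exp(r) with r small."""
    k = round(x / math.log(2))          # integer part in units of ln(2)
    r = x - k * math.log(2)             # reduced argument, |r| small
    # degree-4 Taylor polynomial for exp(r), evaluated with Horner's rule
    p = 1 + r * (1 + r * (0.5 + r * (1/6 + r * (1/24))))
    return math.ldexp(p, k)             # 2**k * p

print(exp_approx(3.7), math.exp(3.7))   # close agreement
```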

    Algorithms and architectures for decimal transcendental function computation

    Get PDF
    Nowadays, there are many commercial demands for decimal floating-point (DFP) arithmetic operations, such as financial analysis, tax calculation, currency conversion, Internet-based applications, and e-commerce. This trend gives rise to further development of DFP arithmetic units which can perform accurate computations with exact decimal operands. Due to the significance of DFP arithmetic, the IEEE 754-2008 standard for floating-point arithmetic includes it in its specifications. The basic decimal arithmetic units, such as the decimal adder, subtracter, multiplier, divider and square-root unit, as main parts of a decimal microprocessor, are attracting more and more researchers' attention. Recently, decimal-encoded formats and DFP arithmetic units have been implemented in IBM's System z900, POWER6, and z10 microprocessors. Increasing chip densities and transistor counts provide more room for designers to add more essential functions for application domains into upcoming microprocessors. Decimal transcendental functions, such as the DFP logarithm, antilogarithm, exponential, reciprocal and trigonometric functions, useful arithmetic operations in many areas of science and engineering, have been specified as recommended arithmetic in the IEEE 754-2008 standard. Thus, virtually all computing systems that are compliant with the IEEE 754-2008 standard could include a DFP mathematical library providing transcendental function computation. Building on the development of basic decimal arithmetic units, more complex DFP transcendental arithmetic will be the next building block in microprocessors. In this dissertation, we researched and developed several new decimal algorithms and architectures for DFP transcendental function computation. These designs comprise several different methods: 1) decimal transcendental function computation based on the table-based first-order polynomial approximation method; 2) DFP logarithmic and antilogarithmic converters based on the decimal digit-recurrence algorithm with selection by rounding; 3) a decimal reciprocal unit using an efficient table look-up based on Newton-Raphson iterations; and 4) a first radix-100 division unit based on the non-restoring algorithm with a pre-scaling method. Most of the decimal algorithms and architectures developed in this dissertation are the first attempts to analyze and implement DFP transcendental arithmetic achieving faithful results on DFP operands, as specified in IEEE 754-2008. To help researchers evaluate the hardware performance of DFP transcendental arithmetic units, the proposed architectures based on the different methods are modeled, verified and synthesized on FPGAs or with CMOS standard-cell libraries in ASIC. Some of the implementation results are compared with those of binary radix-16 logarithmic and exponential converters, a recently developed high-performance decimal CORDIC-based architecture, and Intel's DFP transcendental function computation software library. The comparisons show that the proposed architectures achieve significant speed-ups over the above designs in terms of latency. The algorithms and architectures developed in this dissertation provide a useful starting point for future hardware-oriented DFP transcendental function computation research.
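
    As an illustration of one of the listed methods, the Newton-Raphson reciprocal with a table-based seed can be sketched in software decimal arithmetic (here Python's decimal module; the one-digit seed, working precision and iteration count are illustrative choices, not the dissertation's hardware design):

```python
from decimal import Decimal, getcontext

getcontext().prec = 34  # decimal128-like working precision

def reciprocal(d: Decimal, iterations: int = 6) -> Decimal:
    """Approximate 1/d with Newton-Raphson: x <- x*(2 - d*x).
    Each iteration roughly doubles the number of correct digits.
    A hardware unit would take the seed from a small look-up table
    on the leading digits of d; here we use a crude one-digit seed."""
    assert d > 0
    e = d.adjusted()                    # decimal exponent of leading digit
    m = d.scaleb(-e)                    # mantissa normalised to [1, 10)
    x = Decimal(1) / Decimal(int(m))    # one-digit "table" seed for 1/m
    for _ in range(iterations):
        x = x * (2 - m * x)             # quadratically convergent update
    return x.scaleb(-e)

d = Decimal("7.25")
print(reciprocal(d))    # ~0.1379310344827586206896551724137931
print(Decimal(1) / d)   # reference result
```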

    Comparison of logarithmic and floating-point number systems implemented on Xilinx Virtex-II field-programmable gate arrays

    Get PDF
    The aim of this thesis is to compare the implementation of parameterisable LNS (logarithmic number system) and floating-point high dynamic range number systems on FPGA. The Virtex/Virtex-II range of FPGAs from Xilinx, the most popular FPGA technology, is used to implement the designs. The study focuses on using the low-level primitives of the technology in an efficient way, so the design issues in implementing fixed-point operators are considered first. The four basic operations of addition, multiplication, division and square root are considered. Carry-free adders, ripple-carry adders, parallel multipliers, and digit-recurrence division and square root are discussed. The floating-point operators use the word format and exceptions described by IEEE Std 754. A dual-path adder implementation is described in detail, as are the floating-point multiplier, divider and square-root components. Results and comparisons with other work are given. The efficient implementation of function evaluation methods is considered next. An overview of current FPGA methods is given, and a new piecewise polynomial implementation using the Taylor series is presented and compared with other designs in the literature. In the next section the LNS word format, accuracy and exceptions are described, and two new LNS addition/subtraction function approximations are presented. The algorithms for performing multiplication, division and powering in the LNS domain are also described and compared with other designs in the open literature. Parameterisable conversion algorithms to convert between the fixed-point domain and the LNS and floating-point domains are described and implementation results given. In the next chapter, bit-true MATLAB software models with exactly the same functionality as the hardware models are given. The interfaces of the models are described, together with a serial communication system for performing low-speed system tests. A comparison of the LNS and floating-point number systems in terms of area and delay is given. Different functions implemented in LNS and floating-point arithmetic are also compared and conclusions are drawn. The results show that when the LNS is implemented with a characteristic of 6 bits or fewer it is superior to floating-point. For larger characteristic lengths, however, the floating-point system is more efficient, due to the delay and the exponential area growth of the LNS addition operator. For characteristics larger than 6 bits, the LNS is beneficial only for specialist applications that require a high proportion of division, multiplication, square-root and powering operations and few additions.
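
    The core LNS trade-off described here, multiplication becoming exact addition while addition requires approximating a nonlinear function, can be sketched as follows (a software illustration only; the thesis approximates the addition function sb(d) = log2(1 + 2^d) with hardware function evaluators rather than a math-library call):

```python
import math

def to_lns(x: float) -> float:
    """Logarithmic number system: represent positive x by log2(x).
    (A full LNS word also carries a sign and a zero flag, omitted here.)"""
    return math.log2(x)

def lns_mul(a: float, b: float) -> float:
    """Multiplication in LNS is exact: just add the logarithms."""
    return a + b

def lns_add(a: float, b: float) -> float:
    """Addition needs the nonlinear function sb(d) = log2(1 + 2^d),
    the function this thesis approximates in hardware; here we call
    the math library instead of a table/PWL circuit."""
    hi, lo = max(a, b), min(a, b)
    return hi + math.log2(1 + 2 ** (lo - hi))

x, y = 3.5, 12.0
a, b = to_lns(x), to_lns(y)
print(2 ** lns_mul(a, b), x * y)   # exact
print(2 ** lns_add(a, b), x + y)   # limited by the sb approximation
```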

    Contribution au calcul sur GPU: considérations arithmétiques et architecturales

    Get PDF
    Optimizing computation requires managing hardware and software jointly. This rule becomes all the more important for multicore architectures, where the parameters to consider are more numerous than on a classical superscalar architecture. These architectures offer a great variety of computation units, representation formats, memory hierarchies and data-transfer mechanisms. In this report, we describe some of our results obtained between 2004 and 2013 within the DALI team of the Université de Perpignan on improving the efficiency of computation as a whole, that is, of the sequence of operations described at the algorithmic level and executed by the architectural elements, focusing on graphics processors. We begin with a description of how this type of architecture works, with particular attention to floating-point computation. We then present efficient implementations of arithmetic operators using non-conventional representations such as multiple-precision, interval, fuzzy and logarithmic arithmetic. We continue with our contributions on the architectural elements associated with computation, covering functional simulation, register files, branch handling and specialized hardware operators. Finally, we conclude with an analysis of the behaviour of computation on GPUs with respect to regularity, power consumption, reliability of the computations, and predictability.
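
    Among the non-conventional representations mentioned, multiple-precision arithmetic on GPUs is commonly built from error-free transformations such as two-sum, the basis of "double-double" formats. A minimal sketch of that generic building block (not the DALI team's GPU code):

```python
def two_sum(a: float, b: float):
    """Error-free transformation: returns (s, e) with s = fl(a+b)
    and a + b = s + e exactly, using only rounded float operations.
    This is the basic block of double-double multiprecision
    arithmetic often used on GPUs lacking wide hardware formats."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers (hi, lo) with ~2x double precision."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

a = (1.0, 1e-17)          # 1 + 1e-17, not representable in one double
b = (1e-17, 0.0)
print(dd_add(a, b))       # the low word keeps the 2e-17 a double drops
```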

    From detection to optimization: impact of soft errors on high-performance computing applications

    Get PDF
    As high-performance computing (HPC) continues to progress, constraints on HPC system design force the handling of errors to higher levels in the software stack. Of the types of errors facing HPC, soft errors that silently corrupt system or application state are among the most severe. The behavior of HPC applications in the presence of soft errors is critical to understand for effective utilization of HPC systems, and that understanding can guide algorithm-based error detection informed by application characteristics drawn from fault-injection and error-propagation studies. Furthermore, the realization that applications are tolerant to small errors enables optimizations such as lossy compression on high-cost data transfers. Lossy compression adds small, user-controllable amounts of error when compressing data, reducing data size before expensive transfers and thereby saving time. This dissertation investigates and improves the resiliency of HPC applications to soft errors, and explores lossy compression as a new form of optimization for expensive, time-consuming data transfers.
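
    Algorithm-based error detection of the kind discussed can be illustrated with the classic Huang-Abraham checksum scheme for matrix multiplication; the sketch below shows the generic idea only and is not the dissertation's detector:

```python
import numpy as np

def abft_matmul(A, B):
    """ABFT sketch (Huang-Abraham): append a column-checksum row to A,
    multiply, and verify that each column of the result still sums to
    its checksum. A mismatch flags a silent error in the product."""
    Ac = np.vstack([A, A.sum(axis=0)])      # checksum-extended A
    Cc = Ac @ B
    C, check = Cc[:-1], Cc[-1]
    ok = np.allclose(C.sum(axis=0), check)
    return C, ok

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))

C, ok = abft_matmul(A, B)
print(ok)                        # True: no error

# simulate a soft error silently corrupting one entry of the product
Cc = np.vstack([A, A.sum(axis=0)]) @ B
Cc[1, 2] += 1.0                  # the corruption
print(np.allclose(Cc[:-1].sum(axis=0), Cc[-1]))   # False: detected
```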

    Implementation and Applications of Logarithmic Signal Processing on an FPGA

    Get PDF
    This thesis presents two novel algorithms for converting a normalised binary floating-point number into a binary logarithmic number with single-precision accuracy. The thesis highlights the importance of logarithmic number systems in real-time DSP applications, and presents a real-time cross-correlation application in which logarithmic signal processing is used to simplify the complex computation. The first algorithm comprises two stages: a piecewise linear approximation of the original logarithmic curve is performed in the first stage, and a scaled-down normalised error curve is stored in the second stage. The algorithm requires less than 20 kbits of ROM and at most three small multipliers. The architecture is implemented on Xilinx's Spartan3 and Spartan6 FPGA families. Synthesis results confirm that the algorithm operates at 42.3 MHz on a Spartan3 device and 127.8 MHz on a Spartan6, with a pipeline latency of two clocks in both cases. The operating speeds increase to 71.4 MHz and 160 MHz respectively when the pipeline latencies increase to eight clocks. The algorithm is further improved by combining a PWL (piecewise linear) approximation of the transform curve with a PWL approximation of a scaled version of the normalised segment error. A hardware approach that reduces the memory requirement using additional XOR gates in the second stage is also presented. This architecture uses just one 18 kbit Block RAM (BRAM), and synthesis results indicate operating frequencies of 93 MHz and 110 MHz when implemented on Xilinx Spartan3 and Spartan6 devices respectively. Finally, a novel prototype of an FPGA-based four-channel correlation velocimetry system is presented. The system operates at a higher sampling frequency than previously published work and outputs a new result after every sample it receives. The system runs at a sampling frequency of 195.31 kHz with a sample resolution of 12 bits, and calculates delays in the range 0 to 2.6 ms with a resolution of 5.12 µs.
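
    The first stage described above, a piecewise linear approximation of the log curve over the mantissa range, can be sketched in software as follows (the segment count and table layout are illustrative; the thesis's second stage, which corrects the segment error from a stored normalised error curve, is omitted):

```python
import math

SEGMENTS = 16  # 16 linear segments over the mantissa range [1, 2)

# precompute slope/intercept per segment, as a ROM would store them
TABLE = []
for i in range(SEGMENTS):
    x0, x1 = 1 + i / SEGMENTS, 1 + (i + 1) / SEGMENTS
    slope = (math.log2(x1) - math.log2(x0)) / (x1 - x0)
    TABLE.append((slope, math.log2(x0) - slope * x0))

def log2_pwl(x: float) -> float:
    """Piecewise-linear log2 sketch in the spirit of the first stage
    described above: split x into exponent and mantissa m in [1, 2),
    then evaluate one stored linear segment (one multiply, one add)."""
    m, e = math.frexp(x)        # x = m * 2**e with m in [0.5, 1)
    m, e = 2 * m, e - 1         # renormalise to m in [1, 2)
    i = min(int((m - 1) * SEGMENTS), SEGMENTS - 1)
    slope, intercept = TABLE[i]
    return e + slope * m + intercept

print(log2_pwl(42.7), math.log2(42.7))  # agree to ~3 decimal places
```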

    Proceedings of the 7th Conference on Real Numbers and Computers (RNC'7)

    Get PDF
    These are the proceedings of RNC'7.

    Approche arithmétique RNS de la cryptographie asymétrique

    Get PDF
    This thesis stands at the crossroads of cryptography and computer arithmetic. It deals with the enhancement of cryptographic primitives, in terms of both computation acceleration and protection against fault injection, through the use of residue number systems (RNS) and their associated arithmetic. To help secure modular multiplication, a core operation of many asymmetric cryptographic primitives, a new modular reduction algorithm equipped with fault-detection capability is presented. A formal proof guarantees that faults affecting one or more residues during a modular reduction are detected. Furthermore, this approach is generalized to an arithmetic dedicated to non-prime finite fields GF(p^s). RNS are then applied to lattice-based cryptography, the aim being to exploit the acceleration properties enabled by RNS, as is widely done for finite-field arithmetic. The first result is a new version of Babai's round-off algorithm based on a hybrid RNS-MRS representation. A new, specific acceleration technique then yields a fully RNS algorithm for computing a close lattice vector.
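
    For readers unfamiliar with RNS, the representation itself can be sketched in a few lines: arithmetic proceeds independently in each residue channel (hence both the parallel acceleration and the per-residue fault model), and a result is recovered by the Chinese Remainder Theorem. The moduli below are illustrative, and this is not the thesis's fault-detecting reduction:

```python
from math import prod

MODULI = (2**13 - 1, 2**13, 2**13 + 1)   # pairwise coprime RNS base
M = prod(MODULI)

def to_rns(x: int):
    """Represent x < M by its residues; each channel is independent."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Multiplication (likewise addition) is carry-free across channels,
    which is what makes RNS attractive for parallel modular arithmetic."""
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M

a, b = 123456, 7890
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b % M
```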

    Eight Biennial Report: April 2005 – March 2007

    No full text

    Codage de cartes de profondeur par deformation de courbes elastiques

    Get PDF
    In multiple-view video plus depth, depth maps can be represented as grayscale images, and the corresponding temporal sequence can be treated as a standard grayscale video sequence. However, depth maps have properties different from natural images: they present large areas of smooth surfaces separated by sharp edges. Arguably the most important information lies in object contours; as a consequence, an interesting approach consists in performing a lossless coding of the contour map, possibly followed by a lossy coding of per-object depth values. In this context, we propose a new technique for the lossless coding of object contours, based on the elastic deformation of curves. A continuous evolution of elastic deformations between two reference contour curves can be modelled, and an elastically deformed version of the reference contours can be sent to the decoder at an extremely small coding cost and used as side information to improve the lossless coding of the actual contour. Once the main discontinuities have been captured by the contour description, the depth field inside each region is rather smooth. We proposed and tested two different techniques for the coding of the depth field inside each region. The first technique applies the shape-adaptive wavelet transform followed by the shape-adaptive version of SPIHT. The second technique predicts the depth field from its subsampled version and the set of coded contours. It is generally recognized that high-quality view rendering at the receiver side is possible only if the contour information is preserved, since distortions on edges introduced during encoding would cause a noticeable degradation of the synthesized view and of the 3D perception. We investigated this claim by conducting a subjective quality assessment comparing an object-based technique and a hybrid block-based technique for the coding of depth maps.
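
    The second depth-coding technique, prediction from a subsampled version of the smooth per-region field, can be sketched as follows (a toy version using bilinear interpolation on a synthetic region; the real method also exploits the coded contours so as not to interpolate across object boundaries):

```python
import numpy as np

def predict_from_subsampled(depth, step=4):
    """Toy version of the prediction idea above: keep a subsampled grid
    of the (smooth) per-region depth field, predict the full field by
    bilinear interpolation, and code only the residual."""
    h, w = depth.shape
    coarse = depth[::step, ::step]                  # transmitted samples
    ny, nx = coarse.shape
    # bilinear interpolation back to full resolution
    yy, xx = np.mgrid[0:h, 0:w]
    fy = np.clip(yy / step, 0, ny - 1)
    fx = np.clip(xx / step, 0, nx - 1)
    y0, x0 = fy.astype(int), fx.astype(int)
    y1, x1 = np.minimum(y0 + 1, ny - 1), np.minimum(x0 + 1, nx - 1)
    wy, wx = fy - y0, fx - x0
    pred = ((1 - wy) * (1 - wx) * coarse[y0, x0]
            + (1 - wy) * wx * coarse[y0, x1]
            + wy * (1 - wx) * coarse[y1, x0]
            + wy * wx * coarse[y1, x1])
    return pred, depth - pred                       # prediction + residual

yy, xx = np.mgrid[0:29, 0:29]
depth = 100 + 0.5 * yy + 0.2 * xx                   # smooth synthetic region
pred, resid = predict_from_subsampled(depth)
print(np.abs(resid).max())                          # tiny for smooth fields
```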