53 research outputs found

    Approximation and Compression Techniques to Enhance Performance of Graphics Processing Units

    Get PDF
    A key challenge in modern computing systems is to access data fast enough to fully utilize the computing elements in the chip. In Graphics Processing Units (GPUs), performance is often constrained by register file size, memory bandwidth, and the capacity of the main memory. One important technique for alleviating this challenge is data compression. By reducing the amount of data that needs to be communicated or stored, memory resources crucial for performance can be used efficiently. This thesis provides a set of approximation and compression techniques for GPUs, with the goal of efficiently utilizing the computational fabric and thereby increasing performance. The thesis shows that these techniques can substantially lower the amount of information the system has to process, and that they are thus important tools for meeting challenges in memory utilization. This thesis makes contributions within three areas: controlled floating-point precision reduction, lossless and lossy memory compression, and distributed training of neural networks. In the first area, the thesis shows that through automated and controlled floating-point approximation, the register file can be utilized more efficiently. This is achieved through a framework which establishes a cross-layer connection between the application and the microarchitecture layer, and a novel register file organization capable of leveraging low-precision floating-point values and narrow integers for increased capacity and performance. Within the area of compression, this thesis aims at increasing the effective bandwidth of GPUs by presenting a lossless and lossy memory compression algorithm that reduces the amount of transferred data. In contrast to state-of-the-art compression techniques such as Base-Delta-Immediate and Bitplane Compression, which use intra-block bases for compression, the proposed algorithm leverages multiple global base values to reach a higher compression ratio. The algorithm includes an optional approximation step for floating-point values which offers a higher compression ratio at a given, low, error rate. Finally, within the area of distributed training of neural networks, this thesis proposes a subgraph approximation scheme for graph data which mitigates accuracy loss in a distributed setting. The scheme allows neural network models that use graphs as inputs to converge at single-machine accuracy, while minimizing synchronization overhead between the machines.
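
    As an illustration of the contrast between intra-block and global bases, the following C++ sketch tests whether a cache block can be encoded as small deltas from one of a few shared global base values. The base set and the 16-bit delta width are invented for the example, not taken from the thesis.

        #include <cstddef>
        #include <cstdint>
        #include <optional>
        #include <vector>

        // Hypothetical global-base delta encoding: every 32-bit word in a block
        // must be expressible as a small signed delta from one shared base value.
        struct Encoding {
            std::size_t base_index;        // which global base was used
            std::vector<int16_t> deltas;   // 16-bit deltas replace 32-bit words
        };

        std::optional<Encoding> try_encode(const std::vector<uint32_t>& block,
                                           const std::vector<uint32_t>& global_bases) {
            for (std::size_t b = 0; b < global_bases.size(); ++b) {
                std::vector<int16_t> deltas;
                bool fits = true;
                for (uint32_t word : block) {
                    int64_t d = int64_t(word) - int64_t(global_bases[b]);
                    if (d < INT16_MIN || d > INT16_MAX) { fits = false; break; }
                    deltas.push_back(static_cast<int16_t>(d));
                }
                if (fits) return Encoding{b, deltas};   // compressed representation
            }
            return std::nullopt;                        // block stays uncompressed
        }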

    Real Time 3-D Graphics Processing Hardware Design using Field-Programmable Gate Arrays.

    Get PDF
    Three-dimensional graphics processing requires many complex algebraic and matrix-based operations to be performed in real time. In the early stages of graphics processing, such tasks were delegated to a Central Processing Unit (CPU). Over time, as more complex graphics rendering was demanded, CPU solutions became inadequate. To meet this demand, custom hardware solutions that take advantage of pipelining and massive parallelism became preferable to CPU software-based solutions. This has led to the many custom hardware solutions that are available today. Since real-time graphics processing requires extremely high performance, hardware solutions using Application-Specific Integrated Circuits (ASICs) are the standard within the industry. While ASICs are a more than adequate solution for implementing high-performance custom hardware, the design, implementation, and testing of ASIC-based designs are becoming cost prohibitive due to the massive up-front verification effort needed as well as the cost of fixing design defects. Field-Programmable Gate Arrays (FPGAs) provide an alternative to the ASIC design flow. More importantly, in recent years FPGA technology has improved to the point where ASIC and FPGA performance has become comparable. In addition, FPGAs address many of the issues of the ASIC design flow. The ability to reconfigure FPGAs reduces the up-front verification effort and allows design defects to be fixed easily. This thesis demonstrates that a 3-D graphics processor implementation on an FPGA is feasible by implementing both a two-dimensional and a three-dimensional graphics processor prototype. Using a Xilinx Virtex 5 ML506 FPGA development kit, a fully functional wireframe graphics rendering engine is implemented using VHDL and Xilinx's development tools. A VHDL testbench was designed to verify that the graphics engine works functionally. This is followed by synthesizing the design onto real hardware and developing test applications to verify the functionality and performance of the design. This thesis provides the groundwork for pushing forward the use of FPGA technology in graphics processing applications.
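
    As a rough illustration of the per-vertex work such a wireframe engine performs (the actual design is written in VHDL; the projection parameters below are invented for the example), a perspective projection from 3-D model space to screen pixels can be sketched as:

        #include <cmath>

        // Illustrative perspective projection only; not taken from the thesis.
        struct Vec3  { float x, y, z; };
        struct Pixel { int x, y; };

        Pixel project(const Vec3& v, int width, int height, float fov_deg) {
            float f = 1.0f / std::tan(fov_deg * 0.5f * 3.14159265f / 180.0f);
            float aspect = float(width) / float(height);
            // Perspective divide; assumes the vertex lies in front of the camera (v.z > 0).
            float ndc_x = (f / aspect) * v.x / v.z;
            float ndc_y = f * v.y / v.z;
            return { int((ndc_x + 1.0f) * 0.5f * width),
                     int((1.0f - ndc_y) * 0.5f * height) };
        }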

    H-SIMD machine : configurable parallel computing for data-intensive applications

    Get PDF
    This dissertation presents a hierarchical single-instruction multiple-data (H-SIMD) configurable computing architecture to facilitate the efficient execution of data-intensive applications on field-programmable gate arrays (FPGAs). H-SIMD targets data-intensive applications for FPGA-based system designs. The H-SIMD machine is associated with a hierarchical instruction set architecture (HISA) which is developed for each application. The main objectives of this work are to facilitate ease of program development and high performance through ease of scheduling operations and overlapping communications with computations. The H-SIMD machine is composed of the host, FPGA, and nano-processor layers. They execute host SIMD instructions (HSIs), FPGA SIMD instructions (FSIs), and nano-processor instructions (NPIs), respectively. A distinction between communication and computation instructions is made at all the HISA layers. The H-SIMD machine also employs a memory switching scheme to bridge the omnipresent large bandwidth gaps in configurable systems. To showcase the proposed high-performance approach, the conditions to fully overlap communications with computations are investigated for important applications. The building blocks of the H-SIMD machine, such as high-performance and area-efficient register files, are presented in detail. The H-SIMD machine hierarchy is implemented on a host Dell workstation and the Annapolis Wildstar II FPGA board. Significant speedups have been achieved for matrix multiplication (MM), the 2-dimensional discrete cosine transform (2D DCT), and the 2-dimensional fast Fourier transform (2D FFT), which are used widely in science and engineering. In another FPGA-based programming paradigm, a high-level language (here ANSI C) can be used to program the FPGAs in a mode similar to that of the H-SIMD machine in terms of trying to minimize the effect of overheads. More specifically, a multi-threaded overlapping scheme is proposed to reduce as much as possible, or even completely hide, runtime FPGA reconfiguration overheads. Nevertheless, although the HLL-enabled reconfigurable machine allows software developers to customize FPGA functions easily, special architecture techniques are needed to achieve high performance without a significant penalty on area and clock frequency. Two important high-performance applications, matrix multiplication and image edge detection, are tested on the SRC-6 reconfigurable machine. The implemented algorithms are able to exploit the available data parallelism with independent functional units and application-specific cache support. Relevant performance and design tradeoffs are analyzed.
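
    The central performance idea, overlapping data movement with computation, can be sketched in C++ with a simple double-buffered prefetch. The fetch_tile and compute_tile functions below are placeholders for the real transfers and FPGA kernels, not part of the dissertation.

        #include <future>
        #include <vector>

        // Placeholder for fetching the next data tile (e.g., over the host-FPGA link).
        std::vector<float> fetch_tile(int i) { return std::vector<float>(1024, float(i)); }

        // Placeholder for the computation performed on one tile.
        float compute_tile(const std::vector<float>& tile) {
            float s = 0.0f;
            for (float v : tile) s += v;
            return s;
        }

        // Double buffering: while tile i is being processed, tile i+1 is fetched,
        // so communication is (ideally) fully hidden behind computation.
        float process_all(int num_tiles) {
            float total = 0.0f;
            auto next = std::async(std::launch::async, fetch_tile, 0);
            for (int i = 0; i < num_tiles; ++i) {
                std::vector<float> current = next.get();
                if (i + 1 < num_tiles)
                    next = std::async(std::launch::async, fetch_tile, i + 1);
                total += compute_tile(current);
            }
            return total;
        }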

    Efficient floating-point givens rotation unit

    Get PDF
    This is a post-peer-review, pre-copyedit version of an article published in Circuits, Systems, and Signal Processing. High-throughput QR decomposition is a key operation in many advanced signal processing and communication applications. For some of these applications, using floating-point computation is becoming almost compulsory. However, hardware implementations of floating-point QR decomposition for embedded systems are scarce. In this paper, we propose a very efficient high-throughput floating-point Givens rotation unit for QR decomposition. Moreover, the initial design, proposed for conventional number formats, is enhanced by using the new Half-Unit Biased format. The provided error analysis shows the effectiveness of our proposals and the trade-offs of different implementation parameters. We also present FPGA implementation results and a thorough comparison between both approaches. These implementation results also reveal substantial improvements over previous similar designs in terms of area, latency, and throughput. This work was supported in part by the following Spanish projects: TIN2016-80920-R and JA2012 P12-TIC-169.
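
    For reference, the scalar operation that such a unit evaluates at high throughput is the textbook Givens rotation; a minimal software version, ignoring the pipelining and the Half-Unit Biased format discussed in the paper, looks like:

        #include <cmath>

        // Given (a, b), compute c, s, r such that
        //   [  c  s ] [ a ]   [ r ]
        //   [ -s  c ] [ b ] = [ 0 ]
        // Repeated rotations of this form zero out subdiagonal entries during QR decomposition.
        void givens(float a, float b, float& c, float& s, float& r) {
            if (b == 0.0f) { c = 1.0f; s = 0.0f; r = a; return; }
            r = std::hypot(a, b);   // sqrt(a*a + b*b) without intermediate overflow
            c = a / r;
            s = b / r;
        }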

    Automated Dynamic Error Analysis Methods for Optimization of Computer Arithmetic Systems

    Get PDF
    Computer arithmetic is one of the more important topics within computer science and engineering. The earliest computer systems were designed to perform arithmetic operations, and most, if not all, digital systems are required to perform some sort of arithmetic as part of their normal operation. This reliance on arithmetic means that the accurate representation of real numbers within digital systems is vital, and an understanding of how these systems are implemented and their possible drawbacks is essential in order to design and implement modern high-performance systems. At present, the most widely implemented system for computer arithmetic is IEEE 754 floating point. While this system is deemed to be the best available implementation, it has several features that can result in serious computation errors if not handled correctly. Lack of understanding of these errors and their effects has led to real-world disasters on several occasions. Systems for the detection of these errors are therefore highly important, and fast, efficient, and easy-to-use implementations of such detection systems are a high priority. Detection of floating-point rounding errors normally requires run-time analysis in order to be effective. Several systems have been proposed for the analysis of floating-point arithmetic, including Interval Arithmetic, Affine Arithmetic, and Monte Carlo Arithmetic. While these systems have been well studied using theoretical and software-based approaches, implementations that can be applied to real-world situations have been limited due to issues with implementation, performance, and scalability. The majority of implementations have been software based and have not taken advantage of the performance gains associated with hardware-accelerated computer arithmetic systems. This is especially problematic given that systems requiring high accuracy will often also require high performance. The aim of this thesis and the associated research is to increase understanding of error and error analysis methods through the development of easy-to-use and easy-to-understand implementations of these techniques.
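
    Of the analysis methods mentioned, Monte Carlo Arithmetic is perhaps the simplest to sketch: each inexact operation is perturbed by a random amount below the working precision, and the spread over repeated runs indicates how many digits can be trusted. The C++ snippet below is a minimal illustration; the perturbation magnitude of one part in 2^24 is an assumption, not a parameter from the thesis.

        #include <cmath>
        #include <random>

        // Perturb x by a random fraction of a "virtual" last bit, emulating random rounding.
        double mca_round(double x, std::mt19937& rng) {
            std::uniform_real_distribution<double> noise(-0.5, 0.5);
            double virtual_ulp = std::ldexp(std::fabs(x), -24);   // ~single-precision granularity
            return x + noise(rng) * virtual_ulp;
        }

        // An MCA-instrumented addition: the exact sum receives a random rounding error.
        double mca_add(double a, double b, std::mt19937& rng) {
            return mca_round(a + b, rng);
        }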

    Integrated Programmable-Array accelerator to design heterogeneous ultra-low power manycore architectures

    Get PDF
    There is an ever-increasing demand for energy efficiency (EE) in rapidly evolving Internet-of-Things end nodes. This pushes researchers and engineers to develop solutions that provide both Application-Specific Integrated Circuit-like EE and Field-Programmable Gate Array-like flexibility. One such solution is the Coarse-Grain Reconfigurable Array (CGRA). Over the past decades, CGRAs have evolved and are competing to become mainstream hardware accelerators, especially for accelerating Digital Signal Processing (DSP) applications. Due to the over-specialization of computing architectures, the focus is shifting towards fitting an extensive data representation range into fewer bits; for example, a 32-bit space can represent a wider data range with a floating-point (FP) representation than with an integer representation. Computation using FP representation requires numerous encodings and leads to complex circuits for the FP operators, decreasing the EE of the entire system. This thesis presents the design of an EE ultra-low-power CGRA with native support for FP computation by leveraging an emerging paradigm of approximate computing called transprecision computing. We also present contributions to the compilation toolchain and the system-level integration of the CGRA in a System-on-Chip, to envision the proposed CGRA as an EE hardware accelerator. Finally, an extensive set of experiments using real-world algorithms employed in near-sensor processing applications is performed, and the results are compared with state-of-the-art (SoA) architectures. It is empirically shown that our proposed CGRA provides better results than SoA architectures in terms of power, performance, and area.
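
    One concrete transprecision operation is narrowing a standard 32-bit float to a 16-bit value that keeps the full exponent range but fewer significand bits. The truncation below only illustrates this idea; whether the proposed CGRA uses exactly this format is not stated in the abstract.

        #include <cstdint>
        #include <cstring>

        // Keep the sign, the 8-bit exponent, and the top 7 significand bits of a binary32
        // value (a bfloat16-style truncation): range is preserved, precision is reduced.
        uint16_t to_reduced(float x) {
            uint32_t bits;
            std::memcpy(&bits, &x, sizeof bits);        // reinterpret the float's bit pattern
            return static_cast<uint16_t>(bits >> 16);   // drop the low 16 significand bits
        }

        float from_reduced(uint16_t h) {
            uint32_t bits = static_cast<uint32_t>(h) << 16;
            float x;
            std::memcpy(&x, &bits, sizeof x);
            return x;
        }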

    FloatX: A C++ Library for Customized Floating-Point Arithmetic

    Full text link
    "© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Mathematical Software, {45, 4, (2019)} https://dl.acm.org/doi/10.1145/3368086" [EN] We present FloatX (Float eXtended), a C++ framework to investigate the effect of leveraging customized floating-point formats in numerical applications. FloatX formats are based on binary IEEE 754 with smaller significand and exponent bit counts specified by the user. Among other properties, FloatX facilitates an incremental transformation of the code, relies on hardware-supported floating-point types as a back-end to preserve efficiency, and incurs no storage overhead. The article discusses in detail the design principles, programming interface, and datatype casting rules behind FloatX. Furthermore, it demonstrates FloatX's usage and benefits via several case studies from well-known numerical dense linear algebra libraries, such as BLAS and LAPACK; the Ginkgo library for sparse linear systems; and two neural network applications related to image processing and text recognition. This work was supported by the CICYT projects TIN2014-53495-R and TIN2017-82972-R of the MINECO and FEDER, and the EU H2020 project 732631 "OPRECOMP. Open Transprecision Computing." Flegar, G.; Scheidegger, F.; Novakovic, V.; Mariani, G.; Tomás Domínguez, AE.; Malossi, C.; Quintana-Ortí, ES. (2019). FloatX: A C++ Library for Customized Floating-Point Arithmetic. ACM Transactions on Mathematical Software, 45(4):1-23. https://doi.org/10.1145/3368086
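
    The core idea, emulating a format with a user-chosen significand width on top of a hardware-supported type, can be sketched independently of the FloatX API. The function below is not part of the library and omits exponent-range clamping and the library's casting rules.

        #include <cmath>

        // Round x to 'sig_bits' significand bits, using the hardware double type as carrier.
        double round_to_significand(double x, int sig_bits) {
            if (x == 0.0 || !std::isfinite(x)) return x;
            int e;
            std::frexp(x, &e);                              // x = m * 2^e with 0.5 <= |m| < 1
            double scale = std::ldexp(1.0, sig_bits - e);   // put the last kept bit at weight 1
            return std::nearbyint(x * scale) / scale;       // round to nearest on that grid
        }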

    PERCIVAL: Open-source posit RISC-V core with quire capability

    Get PDF
    The posit representation for real numbers is an alternative to the ubiquitous IEEE 754 floating-point standard. In this work, we present PERCIVAL, an application-level posit RISC-V core based on CVA6 that can execute all posit instructions, including the quire fused operations. This solves the obstacle encountered by previous works, which only included partial posit support or had to emulate posits in software. In addition, Xposit, a RISC-V extension for posit instructions, is incorporated into LLVM. PERCIVAL is therefore the first work that integrates the complete posit instruction set in hardware. These elements allow for the native execution of posit instructions as well as the standard floating-point ones, further permitting the comparison of these representations. FPGA and ASIC synthesis show the hardware cost of implementing 32-bit posits and highlight the significant overhead of including a quire accumulator. However, results show that the quire enables a more accurate execution of dot products. In general matrix multiplications, the accuracy error is reduced by up to 4 orders of magnitude. Furthermore, performance comparisons show that these accuracy improvements do not hinder their execution, as posits run as fast as single-precision floats and exhibit better timing than double-precision floats, thus potentially providing an alternative representation.
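
    The benefit of the quire comes from accumulating the dot product exactly and rounding only once at the end. The contrast can be illustrated without any posit hardware by comparing a per-step-rounded accumulation with a wider accumulator; this is only an analogy, not an implementation of posits or of the quire.

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<float> a(1 << 20, 1.0e-3f), b(1 << 20, 1.0e-3f);
            float naive = 0.0f;   // rounded to float after every addition
            double wide = 0.0;    // stands in for a quire-like wide accumulator
            for (std::size_t i = 0; i < a.size(); ++i) {
                naive += a[i] * b[i];
                wide  += static_cast<double>(a[i]) * b[i];
            }
            std::printf("per-step rounding: %.9g   wide accumulator: %.9g\n", naive, wide);
            return 0;
        }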