1,495 research outputs found

    A general framework for efficient FPGA implementation of matrix product

    Original article can be found at: http://www.medjcn.com/ Copyright Softmotor Limited.
    High-performance systems are required by developers for the fast processing of computationally intensive applications. Reconfigurable hardware devices in the form of Field-Programmable Gate Arrays (FPGAs) have been proposed as viable building blocks for constructing high-performance systems at an economical price. Given the importance and widespread use of matrix algorithms in scientific computing applications, they are ideal candidates for harnessing and exploiting the advantages offered by FPGAs. In this paper, a system for generating matrix algorithm cores is described. The system provides a catalogue of efficient, user-customizable cores designed for FPGA implementation, covering three matrix algorithm categories: (i) matrix operations, (ii) matrix transforms and (iii) matrix decompositions. A generated core can be either a general-purpose core or a specific application core. The methodology used in the design and implementation of two specific image processing application cores is presented. The first core is a fully pipelined matrix multiplier for colour space conversion based on distributed arithmetic principles, while the second is a parallel floating-point matrix multiplier designed for 3D affine transformations. Peer reviewed.
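
    The colour-space-conversion core described above reduces to a constant 3x3 matrix multiply per pixel, which is exactly the kind of fixed-coefficient computation that distributed arithmetic maps onto look-up tables and adders. The sketch below shows the operation in software form, using the common BT.601 RGB-to-YCbCr coefficients as an illustrative stand-in for whatever conversion matrix a generated core would be configured with; it is not taken from the paper's hardware design.

        // Illustrative sketch (assumed BT.601 coefficients): RGB -> YCbCr as a
        // constant 3x3 matrix multiply per pixel. In the FPGA core, the
        // multiplications by these fixed coefficients are the part that
        // distributed arithmetic replaces with table look-ups and additions.
        #include <algorithm>
        #include <array>
        #include <cstdint>

        std::array<uint8_t, 3> rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b) {
            static const double M[3][3] = {
                { 0.299,     0.587,     0.114    },
                {-0.168736, -0.331264,  0.5      },
                { 0.5,      -0.418688, -0.081312 }
            };
            const double in[3]     = { double(r), double(g), double(b) };
            const double offset[3] = { 0.0, 128.0, 128.0 };  // chroma channels are offset by 128
            std::array<uint8_t, 3> out{};
            for (int i = 0; i < 3; ++i) {
                double acc = offset[i];
                for (int j = 0; j < 3; ++j)
                    acc += M[i][j] * in[j];   // one multiply-accumulate per coefficient
                out[i] = static_cast<uint8_t>(std::clamp(acc, 0.0, 255.0));
            }
            return out;
        }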

    Heterogeneous parallel computing in image registration and linear algebra applications

    This doctoral thesis focuses on GPU acceleration of medical image registration and sparse general matrix-matrix multiplication (SpGEMM). The work presented here aims to enable new possibilities in Image Guided Surgery (IGS). IGS provides the surgeon with advanced navigation tools during surgery. Image registration, which is part of IGS, is computationally demanding, so GPU acceleration is highly desirable. SpGEMM, which is an essential part of many scientific and data-analytics applications, e.g., graph applications, is also a useful tool in biomechanical modelling and sparse vessel-network registration. We present this work in two parts. The first part of this thesis describes the optimization of the most demanding part of non-rigid Free-Form Deformation registration, i.e., B-spline interpolation. Our novel optimization technique minimizes data movement between processing cores and memory and maximizes utilization of the very fast register file. In addition, our approach reformulates B-spline interpolation to fully utilize fused multiply-accumulate instructions for additional benefits in performance and accuracy. Our optimized B-spline interpolation provides a significant speedup to image registration. The second part describes the optimization of SpGEMM. Hardware manufacturers, aiming to increase the performance of deep learning, created specialized dense matrix multiplication units called Tensor Core Units (TCUs). However, until now, no work has taken advantage of TCUs for sparse matrix multiplication. With this work we provide the first TCU implementation of SpGEMM and demonstrate its benefits over conventional GPU SpGEMM.
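
    The FMA reformulation mentioned above can be illustrated in one dimension: a uniform cubic B-spline interpolant evaluates four basis weights and accumulates four weighted coefficients, and every step can be expressed as a fused multiply-add. The sketch below is a minimal software illustration under that assumption; the thesis itself targets 3-D free-form deformation on the GPU, and the function and variable names here are hypothetical.

        // Minimal 1-D illustration (not the thesis's GPU kernel): uniform cubic
        // B-spline interpolation written so every operation is a fused
        // multiply-add, mirroring the FMA-oriented reformulation described above.
        #include <algorithm>
        #include <cmath>
        #include <cstddef>

        double bspline1d(const double* coeff, std::size_t n, double x) {
            long   i = static_cast<long>(std::floor(x)) - 1;  // left-most of 4 control points
            double t = x - std::floor(x);                     // fractional position in [0,1)

            // Cubic B-spline basis weights in Horner form: three FMAs per weight.
            double w[4];
            w[0] = std::fma(std::fma(std::fma(-1.0/6.0, t,  0.5), t, -0.5), t, 1.0/6.0);
            w[1] = std::fma(std::fma(std::fma( 0.5,     t, -1.0), t,  0.0), t, 2.0/3.0);
            w[2] = std::fma(std::fma(std::fma(-0.5,     t,  0.5), t,  0.5), t, 1.0/6.0);
            w[3] = (1.0/6.0) * t * t * t;

            // Accumulate the four weighted control points, again with FMAs,
            // clamping indices at the array boundaries.
            double acc = 0.0;
            for (int k = 0; k < 4; ++k) {
                long idx = std::min(std::max(i + k, 0L), static_cast<long>(n) - 1);
                acc = std::fma(w[k], coeff[idx], acc);
            }
            return acc;
        }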

    GSWO: A Programming Model for GPU-enabled Parallelization of Sliding Window Operations in Image Processing

    Sliding Window Operations (SWOs) are widely used in image processing applications. They often have to be performed repeatedly across the target image, which can demand significant computing resources when processing large images with large windows. In applications where real-time performance is essential, running these filters on a CPU often fails to deliver results within an acceptable timeframe. The emergence of sophisticated graphics processing units (GPUs) presents an opportunity to address this challenge. However, GPU programming requires a steep learning curve and is error-prone for novices, so a tool that can produce a GPU implementation automatically from the original CPU source code provides an attractive means by which GPU power can be harnessed effectively. This paper presents a GPU-enabled programming model, called GSWO, which can assist GPU novices by converting their SWO-based image processing applications from the original C/C++ source code to CUDA code in a highly automated manner. The model includes a new set of simple SWO pragmas to generate GPU kernels and to support effective GPU memory management. We have implemented this programming model based on a CPU-to-GPU translator (C2GPU). Evaluations have been performed on a number of typical SWO image filters and applications. The experimental results show that the GSWO model is capable of efficiently accelerating these applications, with improved applicability and performance speedups compared to several leading CPU-to-GPU source-to-source translators.
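
    For reference, the sketch below shows the kind of loop nest that qualifies as a sliding window operation: a 3x3 mean filter in plain C++ in which each output pixel is computed independently from a small neighbourhood. This is only an assumed example of the workload class GSWO targets; the actual GSWO pragma syntax is defined in the paper and is not reproduced here.

        // Assumed example of an SWO workload (not GSWO's own syntax): a 3x3 mean
        // filter. Each output pixel depends only on a small window around it, so
        // the loop nest is a natural candidate for automatic CUDA kernel generation.
        #include <vector>

        void mean3x3(const std::vector<float>& in, std::vector<float>& out,
                     int width, int height) {
            for (int y = 1; y < height - 1; ++y) {
                for (int x = 1; x < width - 1; ++x) {
                    float sum = 0.0f;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            sum += in[(y + dy) * width + (x + dx)];
                    out[y * width + x] = sum / 9.0f;   // average over the 3x3 window
                }
            }
        }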

    FPGA acceleration of DNA sequence alignment: design analysis and optimization

    Existing FPGA accelerators for short read mapping often fail to utilize the complete biological information in sequencing data in favour of simpler hardware designs, leading to missed or incorrect alignments. In this work, we propose a runtime-reconfigurable alignment pipeline that considers all information in the sequencing data for biologically accurate acceleration of short read mapping. We focus our efforts on accelerating two string matching techniques commonly used in short read mapping: the FM-index and the Smith-Waterman algorithm with the affine-gap model. We further optimize the FPGA hardware using a design analyzer and merger to improve alignment performance. The contributions of this work are as follows:
    1. We accelerate exact-match and mismatch alignment by leveraging the FM-index technique. We optimize memory access by compressing the data structure and interleaving the accesses of multiple short reads. The FM-index hardware also considers the complete information in the read data to maximize accuracy.
    2. We propose a seed-and-extend model to accelerate alignment with indels. The FM-index hardware is extended to support the seeding stage, while a Smith-Waterman implementation with the affine-gap model (a software sketch of this recurrence appears below) is developed on the FPGA for the extension stage. This model improves the efficiency of indel alignment with accuracy comparable to state-of-the-art software.
    3. We present an approach for merging multiple FPGA designs into a single hardware design, so that multiple place-and-route tasks can be replaced by a single task to speed up functional evaluation of designs. We first experiment with this approach to demonstrate its feasibility for different designs, then apply it to optimize one of the proposed FPGA aligners for better alignment performance.
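
    The affine-gap Smith-Waterman recurrence used in the extension stage is the standard Gotoh formulation, sketched below in plain software form for reference. The scoring parameters are placeholders, and the FPGA design evaluates the same recurrence as a systolic array rather than this row-by-row loop.

        // Reference sketch of affine-gap local alignment (Gotoh recurrence).
        // Scoring parameters are placeholders; the FPGA extension stage computes
        // the same recurrence in a systolic array rather than nested loops.
        #include <algorithm>
        #include <string>
        #include <vector>

        int smith_waterman_affine(const std::string& a, const std::string& b,
                                  int match = 2, int mismatch = -3,
                                  int gap_open = 5, int gap_extend = 1) {
            const int m = static_cast<int>(a.size());
            const int n = static_cast<int>(b.size());
            const int NEG = -1000000;
            // H: best score ending with a match/mismatch at (i, j);
            // E/F: best scores ending with a gap in sequence a or b respectively.
            std::vector<std::vector<int>> H(m + 1, std::vector<int>(n + 1, 0));
            std::vector<std::vector<int>> E(m + 1, std::vector<int>(n + 1, NEG));
            std::vector<std::vector<int>> F(m + 1, std::vector<int>(n + 1, NEG));
            int best = 0;
            for (int i = 1; i <= m; ++i) {
                for (int j = 1; j <= n; ++j) {
                    E[i][j] = std::max(H[i][j - 1] - gap_open, E[i][j - 1] - gap_extend);
                    F[i][j] = std::max(H[i - 1][j] - gap_open, F[i - 1][j] - gap_extend);
                    int s = (a[i - 1] == b[j - 1]) ? match : mismatch;
                    H[i][j] = std::max({0, E[i][j], F[i][j], H[i - 1][j - 1] + s});
                    best = std::max(best, H[i][j]);
                }
            }
            return best;   // local alignment score; traceback omitted
        }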

    Application-Specific Number Representation

    Reconfigurable devices, such as Field-Programmable Gate Arrays (FPGAs), enable application-specific number representations. Well-known number formats include fixed-point, floating-point, the logarithmic number system (LNS), and the residue number system (RNS). These different number representations lead to different arithmetic designs and error behaviours, thus producing implementations with different performance, accuracy, and cost. To investigate the design options in number representations, the first part of this thesis presents a platform that enables automated exploration of the number representation design space. The second part of the thesis presents case studies that optimise designs for area, latency or throughput from the perspective of number representations. Automated design space exploration in the first part addresses two major issues:
    - Automation requires arithmetic unit generation. This thesis provides optimised arithmetic library generators for logarithmic and residue arithmetic units, which support a wide range of bit widths and achieve significant improvement over previous designs.
    - Generation of arithmetic units requires specifying the bit width of each variable. This thesis describes an automatic bit-width optimisation tool called R-Tool, which combines dynamic and static analysis methods and supports different number systems (fixed-point, floating-point, and LNS numbers).
    Putting it all together, the second part explores the effects of application-specific number representation on practical benchmarks, such as radiative Monte Carlo simulation and seismic imaging computations. Experimental results show that customising the number representation brings benefits to hardware implementations: by selecting a more appropriate number format, we can reduce the area cost by up to 73.5% and improve throughput by 14.2% to 34.1%; by performing bit-width optimisation, we can further reduce the area cost by 9.7% to 17.3%. On the performance side, hardware implementations with customised number formats achieve 5 to potentially over 40 times speedup over software implementations.
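
    As a small illustration of why the choice of representation changes the arithmetic design, the sketch below shows the LNS trade-off: multiplication of LNS values reduces to an addition of exponents, while addition requires evaluating log2(1 + 2^d), the function that LNS adders approximate with look-up tables. The double-precision library calls here stand in for the fixed-point, table-based hardware units generated in the thesis.

        // Illustration of the LNS trade-off (software stand-in for fixed-point,
        // table-based hardware). Positive values X are stored as x = log2(X).
        #include <algorithm>
        #include <cmath>

        // Multiplication in LNS is just an addition of the stored logarithms.
        double lns_mul(double log_x, double log_y) { return log_x + log_y; }

        // Addition is the expensive operation: log2(X + Y) = hi + log2(1 + 2^(lo - hi)),
        // where the log2(1 + 2^d) term is what LNS adders approximate with tables.
        double lns_add(double log_x, double log_y) {
            double hi = std::max(log_x, log_y);
            double lo = std::min(log_x, log_y);
            return hi + std::log2(1.0 + std::exp2(lo - hi));
        }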

    FPGA Accelerated Discrete-SURF for Real-Time Homography Estimation

    This paper describes our hardware-accelerated FPGA implementation of SURF, named Discrete SURF, to support real-time homography estimation for close-range aerial navigation. The SURF algorithm provides feature matches between a model and a scene, which can be used to find the transformation between the camera and the model. Previous implementations of SURF have employed FPGAs only partially, accelerating the feature detection stage for upright-only image comparisons. We extend previous work by providing an FPGA implementation that allows rotation during image comparisons in order to facilitate aerial navigation. We also go beyond feature detection: the complete Discrete SURF algorithm runs on the FPGA rather than being piped into processors. This not only minimizes overhead and increases the parallelization of the algorithm, but also allows the algorithm to be easily ported to different FPGAs. Furthermore, the Discrete SURF module is a logic-only implementation that does not rely on external hardware, which decreases the overall size, weight and power of the device while also allowing for easy FPGA-to-ASIC conversion. We evaluate Discrete SURF in terms of performance against the original SURF and upright SURF algorithms implemented in OpenCV. Finally, we show how Discrete SURF is better suited to an aerial navigation scenario than previous works, since rotation invariance must be considered in addition to scale invariance.
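
    For context, the homography being estimated is a 3x3 projective transform that maps model-plane points to scene points; the sketch below shows how a point is mapped once H is known. The names are illustrative, and estimating H itself from four or more SURF matches (typically via DLT with RANSAC) is outside this fragment.

        // Illustrative sketch: applying an already-estimated 3x3 homography H to a
        // 2-D point by lifting to homogeneous coordinates and renormalising by w.
        #include <array>

        struct Point2 { double x, y; };

        Point2 apply_homography(const std::array<std::array<double, 3>, 3>& H,
                                const Point2& p) {
            double x = H[0][0] * p.x + H[0][1] * p.y + H[0][2];
            double y = H[1][0] * p.x + H[1][1] * p.y + H[1][2];
            double w = H[2][0] * p.x + H[2][1] * p.y + H[2][2];
            return { x / w, y / w };
        }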