212 research outputs found

    Rosko: Row Skipping Outer Products for Sparse Matrix Multiplication Kernels

    We propose Rosko -- row skipping outer products -- for deriving sparse matrix multiplication (SpMM) kernels that reduce the computation and memory access requirements of deep neural networks (DNNs). Rosko allows entire row computations to be skipped during program execution with low sparsity-management overhead. We analytically derive sparse CPU kernels that adapt to given hardware characteristics to effectively utilize processor cores and minimize data movement, without the need for auto-tuning or search space exploration. Rosko can be integrated with other outer product scheduling methods, allowing them to leverage row skipping by using Rosko's packing format to skip unnecessary computation. Rosko kernels outperform existing auto-tuning and search-based solutions as well as state-of-the-art vendor-optimized libraries on real hardware across a variety of neural network workloads. For matrices with sparsities ranging from 65% to 99.8%, as typically found in machine learning, Rosko kernels achieve up to a 6.5x runtime reduction on Intel and ARM CPUs.
    Comment: Rosko's CPU implementation can be found at https://github.com/vnatesh/Rosk
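    To make the row-skipping idea concrete, below is a minimal NumPy sketch of an outer-product SpMM that skips zero rows of each rank-1 update. The function name and the dense input representation are illustrative assumptions; the actual Rosko kernels operate on a packed sparse format and tile for caches and SIMD.

```python
import numpy as np

def rosko_like_spmm(A, B):
    """Toy outer-product SpMM in the spirit of Rosko: compute C = A @ B
    as a sum of rank-1 updates, skipping every row whose entry in the
    current column of the sparse operand A is zero."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for k in range(K):
        col = A[:, k]
        nz_rows = np.nonzero(col)[0]  # row skipping: only rows with a
        if nz_rows.size == 0:         # nonzero in column k contribute
            continue                  # to the k-th outer product
        # rank-1 update restricted to the surviving rows
        C[nz_rows, :] += np.outer(col[nz_rows], B[k, :])
    return C

# sanity check on a ~70%-sparse matrix
A = np.random.rand(64, 64) * (np.random.rand(64, 64) > 0.7)
B = np.random.rand(64, 32)
assert np.allclose(rosko_like_spmm(A, B), A @ B)
```

    The payoff is that the work per outer product scales with the number of nonzero rows rather than with the full matrix dimension, which is where the reported runtime reductions at high sparsity come from.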

    SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference

    CNNs outperform traditional machine learning algorithms across a wide range of applications. However, their computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring dataflow styles that exploit computational parallelism, but the potential speedup from sparsity has not been adequately addressed. The computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited during network evaluation. To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators. However, sparsity encoding is typically applied only to activations or weights, and only during inference, even though activations and weights have been shown to exhibit high sparsity during training as well. Hence, sparsity-aware computation should also be considered in training. To further improve performance and energy efficiency, some accelerators evaluate CNNs with limited precision, but this is restricted to inference because naive reduced precision sacrifices network accuracy when used in training. In addition, CNN evaluation is usually memory-intensive, especially in training. In this paper, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING supports both CNN training and inference. It uses a binary mask scheme to encode sparsity in activations and weights, and a stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially in training, SPRING uses an efficient monolithic 3D NVM interface to increase memory bandwidth. Compared to the GTX 1080 Ti, SPRING achieves 15.6X, 4.2X, and 66.0X improvements in performance, power reduction, and energy efficiency, respectively, for CNN training, and 15.5X, 4.5X, and 69.1X improvements for inference.
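    As a rough illustration of the two mechanisms named in the abstract, here is a toy NumPy sketch of unbiased stochastic rounding and binary-mask sparsity encoding. The function names and the fixed-point grid are assumptions made for illustration; SPRING implements these in hardware, not as software routines.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    """Stochastically round x to a fixed-point grid of spacing `step`:
    x/step is rounded up with probability equal to its fractional part,
    so E[stochastic_round(x, step)] == x. This unbiasedness is what lets
    reduced-precision training avoid systematically losing small updates."""
    y = x / step
    lo = np.floor(y)
    frac = y - lo
    return (lo + (rng.random(x.shape) < frac)) * step

def binary_mask_encode(x):
    """Binary-mask sparsity encoding: a 1-bit mask marks the nonzero
    positions, and only the nonzero values themselves are stored."""
    mask = x != 0
    return mask, x[mask]

def binary_mask_decode(mask, values):
    """Rebuild the dense tensor from a mask/values pair."""
    x = np.zeros(mask.shape, dtype=values.dtype)
    x[mask] = values
    return x

# round-trip check on a sparse activation tensor
act = np.maximum(rng.normal(size=(8, 8)), 0)  # ReLU output is ~50% zero
mask, vals = binary_mask_encode(act)
assert np.array_equal(binary_mask_decode(mask, vals), act)
```

    The mask costs one bit per element, so for a tensor with sparsity s and b-bit values the encoded size is roughly (1 + (1 - s) * b) bits per element, a net win whenever the sparsity is high enough to offset the mask overhead.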

    ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง FPGA ๊ฐ€์†๊ธฐ๋ฅผ ์œ„ํ•œ ๋ ˆ์ด์–ด ๊ฐ๋„์— ๋”ฐ๋ฅธ ์ ์‘ํ˜• ๋„คํŠธ์›Œํฌ ์••์ถ• ๊ธฐ๋ฒ•

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2020. 8. Bernhard Egger.Systolic ๋ฐฐ์—ด์— ๊ธฐ๋ฐ˜ํ•œ ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง ๊ฐ€์†๊ธฐ๋Š” ์ ์€ ์—๋„ˆ์ง€ ์†Œ๋น„์™€ ๋†’์€ ์ฒ˜๋ฆฌ๋ฅผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•ด์ค€๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ์ผ๋ฐ˜์ ์ธ systolic ๋ฐฐ์—ด์˜ ๊ตฌ์กฐ๋Š” ์‹ ๊ฒฝ๋ง์˜ ํšจ์œจ์ ์ธ ์••์ถ•๊ณผ pruning์„ ์–ด๋ ต๊ฒŒ ๋งŒ๋“ ๋‹ค. ๋‘ ์ตœ์ ํ™” ๋ฐฉ๋ฒ•๋“ค์€ ์‹ ๊ฒฝ๋ง์˜ ์‹œ๊ฐ„๋ณต์žก๋„์™€ ์ €์žฅ๊ณต๊ฐ„์„ ํฌ๊ฒŒ ๊ฐ์†Œ์‹œํ‚จ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—๋Š”, ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง ์ถ”๋ก ์„ ์œ„ํ•œ FPGA ๊ธฐ๋ฐ˜ ๊ณ ์† ๊ฐ€์†๊ธฐ์ธ AIX๋ฅผ ์†Œ๊ฐœํ•˜๊ณ , systolic ๋ฐฐ์—ด์„ ์œ„ํ•œ ํšจ์œจ์ ์ธ pruning ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ ํƒ๊ตฌํ•œ๋‹ค. ์ด ๋ฐฉ๋ฒ•์€ AIX์˜ ์‹คํ–‰ ๋ชจ๋ธ์„ ๊ณ ๋ คํ•˜๋ฉฐ, ์‹ ๊ฒฝ๋ง์˜ ํฌ๊ธฐ๋ฅผ ์ค„์—ฌ ๋‚˜๊ฐ„๋‹ค. ๋˜ํ•œ, ๋…๋ฆฝ์ ์œผ๋กœ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ์ธต ๋‚ด ๊ณ ์ •๋œ ํฌ๊ธฐ์˜ ๋ธ”๋ก์„ ์ œ๊ฑฐํ•จ์œผ๋กœ์จ, AIX ๊ฐ€์†๊ธฐ์˜ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง์˜ ์‹คํ–‰์‹œ๊ฐ„์„ ์ง์ ‘์ ์œผ๋กœ ๋‹จ์ถ•์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. YOLOv1, YOLOv2 ๋ฐ Tiny-YOLOv2์™€ ๊ฐ™์€ ๋Œ€ํ‘œ์ ์ธ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง์— ์ ์šฉํ•˜์˜€๊ณ , ์ œ์‹œ๋œ ๊ธฐ์ˆ ์€ ์ตœ์‹  ์••์ถ•๋ฅ ์„ ๋‹ฌ์„ฑํ•˜์˜€๋‹ค. ๊ทธ ๊ฒฐ๊ณผ, YOLOv2๋ฅผ ์ตœ์†Œํ•œ์˜ ์ •ํ™•๋„ ์†์‹ค ๋กœ ์ถ”๋ก  ์‹œ๊ฐ„์„ 1.6 ๋ฐฐ๋กœ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.Deep neural network (DNN) accelerators based on systolic arrays have been shown to achieve a high throughput at a low energy consumption. The regular architecture of the systolic array, however, makes it difficult to effectively apply network pruning and compression; two important optimization techniques that can significantly reduce the computational complexity and the storage requirements of a network. This work presents AIX, an FPGA-based high-speed accelerator for DNN inference, and explores effective methods for pruning systolic arrays. The techniques consider the execution model of the AIX and prune the individual convolutional layers of a network in fixed sized blocks that not only reduce the weights of the network but also translate directly into a reduction of the execution time of a convolutional neural network (CNN) on the AIX. Applied to representative CNNs such as YOLOv1, YOLOv2 and Tiny-YOLOv2, the presented techniques achieve state-of-the-art compression ratios and are able to reduce interference latency by a factor of two at a minimal loss of accuracy.Chapter 1 Introduction and Motivation 1 Chapter 2 Background 4 1 Object Detection 4 1.1 mean Average Precision (mAP) 4 1.2 YOLOv2 6 2 AIX Accelerator 7 2.1 Overview of AIX Architecture 7 2.2 Dataflow of AIX Architecture 9 Chapter 3 Implementation of Pruning on AIX Accelerator 12 3.1 Convolutional Neural Network (CNN) 12 3.2 Granularity of Sparsity for Pruning CNNs 13 3.3 Network Compression for Channel Pruning 15 3.4 CNN Pruning on AIX Accelerator 16 3.4.1 Block-Granularity for Pruning 16 3.4.2 Network Compression for Block Pruning 18 Chapter 4 Adaptive Layer Sensitivity Pruning 19 4.1 Overview 19 4.2 Layer Sensitivity Graph 20 4.3 Concept of Adaptive Layer Sensitivity Pruning Algorithm 22 4.4 Discussion on Adaptive Layer Sensitivity Pruning Algorithm 23 4.5 Compression for YOLOv2 multi-branches 24 4.6 Fine-tune 26 Chapter 5 Experimental Setup 28 Chapter 6 Experimental Results 30 6.1 Overall Results 30 6.2 Effect of Adaptive Layer Sensitivity Pruning 31 6.3 Comparision Adaptive vs Static Layer Sensitivity Pruning 33 Chapter 7 Related Work 35 Chapter 8 Conclusion and Future Work 37 8.1 Conclusion 37 8.2 Future Work 38 Bibliography 40Maste
    • โ€ฆ
    corecore