245 research outputs found

    Not your Grandpa's SSD: The Era of Co-Designed Storage Devices


    Emerging accelerator platforms for data centers

    CPU and GPU platforms may not be the best options for many emerging compute patterns, which has led to a new breed of accelerator platforms. This article gives a comprehensive overview of these emerging platforms, with a focus on commercially available ones.

    Doctor of Philosophy

    Deep Neural Networks (DNNs) are the state-of-the-art solution in a growing number of tasks, including computer vision, speech recognition, and genomics. However, DNNs are computationally expensive, as they are carefully trained to extract and abstract features from raw data using multiple layers of neurons with millions of parameters. In this dissertation, we primarily focus on inference, e.g., using a DNN to classify an input image. This is an operation that will be repeatedly performed on billions of devices in the datacenter, in self-driving cars, in drones, etc. We observe that DNNs spend the vast majority of their runtime performing matrix-by-vector multiplications (MVMs). MVMs have two major bottlenecks: fetching the matrix and performing sum-of-product operations. To address these bottlenecks, we use in-situ computing, where the matrix is stored in programmable resistor arrays, called crossbars, and sum-of-product operations are performed using analog computing. In this dissertation, we propose two hardware units, ISAAC and Newton. In ISAAC, we show that in-situ computing designs can outperform digital DNN accelerators if they leverage pipelining and smart encodings, and can distribute a computation in time and space, within and across crossbars. In the ISAAC design, roughly half the chip area/power can be attributed to analog-to-digital conversion (ADC), i.e., it remains the key design challenge in mixed-signal accelerators for deep networks. In spite of the ADC bottleneck, ISAAC is able to outperform the computational efficiency of the state-of-the-art design (DaDianNao) by 8x. In Newton, we take advantage of a number of techniques to address ADC inefficiency. These techniques exploit matrix transformations, heterogeneity, and smart mapping of computation to the analog substrate. We show that Newton can increase the efficiency of in-situ computing by an additional 2x. Finally, we show that in-situ computing, unfortunately, cannot be easily adapted to handle training of deep networks, i.e., it is only suitable for inference of already-trained networks. By improving the efficiency of DNN inference with ISAAC and Newton, we move closer to low-cost deep learning that in turn will have societal impact through self-driving cars, assistive systems for the disabled, and precision medicine.
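    To make the in-situ MVM idea above concrete, here is a minimal sketch (not taken from the dissertation) of how a crossbar-style matrix-by-vector multiply can be modeled in software: weights are split into bit slices stored as cell conductances, column currents implement the analog sum-of-products, and an ADC quantizes each column before digital shift-and-add recombines the slices. The slice width, ADC resolution, and scaling scheme are illustrative assumptions.

```python
import numpy as np

def crossbar_mvm(weights, x, adc_bits=8, bit_slices=2):
    """Toy model of an analog crossbar matrix-by-vector multiply.

    Each weight is split into 4-bit slices stored as cell conductances in
    separate crossbars; the input vector drives the rows, each column's
    current is the analog sum-of-products, and an ADC quantizes that
    current before digital shift-and-add recombines the slices.
    All parameters here are illustrative assumptions.
    """
    total_bits = 4 * bit_slices
    w_int = np.round(weights * (2 ** total_bits - 1)).astype(np.int64)
    result = np.zeros(weights.shape[1])
    for s in range(bit_slices):
        slice_w = (w_int >> (4 * s)) & 0xF            # 4-bit conductance per cell
        analog = x @ slice_w                          # analog sum-of-products per column
        scale = analog.max() if analog.max() > 0 else 1.0
        digital = np.round(analog / scale * (2 ** adc_bits - 1))   # ADC quantization
        result += (digital / (2 ** adc_bits - 1) * scale) * (2 ** (4 * s))
    return result / (2 ** total_bits - 1)

# Example: non-negative weights/inputs, as in a single-polarity crossbar
rng = np.random.default_rng(0)
W = rng.random((64, 32))
x = rng.random(64)
print("max error vs. exact float MVM:", np.abs(crossbar_mvm(W, x) - x @ W).max())
```

    Running the example shows the quantization error introduced by a finite-resolution ADC, the component the dissertation identifies as the dominant area/power cost in such designs.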

    Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications

    With the advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors, new opportunities are emerging for applying deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge. This can facilitate the advancement of medical Internet of Things (IoT) systems and Point of Care (PoC) devices. In this paper, we provide a tutorial describing how various technologies, ranging from emerging memristive devices to established Field Programmable Gate Arrays (FPGAs) and mature Complementary Metal Oxide Semiconductor (CMOS) technology, can be used to develop efficient DL accelerators to solve a wide variety of diagnostic, pattern recognition, and signal processing problems in healthcare. Furthermore, we explore how spiking neuromorphic processors can complement their DL counterparts for processing biomedical signals. After providing the required background, we unify the sparsely distributed research on neural network and neuromorphic hardware implementations as applied to the healthcare domain. In addition, we benchmark various hardware platforms by performing a biomedical electromyography (EMG) signal processing task and drawing comparisons among them in terms of inference delay and energy. Finally, we provide our analysis of the field and share a perspective on the advantages, disadvantages, challenges, and opportunities that different accelerators and neuromorphic processors introduce to the healthcare and biomedical domains. This paper can serve a large audience, ranging from nanoelectronics researchers to biomedical and healthcare practitioners, in grasping the fundamental interplay between hardware, algorithms, and clinical adoption of these tools, as we shed light on the future of deep networks and spiking neuromorphic processing systems as drivers of biomedical circuits and systems. Comment: Submitted to IEEE Transactions on Biomedical Circuits and Systems (21 pages, 10 figures, 5 tables).
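    As a concrete illustration of the kind of benchmark described above, the following sketch measures per-window inference latency for a stand-in EMG gesture classifier. The model, window shape, and timing loop are hypothetical placeholders rather than the paper's actual benchmarking setup; measuring energy would additionally require a platform-specific power meter, which is omitted here.

```python
import time
import numpy as np

def benchmark_latency(predict_fn, windows, warmup=10, runs=100):
    """Average per-window inference latency (ms) for an EMG classifier."""
    for w in windows[:warmup]:                 # warm-up: stabilize caches/clocks
        predict_fn(w)
    start = time.perf_counter()
    for i in range(runs):
        predict_fn(windows[i % len(windows)])
    return (time.perf_counter() - start) / runs * 1e3

# Stand-in "model": a fixed random projection followed by argmax over 5 gestures
rng = np.random.default_rng(0)
proj = rng.standard_normal((8 * 200, 5))       # 8 EMG channels x 200 samples per window
windows = rng.standard_normal((32, 8, 200))
predict = lambda w: int(np.argmax(w.reshape(-1) @ proj))
print(f"mean inference latency: {benchmark_latency(predict, windows):.3f} ms")
```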

    이쒅 μžμ—°μ–΄ 처리 λͺ¨λΈμ„ μœ„ν•œ ν™•μž₯ν˜• 컴퓨터 μ‹œμŠ€ν…œ 섀계

    Ph.D. dissertation -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Jangwoo Kim.
    Modern neural-network (NN) accelerators have been successful by accelerating a small number of basic operations (e.g., convolution, fully-connected, feedback) that make up specific target neural-network models (e.g., CNN, RNN). However, this approach no longer works for the emerging full-scale natural language processing (NLP) neural-network models (e.g., Memory networks, Transformer, BERT), which consist of different combinations of complex and heterogeneous operations (e.g., self-attention, multi-head attention, large-scale feed-forward). Existing acceleration proposals cover only their own basic operations and/or customize them for specific models, which leads to low performance improvement and narrow model coverage. An ideal NLP accelerator should therefore identify all performance-critical operations required by different NLP models and support them in a single accelerator to achieve high model coverage, and should be able to adaptively optimize its architecture to achieve the best performance for the given model. To address these scalability and model/config diversity issues, this dissertation introduces two projects, MnnFast and NLP-Fast, to efficiently accelerate a wide spectrum of full-scale NLP models. First, MnnFast proposes three novel optimizations to resolve three major performance problems (i.e., high memory bandwidth, heavy computation, and cache contention) in memory-augmented neural networks. Next, NLP-Fast adopts three optimization techniques to resolve the huge performance variation caused by the model/config diversity of emerging NLP models. We implement both MnnFast and NLP-Fast on different hardware platforms (i.e., CPU, GPU, FPGA) and thoroughly evaluate their performance improvement on each platform.
    As natural language processing has grown in importance, companies and research groups have proposed diverse and complex NLP models; these models are becoming structurally more complex, larger in scale, and more varied in kind. This dissertation presents several key ideas to address the complexity, scalability, and diversity of such NLP models: (1) perform static/dynamic analysis to identify how performance overheads are distributed across various NLP models; (2) propose a holistic model-parallelization technique that optimizes the memory usage of the major performance bottlenecks revealed by this analysis; (3) propose techniques that reduce the amount of computation in several operations, together with a dynamic scheduler that resolves the skewness caused by this reduction; and (4) propose a technique that derives a design optimized for each model to handle the performance diversity of current NLP models. Because these key techniques apply generally to many kinds of hardware accelerators (e.g., CPU, GPU, FPGA, ASIC), they can be broadly adopted in the design of computer systems for NLP models.
λ³Έ λ…Όλ¬Έμ—μ„œλŠ” ν•΄λ‹Ή κΈ°μˆ λ“€μ„ μ μš©ν•˜μ—¬ CPU, GPU, FPGA 각각의 ν™˜κ²½μ—μ„œ, μ œμ‹œλœ κΈ°μˆ λ“€μ΄ λͺ¨λ‘ μœ μ˜λ―Έν•œ μ„±λŠ₯ν–₯상을 달성함을 보여쀀닀.1 INTRODUCTION 1 2 Background 6 2.1 Memory Networks 6 2.2 Deep Learning for NLP 9 3 A Fast and Scalable System Architecture for Memory-Augmented Neural Networks 14 3.1 Motivation & Design Goals 14 3.1.1 Performance Problems in MemNN - High Off-chip Memory Bandwidth Requirements 15 3.1.2 Performance Problems in MemNN - High Computation 16 3.1.3 Performance Problems in MemNN - Shared Cache Contention 17 3.1.4 Design Goals 18 3.2 MnnFast 19 3.2.1 Column-Based Algorithm 19 3.2.2 Zero Skipping 22 3.2.3 Embedding Cache 25 3.3 Implementation 26 3.3.1 General-Purpose Architecture - CPU 26 3.3.2 General-Purpose Architecture - GPU 28 3.3.3 Custom Hardware (FPGA) 29 3.4 Evaluation 31 3.4.1 Experimental Setup 31 3.4.2 CPU 33 3.4.3 GPU 35 3.4.4 FPGA 37 3.4.5 Comparison Between CPU and FPGA 39 3.5 Conclusion 39 4 A Fast, Scalable, and Flexible System for Large-Scale Heterogeneous NLP Models 40 4.1 Motivation & Design Goals 40 4.1.1 High Model Complexity 40 4.1.2 High Memory Bandwidth 41 4.1.3 Heavy Computation 42 4.1.4 Huge Performance Variation 43 4.1.5 Design Goals 43 4.2 NLP-Fast 44 4.2.1 Bottleneck Analysis of NLP Models 44 4.2.2 Holistic Model Partitioning 47 4.2.3 Cross-operation Zero Skipping 51 4.2.4 Adaptive Hardware Reconfiguration 54 4.3 NLP-Fast Toolkit 56 4.4 Implementation 59 4.4.1 General-Purpose Architecture - CPU 59 4.4.2 General-Purpose Architecture - GPU 61 4.4.3 Custom Hardware (FPGA) 62 4.5 Evaluation 64 4.5.1 Experimental Setup 65 4.5.2 CPU 65 4.5.3 GPU 67 4.5.4 FPGA 69 4.6 Conclusion 72 5 Related Work 73 5.1 Various DNN Accelerators 73 5.2 Various NLP Accelerators 74 5.3 Model Partitioning 75 5.4 Approximation 76 5.5 Improving Flexibility 78 5.6 Resource Optimization 78 6 Conclusion 80 Abstract (In Korean) 106Docto