144 research outputs found

    Interstellar: Using Halide's Scheduling Language to Analyze DNN Accelerators

    We show that DNN accelerator micro-architectures and their program mappings represent specific choices of loop order and hardware parallelism for computing the seven nested loops of DNNs, which enables us to create a formal taxonomy of all existing dense DNN accelerators. Surprisingly, the loop transformations needed to create these hardware variants can be precisely and concisely represented by Halide's scheduling language. By modifying the Halide compiler to generate hardware, we create a system that can fairly compare these prior accelerators. As long as proper loop blocking schemes are used and the hardware can support mapping replicated loops, many different hardware dataflows yield similar energy efficiency with good performance. This is because loop blocking can ensure that most data references stay on-chip with good locality and that the processing units achieve high resource utilization. How resources are allocated, especially in the memory system, has a large impact on energy and performance. By optimizing hardware resource allocation while keeping throughput constant, we achieve up to 4.2X energy improvement for Convolutional Neural Networks (CNNs), and 1.6X and 1.8X improvement for Long Short-Term Memories (LSTMs) and multi-layer perceptrons (MLPs), respectively.
    Comment: Published as a conference paper at ASPLOS 2020.
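    For readers unfamiliar with the loop-nest view this abstract relies on, the toy Python sketch below (plain Python, not Halide, and not from the paper) shows how blocking and reordering the convolution loops changes which data stays in fast memory while leaving the result unchanged; all shapes and block sizes are made up for illustration.

```python
# Illustrative sketch only: a blocked convolution loop nest in plain Python,
# mimicking the kind of loop-order / blocking choices a Halide schedule expresses.
# Shapes and block sizes are invented for the example.
import numpy as np

N, K, C, H, W, R, S = 1, 8, 8, 6, 6, 3, 3   # the seven conv loop bounds
Kb, Cb = 4, 4                               # block (tile) sizes for the channel loops

x = np.random.rand(N, C, H + R - 1, W + S - 1)
w = np.random.rand(K, C, R, S)
y = np.zeros((N, K, H, W))

# Outer loops iterate over blocks (roughly, what moves between DRAM and on-chip
# buffers); inner loops iterate within a block (what a PE array would unroll in space).
for n in range(N):
    for k0 in range(0, K, Kb):           # output-channel blocks
        for c0 in range(0, C, Cb):       # input-channel blocks
            for h in range(H):
                for wo in range(W):
                    for k in range(k0, k0 + Kb):
                        for c in range(c0, c0 + Cb):
                            for r in range(R):
                                for s in range(S):
                                    y[n, k, h, wo] += x[n, c, h + r, wo + s] * w[k, c, r, s]

# Swapping loop order or block sizes changes data reuse and buffer pressure,
# not the numerical result.
```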

    AutoAccel: Automated Accelerator Generation and Optimization with Composable, Parallel and Pipeline Architecture

    CPU-FPGA heterogeneous architectures are attracting ever-increasing attention in an attempt to advance computational capabilities and energy efficiency in today's datacenters. These architectures provide programmers with the ability to reprogram the FPGAs for flexible acceleration of many workloads. Nonetheless, this advantage is often overshadowed by the poor programmability of FPGAs, whose programming is conventionally an RTL design practice. Although recent advances in high-level synthesis (HLS) significantly improve FPGA programmability, programmers are still left with the challenge of identifying the optimal design configuration in a tremendous design space. This paper aims to address this challenge and pave the path from software programs towards high-quality FPGA accelerators. Specifically, we first propose the composable, parallel and pipeline (CPP) microarchitecture as a template for accelerator designs. Such a well-defined template is able to support efficient accelerator designs for a broad class of computation kernels and, more importantly, drastically reduces the design space. Also, we introduce an analytical model to capture the performance and resource trade-offs among different design configurations of the CPP microarchitecture, which lays the foundation for fast design space exploration. On top of the CPP microarchitecture and its analytical model, we develop the AutoAccel framework to fully automate accelerator generation. AutoAccel accepts a software program as input and performs a series of code transformations, guided by analytical-model-based design space exploration, to construct the desired CPP microarchitecture. Our experiments show that the AutoAccel-generated accelerators outperform their corresponding software implementations by an average of 72x for a broad class of computation kernels.
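    To make the idea of analytical-model-based design space exploration concrete, here is a deliberately simplified Python sketch (not AutoAccel's actual model or template): it enumerates candidate parallelism and tiling choices, estimates cycles and resource usage with a toy cost model, and keeps the fastest design that fits the device. All constants and formulas are assumptions for illustration.

```python
# Illustrative sketch only: toy analytical-model-driven design space exploration,
# in the spirit of (but much simpler than) what the paper describes.
from itertools import product

TOTAL_OPS = 1_000_000            # work in the kernel (assumed)
DSP_PER_LANE = 5                 # resource cost per parallel lane (assumed)
BRAM_PER_TILE_ELEM = 1           # on-chip buffer cost per buffered element (assumed)
FPGA_DSP, FPGA_BRAM = 220, 280   # made-up device budgets

def estimate(parallel_lanes, tile_size):
    """Return (estimated_cycles, dsp_used, bram_used) for one configuration."""
    compute_cycles = TOTAL_OPS / parallel_lanes
    # Coarser tiles amortize off-chip transfers; model that as overhead per tile.
    num_tiles = TOTAL_OPS / tile_size
    transfer_cycles = num_tiles * 100
    cycles = max(compute_cycles, transfer_cycles)   # assume compute/transfer overlap (pipelined)
    return cycles, parallel_lanes * DSP_PER_LANE, tile_size * BRAM_PER_TILE_ELEM

best = None
for lanes, tile in product([1, 2, 4, 8, 16, 32], [64, 128, 256]):
    cycles, dsp, bram = estimate(lanes, tile)
    if dsp <= FPGA_DSP and bram <= FPGA_BRAM:       # keep only feasible designs
        if best is None or cycles < best[0]:
            best = (cycles, lanes, tile)

print("best feasible (cycles, lanes, tile):", best)
```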

    DNN Accelerator and Load Balancing Techniques Tailored for Accelerating Memory-Intensive Operations

    Ph.D. dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems Program), August 2022. Advisor: Jung Ho Ahn.
    Deep neural networks (DNNs) are used in various fields such as image classification, natural language processing, and speech recognition, based on recognition accuracy that approaches that of humans. Due to the continuous development of DNNs, a large body of accelerators has been introduced to process convolution (CONV) and general matrix multiplication (GEMM) operations, which account for the greatest share of the computational demand. However, because this line of accelerator research has focused on accelerating compute-intensive operations, the share of execution time spent on memory-intensive operations has grown. In convolutional neural network (CNN) inference, recent CNN models adopt depth-wise CONV (DW-CONV) and Squeeze-and-Excitation (SE) to reduce the computational cost of CONV. However, existing area-efficient CNN accelerators are sub-optimal for these latest CNN models because they were mainly optimized for compute-intensive standard CONV layers with abundant data reuse that can be pipelined with activation and normalization operations. In contrast, DW-CONV and SE are memory-intensive with limited data reuse; the latter also strongly depends on the nearby CONV layers, making effective pipelining a daunting task. As a result, DW-CONV and SE occupy only 10% of all operations but become memory-bandwidth bound, consuming more than 60% of the processing time on systolic-array-based accelerators. During transformer training, the execution times of memory-intensive operations such as softmax, layer normalization, GeLU, context, and attention layers have increased because conventional accelerators have dramatically improved their compute performance. In addition, with the latest trend toward longer sequence lengths, the softmax, context, and attention layers have become even more influential as their data sizes grow quadratically with the sequence length. Thus, these layers take up to 80% of the execution time. In this thesis, we propose a CNN acceleration architecture called MVP, which efficiently processes both compute- and memory-intensive operations with a small area overhead on top of a baseline systolic-array-based architecture. We propose a specialized vector unit tailored for processing DW-CONV, including multipliers, adder trees, and multi-banked buffers, to meet the high memory bandwidth requirement. We also augment the unified buffer with tiny processing elements, a processing-near-memory unit (PNMU), to smoothly pipeline element-wise operations such as SE with the subsequent CONV, enabling concurrent processing of DW-CONV with standard CONV and thereby maximizing the utilization of the arithmetic units. Our evaluation shows that MVP improves performance by 2.6× and reduces energy consumption by 47% on average for EfficientNet-B0/B4/B7, MnasNet, and MobileNet-V1/V2 with only a 9% area overhead compared to the baseline. We then propose load balancing techniques that partition the multiple processing element tiles inside a DNN accelerator into clusters for transformer training acceleration. Traffic shaping alleviates temporal fluctuations in DRAM bandwidth by handling the tiles within a cluster synchronously while running different clusters asynchronously. Resource sharing reduces the execution time of compute-intensive operations by simultaneously executing the matrix units and vector units of all clusters when compute-intensive and memory-intensive operations run on different clusters. Our evaluation shows that traffic shaping and resource sharing improve performance by up to 1.27× for BERT-Large training.
    Contents:
    1 Introduction
      1.1 Accelerating Depth-wise Convolution on Edge Device
      1.2 Accelerating Transformer Models in Training
      1.3 Research Contributions
      1.4 Outline
    2 Background and Motivation
      2.1 CNN background and trends
        2.1.1 Various types of convolution (CONV) operations
        2.1.2 Trends in CNN model architecture
        2.1.3 EfficientNet: A state-of-the-art CNN model
      2.2 Transformer background and trends
        2.2.1 Bidirectional encoder representations from transformers (BERT)
        2.2.2 Trends in training transformer models
      2.3 Baseline DNN acceleration architecture
      2.4 Motivation
        2.4.1 Challenges of computing memory-intensive CNN layers
        2.4.2 Opportunity for load balancing in BERT training
    3 DNN accelerator tailored for accelerating memory-intensive operations
    4 MVP: A CNN accelerator with Matrix, Vector, and Processing-near-memory units
      4.1 Contribution
        4.1.1 MVP organization
        4.1.2 How depth-wise processing element (DWPE) operates
        4.1.3 How processing-near-memory unit (PNMU) operates
        4.1.4 Overlapping the operation of DW-CONV with PW-CONV
        4.1.5 Considerations for designing DWIB
      4.2 Evaluation
        4.2.1 Experimental setup
        4.2.2 Performance and energy evaluation
        4.2.3 Comparing MVP with NVDLA
        4.2.4 Exploring the design space of MVP architecture
        4.2.5 Evaluating MVP with various SysAr configurations
      4.3 Related Work
    5 Load Balancing Techniques for BERT Training
      5.1 Contribution
        5.1.1 Tiled architecture
        5.1.2 DRAM traffic shaping
        5.1.3 Resource sharing
      5.2 Evaluation
        5.2.1 Experimental setup
        5.2.2 Performance evaluation
    6 Discussion
    7 Conclusion
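    The quadratic-growth argument in the abstract above can be made concrete with a rough roofline-style calculation. The Python sketch below is not taken from the thesis; peak throughput, bandwidth, precision, and FLOP counts are assumed values. It compares the arithmetic intensity of softmax over the L x L attention score matrices with that of the Q x K^T GEMM, showing why the former stays memory-bound at any sequence length while the latter remains compute-bound.

```python
# Illustrative roofline-style sketch (not the thesis's model): why memory-intensive
# layers like softmax over an L x L attention matrix become the bottleneck as the
# sequence length L grows, while GEMMs stay compute-bound. All numbers are assumptions.
PEAK_FLOPS = 128e12        # accelerator peak, FLOP/s (assumed)
DRAM_BW    = 1.0e12        # DRAM bandwidth, B/s (assumed)
MACHINE_BALANCE = PEAK_FLOPS / DRAM_BW   # FLOPs needed per byte to stay compute-bound

def attention_score_softmax(L, heads=16, bytes_per_elem=2):
    """Rough FLOPs and DRAM bytes for softmax over the L x L score matrices."""
    elems = heads * L * L                       # quadratic in sequence length
    flops = 5 * elems                           # max, sub, exp, sum, div (rough)
    bytes_moved = 2 * elems * bytes_per_elem    # read scores, write probabilities
    return flops, bytes_moved

def qk_gemm(L, d_model=1024, bytes_per_elem=2):
    """Rough FLOPs and DRAM bytes for the Q x K^T GEMM."""
    flops = 2 * L * L * d_model
    bytes_moved = (2 * L * d_model + L * L) * bytes_per_elem
    return flops, bytes_moved

for L in (512, 2048, 8192):
    for name, fn in (("softmax", attention_score_softmax), ("QK^T GEMM", qk_gemm)):
        flops, byts = fn(L)
        ai = flops / byts                       # arithmetic intensity, FLOP/B
        bound = "memory-bound" if ai < MACHINE_BALANCE else "compute-bound"
        print(f"L={L:5d} {name:10s} AI={ai:8.1f} FLOP/B -> {bound}")
```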

    HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

    With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially inference) is intensively studied in both academia and industry. However, we still face two challenges: large DNN models and datasets, which incur frequent off-chip memory accesses; and the training of DNNs, which is not well explored in recent accelerator designs. To truly provide high-throughput and energy-efficient acceleration for training deep and large models, we inevitably need multiple accelerators to exploit coarse-grain parallelism, in contrast to the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of finding the best organization of computation and dataflow among accelerators. In this paper, we propose HyPar, a solution that determines layer-wise parallelism for deep neural network training with an array of DNN accelerators. HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators. A partition constitutes the choice of parallelism for the weighted layers. The optimization target is to find a partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model that explains the source and amount of communication, and then use a hierarchical layer-wise dynamic programming method to search for the partition of each layer.
    Comment: To appear in the 25th International Symposium on High-Performance Computer Architecture (HPCA 2019).
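    To illustrate the flavor of layer-wise dynamic programming over parallelism choices, the Python sketch below solves a toy version of the problem: each layer picks one of two parallelism options, and the total cost is the per-layer communication plus a transition cost between consecutive layers. The cost model and numbers are invented for the example and are much simpler than HyPar's communication model.

```python
# Illustrative sketch only: layer-wise dynamic programming over two parallelism
# choices per layer (e.g. "data" vs "model" parallel), minimizing a toy
# communication cost. Costs are arbitrary, not HyPar's actual model.

# intra[i][p]: communication of layer i under parallelism choice p
# trans[i][q][p]: communication of converting the tensor layout between
#                 layer i-1 (choice q) and layer i (choice p)
def best_partition(intra, trans):
    n, P = len(intra), len(intra[0])
    INF = float("inf")
    cost = [[INF] * P for _ in range(n)]
    prev = [[None] * P for _ in range(n)]
    cost[0] = list(intra[0])
    for i in range(1, n):
        for p in range(P):
            for q in range(P):   # parallelism chosen for the previous layer
                c = cost[i - 1][q] + trans[i][q][p] + intra[i][p]
                if c < cost[i][p]:
                    cost[i][p], prev[i][p] = c, q
    # backtrack the per-layer choices from the cheapest final state
    p = min(range(P), key=lambda j: cost[n - 1][j])
    choices = [p]
    for i in range(n - 1, 0, -1):
        p = prev[i][p]
        choices.append(p)
    return list(reversed(choices)), min(cost[n - 1])

# toy example: 3 layers, 2 parallelism choices, arbitrary costs
intra = [[4, 1], [2, 6], [3, 2]]
trans = [None,
         [[0, 5], [5, 0]],
         [[0, 5], [5, 0]]]
print(best_partition(intra, trans))
```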

    Chameleon: a heterogeneous and disaggregated accelerator system for retrieval-augmented language models

    A Retrieval-Augmented Language Model (RALM) augments a generative language model by retrieving context-specific knowledge from an external database. This strategy facilitates impressive text generation quality even with smaller models, reducing computational demands by orders of magnitude. However, RALMs introduce unique system design challenges due to (a) the diverse workload characteristics of LM inference and retrieval and (b) the varying system requirements and bottlenecks across RALM configurations such as model sizes, database sizes, and retrieval frequencies. We propose Chameleon, a heterogeneous accelerator system that integrates both LM and retrieval accelerators in a disaggregated architecture. The heterogeneity ensures efficient acceleration of both LM inference and retrieval, while the accelerator disaggregation enables the system to independently scale both types of accelerators to fulfill diverse RALM requirements. Our Chameleon prototype implements retrieval accelerators on FPGAs and assigns LM inference to GPUs, with a CPU server orchestrating these accelerators over the network. Compared to CPU-based and CPU-GPU vector search systems, Chameleon achieves up to 23.72x speedup and 26.2x better energy efficiency. Evaluated on various RALMs, Chameleon exhibits up to 2.16x lower latency and 3.18x higher throughput compared to the hybrid CPU-GPU architecture. These promising results pave the way for bringing accelerator heterogeneity and disaggregation into future RALM systems.
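    As a purely illustrative sketch of the orchestration pattern described above (a CPU server dispatching work to separate retrieval and LM worker pools), the Python snippet below interleaves retrieval and generation with asyncio. The worker functions, parameter names, and retrieval interval are hypothetical placeholders, not Chameleon's interfaces.

```python
# Illustrative sketch only: a CPU-side orchestrator coordinating disaggregated
# retrieval and LM-inference workers for a RALM-style loop. Workers are stubs.
import asyncio

async def retrieve(query_vec, k=8):
    # placeholder for an RPC to a (hypothetical) FPGA-based vector-search worker
    await asyncio.sleep(0.01)
    return [f"doc{i}" for i in range(k)]

async def generate(prompt, context_docs, n_tokens=32):
    # placeholder for an RPC to a (hypothetical) GPU-based LM-inference worker
    await asyncio.sleep(0.05)
    return prompt + " ... (continuation conditioned on " + ", ".join(context_docs) + ")"

async def answer(prompt, retrieval_interval=16, total_tokens=64):
    """Interleave retrieval and generation: re-retrieve every `retrieval_interval` tokens."""
    text = prompt
    for _ in range(total_tokens // retrieval_interval):
        docs = await retrieve(query_vec=text)                            # retrieval pool
        text = await generate(text, docs, n_tokens=retrieval_interval)   # LM pool
    return text

async def main():
    # the orchestrator serves many requests concurrently; each pool can scale independently
    results = await asyncio.gather(*(answer(f"question {i}?") for i in range(4)))
    for r in results:
        print(r[:80])

asyncio.run(main())
```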