
    A DNN Accelerator and Load Balancing Techniques Tailored for Accelerating Memory-Intensive Operations

    Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), August 2022. Advisor: Jung Ho Ahn (μ•ˆμ •ν˜Έ).
    Deep neural networks (DNNs) are used in various fields such as image classification, natural language processing, and speech recognition, owing to recognition accuracy that approaches that of humans. With the continued development of DNNs, many accelerators have been introduced to process convolution (CONV) and general matrix multiplication (GEMM), the operations that account for most of the computational demand. However, because accelerator research has focused on speeding up these compute-intensive operations, the share of execution time spent on memory-intensive operations, previously inconspicuous, has grown.

    In convolutional neural network (CNN) inference, recent CNN models adopt depth-wise CONV (DW-CONV) and Squeeze-and-Excitation (SE) to reduce the computational cost of CONV. Existing area-efficient CNN accelerators are sub-optimal for these latest models because they were optimized mainly for compute-intensive standard CONV layers, which offer abundant data reuse and can be pipelined with activation and normalization operations. In contrast, DW-CONV and SE are memory-intensive with limited data reuse, and SE also depends strongly on nearby CONV layers, which makes effective pipelining difficult. As a result, DW-CONV and SE account for only 10% of all operations yet become memory-bandwidth bound, consuming more than 60% of the processing time on systolic-array-based accelerators.

    In transformer training, the execution times of memory-intensive operations such as softmax, layer normalization, GeLU, context, and attention layers have grown in relative terms because conventional accelerators have dramatically improved their compute performance. In addition, with the recent trend toward longer sequence lengths, the softmax, context, and attention layers become far more influential because their data sizes grow quadratically with the sequence length. As a result, these memory-intensive layers take up to 80% of the execution time.

    In this thesis, we propose a CNN acceleration architecture called MVP, which efficiently processes both compute- and memory-intensive operations with a small area overhead on top of a baseline systolic-array-based architecture. We suggest a specialized vector unit tailored for processing DW-CONV, including multipliers, adder trees, and multi-banked buffers, to meet its high memory bandwidth requirement. We also augment the unified buffer with a processing-near-memory unit (PNMU) of tiny processing elements to pipeline SE smoothly with the subsequent CONV, enabling concurrent processing of DW-CONV with standard CONV and thereby maximizing the utilization of the arithmetic units. Our evaluation shows that MVP improves performance by 2.6Γ— and reduces energy consumption by 47% on average for EfficientNet-B0/B4/B7, MnasNet, and MobileNet-V1/V2 with only a 9% area overhead compared to the baseline.

    We then propose load balancing techniques that partition the multiple processing element tiles inside a DNN accelerator into clusters to accelerate transformer training. Traffic shaping alleviates temporal fluctuations in DRAM bandwidth by handling the tiles within a cluster synchronously while running different clusters asynchronously. Resource sharing reduces the execution time of compute-intensive operations by executing the matrix units and vector units of all clusters simultaneously. Our evaluation shows that traffic shaping and resource sharing improve performance by up to 1.27Γ— for BERT-Large training.
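    The memory-bandwidth argument above can be made concrete with a small arithmetic-intensity estimate. The sketch below is a back-of-the-envelope model, not code from the thesis: it counts multiply-accumulates per byte of off-chip traffic for a standard CONV layer versus a depth-wise CONV layer, and the activation footprint of attention score matrices as the sequence length grows. The layer shapes, head count, and element sizes are illustrative assumptions.

```python
# Back-of-the-envelope arithmetic-intensity model (illustrative only, not from the thesis).
# Arithmetic intensity = MACs per byte of off-chip traffic; low intensity => bandwidth bound.

def std_conv_intensity(h, w, c_in, c_out, k=3, bytes_per_elem=1):
    """Standard CONV: every input activation is reused across c_out filters."""
    macs = h * w * c_out * c_in * k * k
    traffic = bytes_per_elem * (h * w * c_in            # input activations
                                + c_in * c_out * k * k  # weights
                                + h * w * c_out)        # output activations
    return macs / traffic

def dw_conv_intensity(h, w, c, k=3, bytes_per_elem=1):
    """Depth-wise CONV: each channel is filtered independently, so reuse is limited to the k*k window."""
    macs = h * w * c * k * k
    traffic = bytes_per_elem * (h * w * c + c * k * k + h * w * c)
    return macs / traffic

def attention_score_bytes(seq_len, heads=16, bytes_per_elem=2):
    """Softmax/context/attention score matrices grow as seq_len**2 per head (fp16 assumed)."""
    return heads * seq_len * seq_len * bytes_per_elem

if __name__ == "__main__":
    # Hypothetical layer shapes, loosely in the range of EfficientNet blocks.
    print("standard CONV MACs/byte:  ", round(std_conv_intensity(56, 56, 128, 128), 1))
    print("depth-wise CONV MACs/byte:", round(dw_conv_intensity(56, 56, 128), 1))
    for n in (512, 1024, 2048):
        print(f"attention score bytes per layer at seq_len={n}:", attention_score_bytes(n))
```

    Under these assumed shapes, the depth-wise layer ends up with roughly two orders of magnitude lower arithmetic intensity than the standard layer, and the attention score footprint quadruples each time the sequence length doubles, which matches the qualitative claims in the abstract.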
    Contents:
    1 Introduction
    1.1 Accelerating Depth-wise Convolution on Edge Device
    1.2 Accelerating Transformer Models in Training
    1.3 Research Contributions
    1.4 Outline
    2 Background and Motivation
    2.1 CNN background and trends
    2.1.1 Various types of convolution (CONV) operations
    2.1.2 Trends in CNN model architecture
    2.1.3 EfficientNet: A state-of-the-art CNN model
    2.2 Transformer background and trends
    2.2.1 Bidirectional encoder representations from transformers (BERT)
    2.2.2 Trends in training transformer models
    2.3 Baseline DNN acceleration architecture
    2.4 Motivation
    2.4.1 Challenges of computing memory-intensive CNN layers
    2.4.2 Opportunity for load balancing in BERT training
    3 DNN accelerator tailored for accelerating memory-intensive operations
    4 MVP: A CNN accelerator with Matrix, Vector, and Processing-near-memory units
    4.1 Contribution
    4.1.1 MVP organization
    4.1.2 How depth-wise processing element (DWPE) operates
    4.1.3 How processing-near-memory unit (PNMU) operates
    4.1.4 Overlapping the operation of DW-CONV with PW-CONV
    4.1.5 Considerations for designing DWIB
    4.2 Evaluation
    4.2.1 Experimental setup
    4.2.2 Performance and energy evaluation
    4.2.3 Comparing MVP with NVDLA
    4.2.4 Exploring the design space of MVP architecture
    4.2.5 Evaluating MVP with various SysAr configurations
    4.3 Related Work
    5 Load Balancing Techniques for BERT Training
    5.1 Contribution
    5.1.1 Tiled architecture
    5.1.2 DRAM traffic shaping
    5.1.3 Resource sharing
    5.2 Evaluation
    5.2.1 Experimental setup
    5.2.2 Performance evaluation
    6 Discussion
    7 Conclusion
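    As a rough intuition for the DRAM traffic shaping described in the abstract, the toy model below staggers the schedules of otherwise identical clusters and compares the peak aggregate bandwidth demand against fully synchronous execution. The phase lengths, bandwidth figures, and four-cluster split are hypothetical illustrations rather than the thesis's actual configuration.

```python
# Toy model of DRAM traffic shaping across accelerator clusters (hypothetical numbers).
# Each cluster alternates between a memory-heavy and a compute-heavy phase; staggering the
# cluster start times flattens the aggregate DRAM-bandwidth demand.

MEM_PHASE, COMPUTE_PHASE = 4, 4      # phase lengths in arbitrary time steps
MEM_BW, COMPUTE_BW = 100.0, 10.0     # GB/s demanded during each phase (made up)
NUM_CLUSTERS, STEPS = 4, 32

def demand_at(t, offset):
    """Bandwidth one cluster demands at time t when its schedule is shifted by `offset`."""
    phase = (t + offset) % (MEM_PHASE + COMPUTE_PHASE)
    return MEM_BW if phase < MEM_PHASE else COMPUTE_BW

def peak_demand(offsets):
    return max(sum(demand_at(t, off) for off in offsets) for t in range(STEPS))

synchronous = [0] * NUM_CLUSTERS  # all clusters aligned
staggered = [i * (MEM_PHASE + COMPUTE_PHASE) // NUM_CLUSTERS for i in range(NUM_CLUSTERS)]

print("peak DRAM demand, synchronous:", peak_demand(synchronous), "GB/s")
print("peak DRAM demand, staggered:  ", peak_demand(staggered), "GB/s")
```

    With these made-up numbers, staggering the clusters roughly halves the peak DRAM demand while leaving the average demand unchanged, which is the kind of smoothing effect the traffic-shaping technique targets.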

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 Figures; draft report, subject to further revision.

    Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis

    Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for data-intensive applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that memory usage is one order of magnitude higher in the stress case than under average workloads. Therefore, dimensioning memory for the worst case in conventional systems results in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Using a disaggregated architecture will therefore allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory. Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper will be presented during the IEEE International Conference on High Performance Computing and Communications in Bangkok, Thailand, 18-20 December 2017, and is to be published in the conference proceedings.
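    The cost of dimensioning each server for its own worst case can be illustrated with a small provisioning calculation. The sketch below is not taken from the dReDBox evaluation; only the roughly one-order-of-magnitude gap between stress-case and average memory usage comes from the paper, while the node count, memory sizes, and the fraction of simultaneously stressed nodes are invented assumptions.

```python
# Illustrative provisioning math for monolithic vs. disaggregated memory (invented numbers).

NUM_SERVERS = 16
AVG_GB = 8              # average per-node working set (hypothetical)
STRESS_GB = 80          # stress case ~one order of magnitude above average, per the paper
STRESS_FRACTION = 0.1   # assume only 10% of nodes hit the stress case at once (assumption)

# Monolithic: every server must be dimensioned for its own worst case.
monolithic_gb = NUM_SERVERS * STRESS_GB

# Disaggregated: a shared pool only needs to cover the aggregate demand.
stressed = int(NUM_SERVERS * STRESS_FRACTION)
pooled_gb = stressed * STRESS_GB + (NUM_SERVERS - stressed) * AVG_GB

print(f"monolithic provisioning:   {monolithic_gb} GB")
print(f"disaggregated pool (est.): {pooled_gb} GB")
print(f"over-provisioning factor:  {monolithic_gb / pooled_gb:.1f}x")
```

    Even with only a modest fraction of nodes hitting the stress case at once, per-server worst-case provisioning ends up several times larger than a shared pool sized for the aggregate demand, which is the waste the disaggregated approach avoids.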