14,017 research outputs found

    Scaled Quantization for the Vision Transformer

    Full text link
    Quantization using a small number of bits shows promise for reducing latency and memory usage in deep neural networks. However, most quantization methods cannot readily handle complicated functions such as the exponential and square root, and prior approaches involve complex training processes that must interact with floating-point values. This paper proposes a robust method for the full integer quantization of vision transformer networks, without requiring any intermediate floating-point computations. The quantization techniques can be applied in various hardware or software implementations, including processor/memory architectures and FPGAs. Comment: 9 pages, 0 figures
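
    The abstract does not spell out the paper's pipeline, but as a rough illustration of the building block it rests on, the NumPy sketch below performs uniform per-tensor scaled quantization to 8-bit integers and an integer-only matrix multiply. The scale choice, bit width, and function names are illustrative assumptions; the paper's integer handling of exponential and square root is not shown.

        import numpy as np

        def quantize_per_tensor(x, num_bits=8):
            """Uniform symmetric quantization of a float tensor to int8.

            Illustrative only: the paper's scaled, fully integer scheme for
            vision transformers is more involved than this sketch.
            """
            qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
            scale = float(np.max(np.abs(x))) / qmax        # per-tensor scale (assumption)
            if scale == 0.0:
                scale = 1.0
            q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
            return q, scale

        def int_matmul(qa, sa, qw, sw):
            """Integer matrix multiply; the result carries the combined scale."""
            acc = qa.astype(np.int32) @ qw.astype(np.int32)    # int32 accumulation
            return acc, sa * sw                                # dequantize as acc * (sa * sw)

        # Example: quantize a random activation and weight matrix and multiply them.
        a = np.random.randn(4, 16).astype(np.float32)
        w = np.random.randn(16, 8).astype(np.float32)
        qa, sa = quantize_per_tensor(a)
        qw, sw = quantize_per_tensor(w)
        acc, s = int_matmul(qa, sa, qw, sw)
        print(np.max(np.abs(acc * s - a @ w)))                 # small quantization error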

    Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks

    Full text link
    Adversarial attacks and the development of (deep) neural networks that are robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods. In contrast, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models. Comment: to be published in the 13th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization
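
    For readers unfamiliar with the model family under attack, the sketch below (with made-up prototypes) shows the two ingredients shared by the evaluated models: nearest-prototype classification and the standard GLVQ cost term mu(x) = (d+ - d-)/(d+ + d-). Generalized Matrix LVQ and Generalized Tangent LVQ additionally learn a metric or tangent subspaces, which is not shown here.

        import numpy as np

        def glvq_predict(x, prototypes, proto_labels):
            """Nearest-prototype classification used by the LVQ variants."""
            d = np.sum((prototypes - x) ** 2, axis=1)      # squared Euclidean distances
            return proto_labels[np.argmin(d)]

        def glvq_mu(x, y, prototypes, proto_labels):
            """GLVQ cost term mu(x) = (d_plus - d_minus) / (d_plus + d_minus).

            d_plus:  distance to the closest prototype with the correct label y
            d_minus: distance to the closest prototype with any other label
            mu < 0 means x is classified correctly; training minimizes a
            monotone function of mu over the data.
            """
            d = np.sum((prototypes - x) ** 2, axis=1)
            d_plus = np.min(d[proto_labels == y])
            d_minus = np.min(d[proto_labels != y])
            return (d_plus - d_minus) / (d_plus + d_minus)

        # Toy example with two prototypes per class (random placeholders).
        rng = np.random.default_rng(0)
        prototypes = rng.normal(size=(4, 2))
        proto_labels = np.array([0, 0, 1, 1])
        x = rng.normal(size=2)
        print(glvq_predict(x, prototypes, proto_labels), glvq_mu(x, 0, prototypes, proto_labels))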

    Quantization of Deep Neural Networks for Improved Generalization Capability

    Get PDF
    Ph.D. dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, February 2020. Advisor: Wonyong Sung.

    Deep neural networks (DNNs) achieve state-of-the-art performance in various applications such as image recognition and speech synthesis. However, their deployment on embedded systems is difficult owing to their large number of parameters and high computational cost. DNNs have the potential to operate well with low-precision parameters because they mimic human neurons, which themselves work reliably at low precision; quantization of DNNs exploits this property. In general, fixed-point quantization with a word length of 8 bits or more yields performance comparable to that of a full-precision model, whereas shorter word lengths such as 1 or 2 bits can cause significant performance degradation. To alleviate this problem, previous works have employed more elaborate quantization methods such as asymmetric or adaptive quantizers. In contrast, this study takes a different approach: we focus on improving the generalization capability of quantized DNNs (QDNNs) instead of designing more complex quantizers. To this end, we first analyze the performance characteristics of QDNNs trained with a retraining algorithm, employing layer-wise sensitivity analysis to investigate the quantization characteristics of each layer. We also analyze how QDNN performance changes with the width and depth of the quantized network. Based on these analyses, two simple quantization training techniques, adaptive step size retraining and gradual quantization, are proposed. Furthermore, a new training scheme for QDNNs, referred to as high-low-high-low-precision (HLHLp) training, is proposed; it allows the network to reach flat minima on its loss surface with the aid of quantization noise. As the name suggests, the precision used for training alternates between high and low, and the learning rate is changed abruptly at each stage in accordance with the quantization step size. The proposed training technique yields considerable performance improvements for QDNNs compared with previously reported fine-tuning-based quantization schemes.
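
    The exact bit widths, stage lengths, and learning rates of HLHLp training are given in the dissertation itself; the toy sketch below only illustrates the underlying idea of alternating high and low weight precision while changing the learning rate abruptly at each stage, with a full-precision master copy of the weights updated straight-through. All numerical values are placeholders.

        import numpy as np

        def quantize_weights(w, num_bits):
            """Uniform symmetric weight quantization with a per-tensor step size."""
            if num_bits >= 32:                       # treat >= 32 bits as full precision
                return w
            qmax = 2 ** (num_bits - 1) - 1
            step = np.max(np.abs(w)) / qmax
            return np.clip(np.round(w / step), -qmax, qmax) * step

        # Hypothetical HLHLp-style schedule: the weight precision alternates
        # high-low-high-low while the learning rate changes abruptly per stage
        # (bit widths and learning rates are made-up placeholders).
        schedule = [(32, 1e-2), (2, 1e-2), (32, 1e-3), (2, 1e-3)]

        w_float = np.random.randn(256) * 0.1         # full-precision master copy of a layer
        for bits, lr in schedule:
            for _ in range(100):                     # placeholder for one training stage
                w_q = quantize_weights(w_float, bits)    # forward pass uses quantized weights
                grad = w_q                               # dummy gradient of a toy loss 0.5*||w_q||^2
                w_float = w_float - lr * grad            # straight-through update of the float copy
        print(quantize_weights(w_float, 2)[:5])          # final low-precision weights
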
    Moreover, the knowledge distillation (KD) technique, in which a pre-trained teacher model is used to train a student network, is exploited for the optimization of QDNNs. We explore the effect of teacher network selection and of the KD hyperparameters on the quantization of DNNs, using several large floating-point and quantized models as teacher networks. Our experiments indicate that, for effective KD training, the softmax distribution produced by the teacher network is more important than its performance. Because the softmax distribution of a teacher network can be controlled through the KD hyperparameters, we analyze the interrelationship of the KD components for QDNN training and show that even a small teacher model can achieve the same distillation performance as a larger one. We also propose the gradual soft loss reducing (GSLR) technique for robust KD-based QDNN optimization, in which the mixing ratio of the hard and soft losses is controlled during training. Finally, we present a new QDNN optimization approach, stochastic quantized weight averaging (SQWA), which produces low-precision DNNs with good generalization capability through model averaging. The proposed approach consists of (1) floating-point model training, (2) direct quantization of the weights, (3) capturing multiple low-precision models during retraining with a cyclical learning rate, (4) averaging the captured models, and (5) re-quantizing the averaged model and fine-tuning it with a low learning rate. Additionally, we present a loss-visualization technique for the quantized weight domain that depicts multiple quantized models on a single loss surface; the visualizations indicate that a QDNN optimized with SQWA lies near the center of a flat minimum of the loss surface.
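
    Steps (3)-(5) of SQWA lend themselves to a short sketch. Below, hypothetical low-precision checkpoints stand in for the models captured at the low points of a cyclical learning rate; they are averaged in the weight domain and re-quantized, after which a low-learning-rate fine-tuning pass would follow. The 2-bit quantizer, the learning-rate shape, and all values are assumptions, not the dissertation's configuration.

        import numpy as np

        def quantize_weights(w, num_bits=2):
            """Uniform symmetric quantizer for the captured and averaged models (2-bit here)."""
            qmax = 2 ** (num_bits - 1) - 1
            step = np.max(np.abs(w)) / qmax
            return np.clip(np.round(w / step), -qmax, qmax) * step

        def cyclical_lr(step, period=1000, lr_max=1e-2, lr_min=1e-4):
            """Learning rate that decays from lr_max to lr_min within each cycle
            (the exact shape is an assumption); a model is captured at the end
            of every cycle, i.e. where this schedule reaches lr_min."""
            phase = (step % period) / period
            return lr_max - (lr_max - lr_min) * phase

        rng = np.random.default_rng(0)
        # Placeholder checkpoints standing in for the low-precision models captured
        # during retraining with the cyclical learning rate (step (3) of SQWA).
        checkpoints = [quantize_weights(rng.normal(scale=0.1, size=512)) for _ in range(5)]

        w_avg = np.mean(checkpoints, axis=0)      # (4) average the captured models
        w_requant = quantize_weights(w_avg)       # (5) re-quantize the average; a fine-tuning
                                                  #     pass with a low learning rate follows
        print(cyclical_lr(0), cyclical_lr(999))   # learning rate at the start and end of a cycle
        print(w_requant[:5])
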
    Contents:
    1 Introduction 1
      1.1 Quantization of Deep Neural Networks 1
      1.2 Generalization Capability of DNNs 3
      1.3 Improved Generalization Capability of QDNNs 3
      1.4 Outline of the Dissertation 5
    2 Analysis of Fixed-Point Quantization of Deep Neural Networks 6
      2.1 Introduction 6
      2.2 Fixed-Point Performance Analysis of Deep Neural Networks 8
        2.2.1 Model Design of Deep Neural Networks 8
        2.2.2 Retrain-Based Weight Quantization 10
        2.2.3 Quantization Sensitivity Analysis 12
        2.2.4 Empirical Analysis 13
      2.3 Step Size Adaptation and Gradual Quantization for Retraining of Deep Neural Networks 22
        2.3.1 Step-Size Adaptation during Retraining 22
        2.3.2 Gradual Quantization Scheme 24
        2.3.3 Experimental Results 24
      2.4 Concluding Remarks 30
    3 HLHLp: Quantized Neural Networks Training for Reaching Flat Minima in Loss Surface 32
      3.1 Introduction 32
      3.2 Related Works 33
        3.2.1 Quantization of Deep Neural Networks 33
        3.2.2 Flat Minima in Loss Surfaces 34
      3.3 Training QDNN for Improved Generalization Capability 35
        3.3.1 Analysis of Training with Quantized Weights 35
        3.3.2 High-Low-High-Low-Precision Training 38
      3.4 Experimental Results 40
        3.4.1 Image Classification with CNNs 41
        3.4.2 Language Modeling on PTB and WikiText-2 44
        3.4.3 Speech Recognition on WSJ Corpus 48
        3.4.4 Discussion 49
      3.5 Concluding Remarks 55
    4 Knowledge Distillation for Optimization of Quantized Deep Neural Networks 56
      4.1 Introduction 56
      4.2 Quantized Deep Neural Network Training Using Knowledge Distillation 57
        4.2.1 Quantization of Deep Neural Networks and Knowledge Distillation 58
        4.2.2 Teacher Model Selection for KD 59
        4.2.3 Discussion on Hyperparameters of KD 62
      4.3 Experimental Results 62
        4.3.1 Experimental Setup 62
        4.3.2 Results on CIFAR-10 and CIFAR-100 64
        4.3.3 Model Size and Temperature 66
        4.3.4 Gradual Soft Loss Reducing 68
      4.4 Concluding Remarks 68
    5 SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks 70
      5.1 Introduction 70
      5.2 Related Works 71
        5.2.1 Quantization of Deep Neural Networks for Efficient Implementations 71
        5.2.2 Stochastic Weight Averaging and Loss-Surface Visualization 72
      5.3 Quantization of DNN and Loss Surface Visualization 73
        5.3.1 Quantization of Deep Neural Networks 73
        5.3.2 Loss Surface Visualization for QDNNs 75
      5.4 SQWA Algorithm 76
      5.5 Experimental Results 80
        5.5.1 CIFAR-100 80
        5.5.2 ImageNet 87
      5.6 Concluding Remarks 90
    6 Conclusion 92
    Abstract (In Korean) 110