520 research outputs found
A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification
Deep Neural Networks (DNNs) have driven significant recent advances in
machine learning. While achieving high accuracy, DNNs involve huge numbers
of parameters and computations, which leads to high memory usage and energy
consumption. As a result, deploying DNNs on
devices with constrained hardware resources poses significant challenges. To
overcome this, various compression techniques have been widely employed to
optimize DNN accelerators. A promising approach is quantization, in which
full-precision values are stored at low bit-width. Quantization not
only reduces memory requirements but also replaces high-cost operations with
low-cost ones. DNN quantization offers flexibility and efficiency in hardware
design, making it a widely adopted technique across many methods. Since
quantization has been used so extensively in prior work, there is a need
for an integrated report that explains, analyzes, and compares the
different quantization approaches. Consequently, we present a
comprehensive survey of quantization concepts and methods, with a focus on
image classification. We describe clustering-based quantization methods and
explore the use of a scale factor parameter for approximating full-precision
values. Moreover, we thoroughly review the training of quantized DNNs,
including the use of the straight-through estimator and quantized
regularization. We explain
the replacement of floating-point operations with low-cost bitwise operations
in a quantized DNN and the sensitivity of different layers to quantization.
Furthermore, we highlight the evaluation metrics for quantization methods
and the important benchmarks for the image classification task. We also
present the accuracy
of the state-of-the-art methods on CIFAR-10 and ImageNet.
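
As a minimal sketch of two of the surveyed ideas, the scale-factor
approximation of full-precision values and the straight-through estimator,
the following PyTorch snippet quantizes a weight tensor uniformly. The
function name, symmetric per-tensor scaling, and 8-bit default are
illustrative assumptions, not the survey's specific formulation.

    import torch

    def quantize_ste(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
        # Symmetric uniform quantization: a scale factor maps
        # full-precision values onto a low bit-width integer grid.
        qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8 bits
        scale = w.abs().max().clamp(min=1e-8) / qmax  # per-tensor scale factor
        w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
        # Straight-through estimator: the forward pass uses the quantized
        # values, while the backward pass treats rounding as the identity.
        return w + (w_q - w).detach()

In a training loop, such a function would be applied to each layer's
weights before the forward pass, so the loss is computed with quantized
values while gradients still update the full-precision copies.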
Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient
Deep neural networks (DNNs) have had tremendous success in a variety of
statistical learning applications due to their vast expressive power. Most
applications run DNNs in the cloud on parallelized architectures. There is
a need for efficient DNN inference at the edge with low-precision hardware
and analog
accelerators. To make trained models more robust for this setting, quantization and
analog compute noise are modeled as weight-space perturbations to DNNs, and
an information-theoretic regularization scheme is used to penalize the
KL-divergence
between perturbed and unperturbed models. This regularizer has similarities to
both natural gradient descent and knowledge distillation, but has the advantage of
explicitly promoting the network toward a broader minimum that is robust to
weight-space perturbations. In addition to the proposed regularization, the
KL-divergence is directly minimized using knowledge distillation. Initial
validation
on FashionMNIST and CIFAR10 shows that the information-theoretic
regularizer and knowledge distillation outperform existing quantization
schemes based on the straight-through estimator or L2-constrained
quantization.
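
To make the regularizer concrete, here is a minimal PyTorch sketch of
penalizing the KL-divergence between the perturbed and unperturbed models.
The Gaussian noise model of weight-space error, the use of
torch.func.functional_call, the noise scale, and the function name are
illustrative assumptions rather than the thesis's exact construction.

    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def perturbation_kl_penalty(model: torch.nn.Module, x: torch.Tensor,
                                sigma: float = 0.01) -> torch.Tensor:
        # Unperturbed forward pass.
        logits = model(x)
        # Gaussian weight-space noise stands in for quantization and
        # analog-compute error; sigma is an assumed hyperparameter.
        perturbed = {name: p + sigma * torch.randn_like(p)
                     for name, p in model.named_parameters()}
        logits_pert = functional_call(model, perturbed, (x,))
        # KL(unperturbed || perturbed): small values indicate a broad
        # minimum whose outputs are insensitive to weight perturbations.
        return F.kl_div(F.log_softmax(logits_pert, dim=-1),
                        F.log_softmax(logits, dim=-1),
                        log_target=True, reduction="batchmean")

Added to the task loss with a weighting coefficient, this penalty favors
the broader minima described above.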