
    A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification

    Recent advancements in machine learning achieved by Deep Neural Networks (DNNs) have been significant. While demonstrating high accuracy, DNNs are associated with a huge number of parameters and computations, which leads to high memory usage and energy consumption. As a result, deploying DNNs on devices with constrained hardware resources poses significant challenges. To overcome this, various compression techniques have been widely employed to optimize DNN accelerators. A promising approach is quantization, in which full-precision values are stored at low bit-width precision. Quantization not only reduces memory requirements but also replaces high-cost operations with low-cost ones. DNN quantization offers flexibility and efficiency in hardware design, making it a widely adopted technique in various methods. Since quantization has been extensively utilized in previous works, there is a need for an integrated report that provides an understanding, analysis, and comparison of different quantization approaches. Consequently, we present a comprehensive survey of quantization concepts and methods, with a focus on image classification. We describe clustering-based quantization methods and explore the use of a scale factor parameter for approximating full-precision values. Moreover, we thoroughly review the training of quantized DNNs, including the use of the straight-through estimator and quantized regularization. We explain the replacement of floating-point operations with low-cost bitwise operations in a quantized DNN and the sensitivity of different layers to quantization. Furthermore, we highlight evaluation metrics for quantization methods and important benchmarks for the image classification task. We also present the accuracy of state-of-the-art methods on CIFAR-10 and ImageNet.
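
To make the scale-factor idea in the abstract concrete, the sketch below shows symmetric uniform quantization of a weight tensor to 8-bit integers, where a single per-tensor scale factor maps the full-precision range onto the integer grid. This is a minimal NumPy sketch of the general technique, not the specific scheme of the surveyed methods; the function names and the per-tensor symmetric scale are illustrative assumptions.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric uniform quantization: q = round(x / scale)."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax           # one scale factor per tensor
    scale = max(scale, 1e-12)                # guard against all-zero input
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate the full-precision values from the integer codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
print("max quantization error:", np.abs(w - w_hat).max())
```

Storing `q` as int8 alongside one float scale is what reduces memory: the expensive float tensor is replaced by small integer codes plus a single scalar used to approximate the original values.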
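The straight-through estimator (STE) mentioned in the abstract deals with the fact that rounding has zero gradient almost everywhere, which would stall gradient-based training of a quantized DNN. A common formulation, sketched here in PyTorch (the framework choice is an assumption; the survey is framework-agnostic), rounds in the forward pass and passes the incoming gradient through unchanged in the backward pass.

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: treat round() as the identity for gradients.
        return grad_output

x = torch.randn(8, requires_grad=True)
loss = RoundSTE.apply(x).sum()
loss.backward()
print(x.grad)  # all ones: the gradient flowed straight through round()
```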