4 research outputs found
HCM: Hardware-Aware Complexity Metric for Neural Network Architectures
Convolutional Neural Networks (CNNs) have become common in many fields
including computer vision, speech recognition, and natural language processing.
Although CNN hardware accelerators are already included as part of many SoC
architectures, the task of achieving high accuracy on resource-restricted
devices is still considered challenging, mainly due to the vast number of
design parameters that need to be balanced to achieve an efficient solution.
Quantization techniques, when applied to the network parameters, lead to a
reduction of power and area and may also change the ratio between communication
and computation. As a result, some algorithmic solutions may suffer from lack
of memory bandwidth or computational resources and fail to achieve the expected
performance due to hardware constraints. Thus, the system designer and the
micro-architect need to understand at early development stages the impact of
their high-level decisions (e.g., the architecture of the CNN and the amount of
bits used to represent its parameters) on the final product (e.g., the expected
power saving, area, and accuracy). Unfortunately, existing tools fall short of
supporting such decisions.
This paper introduces a hardware-aware complexity metric that aims to assist
system designers of neural network architectures throughout the project
lifetime (especially at its early stages) by predicting the impact of
architectural and micro-architectural decisions on the final product. We
demonstrate how the proposed metric can help evaluate different design
alternatives for neural network models on resource-restricted devices, such as
real-time embedded systems, and avoid design mistakes at early stages.
A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification
Recent advancements in machine learning achieved by Deep Neural Networks
(DNNs) have been significant. While demonstrating high accuracy, DNNs are
associated with a huge number of parameters and computations, which leads to
high memory usage and energy consumption. As a result, deploying DNNs on
devices with constrained hardware resources poses significant challenges. To
overcome this, various compression techniques have been widely employed to
optimize DNN accelerators. A promising approach is quantization, in which the
full-precision values are stored in low bit-width precision. Quantization not
only reduces memory requirements but also replaces high-cost operations with
low-cost ones. DNN quantization offers flexibility and efficiency in hardware
design, making it a widely adopted technique in various methods. Since
quantization has been extensively utilized in previous works, there is a need
for an integrated report that provides an understanding, analysis, and
comparison of different quantization approaches. Consequently, we present a
comprehensive survey of quantization concepts and methods, with a focus on
image classification. We describe clustering-based quantization methods and
explore the use of a scale factor parameter for approximating full-precision
values. Moreover, we thoroughly review the training of quantized DNNs,
including the use of the straight-through estimator and quantized
regularization. We explain the replacement of floating-point operations with
low-cost bitwise operations in a quantized DNN and the sensitivity of
different layers to quantization.
Furthermore, we highlight the evaluation metrics for quantization methods and
important benchmarks in the image classification task. We also present the
accuracy of the state-of-the-art methods on CIFAR-10 and ImageNet.
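Two of the ideas this abstract summarizes, approximating full-precision values with an integer times a scale factor, and replacing floating-point dot products with bitwise operations, can be sketched in a few lines. This is an illustrative toy (the function names and the 8-bit symmetric scheme are assumptions for the example, not details taken from the surveyed paper):

```python
def quantize(x, scale, bits=8):
    """Map a float to a signed `bits`-wide integer level q, so x ~= q * scale."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = round(x / scale)
    return max(qmin, min(qmax, q))  # clip to the representable range

def dequantize(q, scale):
    """Recover the low-precision approximation of the original float."""
    return q * scale

def binary_dot(a_bits, w_bits, n):
    """Dot product of two length-n {-1,+1} vectors packed as bits (1 = +1).

    With binarized values, the float multiply-accumulate collapses into
    XNOR (signs agree?) plus a popcount, which is what makes extreme
    quantization attractive for low-cost hardware.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # bit set where signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n  # agreements minus disagreements

# 8-bit symmetric quantization of a weight in [-1, 1] with scale 1/127:
scale = 1.0 / 127
q = quantize(0.5, scale)        # -> 64
w_hat = dequantize(q, scale)    # ~0.504, within one quantization step of 0.5

# a = [+1,-1,+1,+1] -> 0b1011, w = [+1,+1,-1,+1] -> 0b1101
d = binary_dot(0b1011, 0b1101, 4)  # -> 0, matching the float dot product
```

The reconstruction error is bounded by half a quantization step (`scale / 2`) before clipping, which is why the choice of scale factor per tensor or per layer matters so much in the methods the survey compares.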
A Survey on Design Methodologies for Accelerating Deep Learning on Heterogeneous Architectures
In recent years, the field of Deep Learning has seen many disruptive and
impactful advancements. Given the increasing complexity of deep neural
networks, the need for efficient hardware accelerators when designing
heterogeneous HPC platforms has become more and more pressing. The design of
Deep Learning
accelerators requires a multidisciplinary approach, combining expertise from
several areas, spanning from computer architecture to approximate computing,
computational models, and machine learning algorithms. Several methodologies
and tools have been proposed to design accelerators for Deep Learning,
including hardware-software co-design approaches, high-level synthesis methods,
specific customized compilers, and methodologies for design space exploration,
modeling, and simulation. These methodologies aim to maximize the exploitable
parallelism and minimize data movement to achieve high performance and energy
efficiency. This survey provides a holistic review of the most influential
design methodologies and EDA tools proposed in recent years to implement Deep
Learning accelerators, offering the reader a wide perspective in this rapidly
evolving field. In particular, this work complements the previous survey
proposed by the same authors in [203], which focuses on Deep Learning hardware
accelerators for heterogeneous HPC platforms.