106 research outputs found
HAQ: Hardware-Aware Automated Quantization with Mixed Precision
Model quantization is a widely used technique to compress and accelerate deep
neural network (DNN) inference. Emergent DNN hardware accelerators begin to
support mixed precision (1-8 bits) to further improve the computation
efficiency, which raises a great challenge: finding the optimal bitwidth for
each layer requires domain experts to explore a vast design space, trading off
accuracy, latency, energy, and model size, which is both time-consuming and
sub-optimal. Conventional quantization algorithms ignore the differences among
hardware architectures and quantize all layers in a uniform way.
In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ)
framework, which leverages reinforcement learning to automatically determine the
quantization policy and takes the hardware accelerator's feedback into the
design loop. Rather than relying on proxy signals such as FLOPs and model size,
we employ a hardware simulator to generate direct feedback signals (latency and
energy) to the RL agent. Compared with conventional methods, our framework is
fully automated and can specialize the quantization policy for different neural
network architectures and hardware architectures. Our framework effectively
reduced the latency by 1.4-1.95x and the energy consumption by 1.9x with
negligible loss of accuracy compared with the fixed bitwidth (8 bits)
quantization. Our framework reveals that the optimal policies on different
hardware architectures (i.e., edge and cloud architectures) under different
resource constraints (i.e., latency, energy and model size) are drastically
different. We interpret the implications of the different quantization policies,
which offer insights for both neural network architecture design and hardware
architecture design.
Comment: CVPR 2019. The first three authors contributed equally to this work.
Project page: https://hanlab.mit.edu/projects/haq
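The abstract describes HAQ only at a high level; as a rough illustration of the per-layer bitwidth idea, the sketch below (not the authors' implementation) quantizes each layer to its own bitwidth and greedily lowers bitwidths under a toy latency model, standing in for the RL agent and the hardware simulator. The layer shapes, the simulate_latency cost model, and the greedy search are all hypothetical assumptions.

import numpy as np

def linear_quantize(w, bits):
    # Symmetric linear quantization of w to the given bitwidth.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax - 1, qmax) * scale

def simulate_latency(bits_per_layer, macs_per_layer):
    # Hypothetical cost model: cycles grow with bitwidth times MAC count.
    return sum(b * m for b, m in zip(bits_per_layer, macs_per_layer))

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(4)]  # toy weight matrices
macs = [w.size for w in layers]
latency_budget = 4 * sum(macs)  # corresponds to ~4 bits per layer on average

# Greedy stand-in for the RL agent: start at 8 bits everywhere, then lower the
# bitwidth of the layer whose quantization error grows the least until the
# simulated latency fits the budget.
bits = [8] * len(layers)
while simulate_latency(bits, macs) > latency_budget:
    errors = []
    for i, w in enumerate(layers):
        if bits[i] <= 2:  # do not go below 2 bits in this toy search
            errors.append(np.inf)
            continue
        q = linear_quantize(w, bits[i] - 1)
        errors.append(float(np.mean((w - q) ** 2)))
    bits[int(np.argmin(errors))] -= 1

print("chosen bitwidths per layer:", bits)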
The Effects of Approximate Multiplication on Convolutional Neural Networks
This paper analyzes the effects of approximate multiplication when performing
inferences on deep convolutional neural networks (CNNs). The approximate
multiplication can reduce the cost of the underlying circuits so that CNN
inferences can be performed more efficiently in hardware accelerators. The
study identifies the critical factors in the convolution, fully-connected, and
batch normalization layers that allow more accurate CNN predictions despite the
errors from approximate multiplication. The same factors also provide an
arithmetic explanation of why bfloat16 multiplication performs well on CNNs.
The experiments are performed with recognized network architectures to show
that the approximate multipliers can produce predictions that are nearly as
accurate as the FP32 references, without additional training. For example, the
ResNet and Inception-v4 models with Mitch-6 multiplication produce Top-5
errors within 0.2% of the FP32 references. A brief cost comparison of Mitch-6
against bfloat16 is presented, where a MAC operation saves up to 80% of the
energy compared to bfloat16 arithmetic. The most
far-reaching contribution of this paper is the analytical justification that
multiplications can be approximated while additions need to be exact in CNN MAC
operations.
Comment: 12 pages, 11 figures, 4 tables, accepted for publication in the IEEE
Transactions on Emerging Topics in Computing.
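For context on the kind of approximate multiplier studied (Mitch-w designs are based on Mitchell's logarithmic multiplication), a minimal floating-point sketch of the classic Mitchell approximation follows; it is illustrative only, operating on positive floats rather than the truncated fixed-point hardware encodings evaluated in the paper.

import math

def mitchell_multiply(a, b):
    # Piecewise-linear log/antilog approximation of a * b (Mitchell, 1962).
    ka, kb = math.floor(math.log2(a)), math.floor(math.log2(b))
    fa, fb = a / 2 ** ka - 1.0, b / 2 ** kb - 1.0  # mantissa fractions in [0, 1)
    s = fa + fb
    if s < 1.0:
        return 2 ** (ka + kb) * (1.0 + s)
    return 2 ** (ka + kb + 1) * s  # fraction overflow carries into the exponent

exact = 13.0 * 7.0
approx = mitchell_multiply(13.0, 7.0)
print(exact, approx, f"relative error = {abs(exact - approx) / exact:.2%}")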
Lite it fly: An All-Deformable-Butterfly Network
Most deep neural networks (DNNs) consist fundamentally of convolutional
and/or fully connected layers, wherein the linear transform can be cast as the
product between a filter matrix and a data matrix obtained by arranging feature
tensors into columns. The recently proposed deformable butterfly (DeBut)
decomposes the filter matrix into generalized, butterfly-like factors, thus
achieving network compression orthogonal to the traditional ways of pruning or
low-rank decomposition. This work reveals an intimate link between DeBut and a
systematic hierarchy of depthwise and pointwise convolutions, which explains
the empirically good performance of DeBut layers. By developing an automated
DeBut chain generator, we show for the first time the viability of homogenizing
a DNN into all DeBut layers, thus achieving extreme sparsity and
compression. Various examples and hardware benchmarks verify the advantages of
All-DeBut networks. In particular, we show it is possible to compress a
PointNet to < 5% parameters with < 5% accuracy drop, a record not achievable by
other compression schemes.
Comment: 7 pages, 3 figures, accepted as a brief paper in IEEE Transactions on
Neural Networks and Learning Systems (TNNLS).
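As background for the butterfly-style factorization the abstract refers to, here is a minimal sketch of a plain square butterfly chain and its parameter count versus a dense matrix. DeBut generalizes such factors to deformable, rectangular shapes; this toy construction is an assumption-laden illustration, not the paper's automated chain generator.

import numpy as np

def butterfly_factor(n, block, seed=0):
    # One square butterfly factor: block-diagonal with `block`-sized blocks,
    # each row holding exactly 2 nonzeros pairing entry i with entry i + block/2.
    rng = np.random.default_rng(seed)
    f = np.zeros((n, n))
    half = block // 2
    for start in range(0, n, block):
        for i in range(half):
            r, c = start + i, start + i + half
            f[r, r], f[r, c], f[c, r], f[c, c] = rng.standard_normal(4)
    return f

n = 8
factors = [butterfly_factor(n, block, seed=block) for block in (8, 4, 2)]
dense_equivalent = np.linalg.multi_dot(factors)  # the same linear map, densified

dense_params = n * n                         # N^2 for an unstructured layer
chain_params = sum(2 * n for _ in factors)   # 2 nonzeros per row, log2(N) factors
print(f"dense: {dense_params} parameters, butterfly chain: {chain_params} parameters")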