Hyperspherical Loss-Aware Ternary Quantization
Most of the existing works use projection functions for ternary quantization
in discrete space. Scaling factors and thresholds are used in some cases to
improve the model accuracy. However, the gradients used for optimization are
inaccurate and result in a notable accuracy gap between the full precision and
ternary models. To get more accurate gradients, some works gradually increase
the discrete portion of the full precision weights in the forward propagation
pass, e.g., using a temperature-based Sigmoid function. Instead of directly
performing ternary quantization in discrete space, we push the full precision
weights close to ternary values through a regularization term prior to ternary
quantization. In addition, inspired by the temperature-based method, we
introduce a re-scaling factor to obtain more accurate gradients by simulating
the derivatives of the Sigmoid function. The experimental results show that our
method can significantly improve the accuracy of ternary quantization in both
image classification and object detection tasks.
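As a rough sketch of the recipe described above (a hard ternary projection plus
a penalty that pulls full-precision weights toward their ternary projection
before quantization), the PyTorch snippet below may help; the threshold ratio,
scaling-factor formula, and penalty weight are illustrative assumptions, not
the paper's exact formulation.

    import torch

    def ternarize(w, threshold_ratio=0.05):
        # Project full-precision weights onto {-alpha, 0, +alpha}.
        delta = threshold_ratio * w.abs().max()               # ternary threshold
        mask = (w.abs() > delta).float()                       # non-zero positions
        alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)  # scaling factor
        return alpha * torch.sign(w) * mask

    def ternary_regularizer(w, threshold_ratio=0.05):
        # Penalty that pushes full-precision weights toward ternary values.
        return ((w - ternarize(w, threshold_ratio)) ** 2).sum()

    # Illustrative use: add the penalty to the task loss during full-precision
    # training, then apply ternarize() once training has converged.
    w = torch.randn(256, 256, requires_grad=True)
    task_loss = w.pow(2).mean()                # stand-in for the real task loss
    loss = task_loss + 1e-4 * ternary_regularizer(w)
    loss.backward()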
Run-Time Efficient RNN Compression for Inference on Edge Devices
Recurrent neural networks can be large and compute-intensive, yet many
applications that benefit from RNNs run on small devices with very limited
compute and storage capabilities while still having run-time constraints. As a
result, there is a need for compression techniques that can achieve significant
compression without negatively impacting inference run-time and task accuracy.
This paper explores a new compressed RNN cell implementation called Hybrid
Matrix Decomposition (HMD) that achieves this dual objective. This scheme
divides the weight matrix into two parts: an unconstrained upper half and a
lower half composed of rank-1 blocks. This results in output features where the
upper sub-vector has "richer" features while the lower sub-vector has
"constrained" features. HMD can compress RNNs by a factor of 2-4x while having
a faster run-time than pruning (Zhu & Gupta, 2017) and retaining more model
accuracy than matrix factorization (Grachev et al., 2017). We evaluate this
technique on 5 benchmarks spanning 3 different applications, illustrating its
generality in the domain of edge computing.
Comment: Published at the 4th edition of the Workshop on Energy Efficient
Machine Learning and Cognitive Computing for Embedded Applications at the
International Symposium on Computer Architecture 2019, Phoenix, Arizona
(https://www.emc2-workshop.com/isca-19), co-located with ISCA 2019.
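As a small illustration of the HMD layout, the NumPy sketch below computes a
matrix-vector product where the upper half of the weight matrix is kept dense
and the lower half is a stack of rank-1 blocks that are never materialized; the
block sizes and shapes are illustrative assumptions, not the paper's exact
configuration.

    import numpy as np

    def hmd_matvec(W_top, us, vs, x):
        # Upper half: unconstrained dense matrix -> "richer" output features.
        y_top = W_top @ x
        # Lower half: block i is the rank-1 outer product us[i] vs[i]^T, so its
        # contribution is us[i] scaled by the scalar vs[i] . x
        # -> "constrained" output features.
        y_bottom = np.concatenate([u * (v @ x) for u, v in zip(us, vs)])
        return np.concatenate([y_top, y_bottom])

    # Example: a 256x128 weight matrix stored as a dense 128x128 upper half
    # plus 4 rank-1 blocks of size 32x128 in the lower half.
    rng = np.random.default_rng(0)
    W_top = rng.standard_normal((128, 128))
    us = [rng.standard_normal(32) for _ in range(4)]
    vs = [rng.standard_normal(128) for _ in range(4)]
    x = rng.standard_normal(128)
    y = hmd_matvec(W_top, us, vs, x)   # output feature vector of length 256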
Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices
A recent trend in DNN development is to extend the reach of deep learning
applications to platforms that are more resource and energy constrained, e.g.,
mobile devices. These endeavors aim to reduce the DNN model size and improve
the hardware processing efficiency, and have resulted in DNNs that are much
more compact in their structures and/or have high data sparsity. These compact
or sparse models are different from the traditional large ones in that there is
much more variation in their layer shapes and sizes, and they often require
specialized hardware to exploit sparsity for performance improvement. Thus,
many DNN accelerators designed for large DNNs do not perform well on these
models. In this work, we present Eyeriss v2, a DNN accelerator architecture
designed for running compact and sparse DNNs. To deal with the widely varying
layer shapes and sizes, it introduces a highly flexible on-chip network, called
hierarchical mesh, that can adapt to the different amounts of data reuse and
bandwidth requirements of different data types, which improves the utilization
of the computation resources. Furthermore, Eyeriss v2 can process sparse data
directly in the compressed domain for both weights and activations, and
therefore is able to improve both processing speed and energy efficiency with
sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65nm CMOS
process achieves a throughput of 1470.6 inferences/sec and an energy efficiency
of 2560.3 inferences/J at a batch size of 1, which is 12.6x faster and 2.5x
more energy efficient than
the original Eyeriss running MobileNet. We also present an analysis methodology
called Eyexam that provides a systematic way of understanding the performance
limits for DNN processors as a function of specific characteristics of the DNN
model and accelerator design; it applies these characteristics as sequential
steps to increasingly tighten the bound on the performance limits.
Comment: Accepted for publication in IEEE Journal on Emerging and Selected
Topics in Circuits and Systems. This extended version on arXiv also includes
Eyexam in the appendix.
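As a software analogue of processing sparse weights and activations directly in
the compressed domain, the sketch below performs a multiply-accumulate over two
(value, index) streams and only visits non-zero entries; the encoding is an
illustrative assumption and is not the on-chip format used by Eyeriss v2.

    import numpy as np

    def sparse_mac(w_vals, w_idx, a_vals, a_idx):
        # Two-pointer walk over compressed weight and activation streams:
        # multiplications happen only where both operands are non-zero.
        acc, i, j = 0.0, 0, 0
        while i < len(w_idx) and j < len(a_idx):
            if w_idx[i] == a_idx[j]:
                acc += w_vals[i] * a_vals[j]
                i += 1
                j += 1
            elif w_idx[i] < a_idx[j]:
                i += 1
            else:
                j += 1
        return acc

    # Example: mostly-zero vectors stored as (values, indices) pairs.
    w = np.array([0.0, 0.5, 0.0, 0.0, -1.2, 0.0, 0.0, 0.3])
    a = np.array([1.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, -1.0])
    w_vals, w_idx = w[w != 0], np.flatnonzero(w)
    a_vals, a_idx = a[a != 0], np.flatnonzero(a)
    assert np.isclose(sparse_mac(w_vals, w_idx, a_vals, a_idx), w @ a)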