3,063 research outputs found

    HAQ: Hardware-Aware Automated Quantization with Mixed Precision

    Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators are beginning to support mixed precision (1-8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off accuracy, latency, energy, and model size, which is both time-consuming and sub-optimal. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy and takes the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals (latency and energy) for the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network and hardware architectures. Our framework reduces latency by 1.4-1.95x and energy consumption by 1.9x with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. It also reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
    Comment: CVPR 2019. The first three authors contributed equally to this work. Project page: https://hanlab.mit.edu/projects/haq
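
    The abstract describes a closed loop in which an agent proposes per-layer bitwidths and a hardware simulator returns latency and energy as the feedback signal. The sketch below illustrates that loop in miniature only: it replaces HAQ's actual RL agent with plain random search under a latency budget, and `simulate_latency`, `evaluate_accuracy`, and their toy cost models are hypothetical stand-ins, not the paper's simulator or benchmarks.

```python
# Minimal sketch of hardware-aware mixed-precision search (not HAQ's RL agent).
# The loop: propose per-layer bitwidths -> query a hardware "simulator" for
# latency -> keep the policy that meets the budget with the best accuracy proxy.
import random

NUM_LAYERS = 8
BIT_CHOICES = list(range(1, 9))        # 1-8 bit precision per layer
LATENCY_BUDGET_MS = 10.0

def simulate_latency(bitwidths):
    """Hypothetical hardware feedback: latency shrinks with lower precision."""
    return sum(0.25 * b for b in bitwidths)      # toy linear cost model

def evaluate_accuracy(bitwidths):
    """Hypothetical proxy: accuracy degrades as layers are quantized harder."""
    return 0.76 - sum(0.004 * (8 - b) for b in bitwidths)

best_policy, best_acc = None, -1.0
for episode in range(200):
    policy = [random.choice(BIT_CHOICES) for _ in range(NUM_LAYERS)]
    latency = simulate_latency(policy)           # direct feedback, not FLOPs
    if latency > LATENCY_BUDGET_MS:
        continue                                 # constraint violated -> reject
    acc = evaluate_accuracy(policy)
    if acc > best_acc:
        best_policy, best_acc = policy, acc

print("best per-layer bitwidths:", best_policy, "accuracy proxy:", round(best_acc, 4))
```

    Swapping the random proposals for a learned policy (HAQ uses reinforcement learning) changes how candidates are generated, but the simulator-in-the-loop structure stays the same.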

    Optimizing the energy consumption of spiking neural networks for neuromorphic applications

    In the last few years, spiking neural networks (SNNs) have been shown to perform on par with regular convolutional neural networks (CNNs). Several works have proposed methods to convert a pre-trained CNN to a spiking CNN without a significant sacrifice of performance. We first demonstrate that quantization-aware training of CNNs leads to better accuracy in SNNs. One of the benefits of converting CNNs to spiking CNNs is to leverage the sparse computation of SNNs and consequently perform equivalent computation at lower energy consumption. Here we propose an efficient optimization strategy to train spiking networks at lower energy consumption while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
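
    Quantization-aware training, which the abstract identifies as the step that improves post-conversion SNN accuracy, is commonly implemented by fake-quantizing values in the forward pass while letting gradients flow through unchanged (a straight-through estimator). The sketch below shows that generic recipe in PyTorch; the 4-bit setting, the tiny model, and the `FakeQuant` module are illustrative assumptions, not the authors' training pipeline.

```python
# Generic quantization-aware training ingredient: fake-quantize activations in
# the forward pass, pass gradients straight through in the backward pass.
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Round activations to 2**bits - 1 levels; gradients bypass the rounding."""
    def __init__(self, bits=4, max_val=1.0):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.max_val = max_val

    def forward(self, x):
        x = torch.clamp(x, 0.0, self.max_val)
        scale = self.max_val / self.levels
        x_q = torch.round(x / scale) * scale     # quantized value
        return x + (x_q - x).detach()            # straight-through estimator

# Toy CNN with fake-quantized activations, trained as usual.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), FakeQuant(bits=4),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
x = torch.randn(2, 1, 28, 28)
loss = model(x).sum()
loss.backward()                                  # gradients flow despite rounding
```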

    Practical quantum realization of the ampere from the electron charge

    One major change in the upcoming revision of the International System of Units (SI) is a new definition of the ampere based on the elementary charge $e$. Replacing the former definition, based on Ampère's force law, will allow one to fully benefit from quantum physics to realize the ampere. However, a quantum realization of the ampere from $e$, accurate to within $10^{-8}$ in relative value and fulfilling traceability needs, is still missing despite the many efforts spent on the development of single-electron tunneling devices. Starting instead from Ohm's law, applied here in a quantum circuit combining the quantum Hall resistance and Josephson voltage standards with a superconducting cryogenic amplifier, we report on a practical and universal programmable quantum current generator. We demonstrate that currents generated in the milliampere range are quantized in terms of $ef_\mathrm{J}$ ($f_\mathrm{J}$ is the Josephson frequency) with a measurement uncertainty of $10^{-8}$. This new quantum current source, able to deliver such accurate currents down to the microampere range, can greatly improve the traceability of current measurements, as demonstrated with the calibration of digital ammeters. Beyond that, it opens the way to further developments in metrology and in fundamental physics, such as a quantum multimeter or new accurate comparisons to single-electron pumps.
    Comment: 15 pages, 4 figures
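
    To make the quantization of the generated current explicit, the sketch below spells out the Ohm's-law combination described in the abstract: a Josephson voltage divided by a quantum Hall resistance yields a current proportional to $ef_\mathrm{J}$. The integer labels $n$ (Josephson step/junction number) and $i$ (Hall plateau index) are generic placeholders, not values taken from the paper.

```latex
% h is the Planck constant; R_K = h/e^2 is the von Klitzing constant.
\begin{align}
  U_\mathrm{J} &= n f_\mathrm{J}\,\frac{h}{2e}, &
  R_\mathrm{H} &= \frac{R_\mathrm{K}}{i} = \frac{h}{i e^{2}}, \\
  I &= \frac{U_\mathrm{J}}{R_\mathrm{H}}
     = \frac{n\, i}{2}\, e f_\mathrm{J}.
\end{align}
```

    The current is thus an integer (or half-integer) multiple of $ef_\mathrm{J}$, depending only on the elementary charge and a frequency, which is what allows it to serve as a quantum realization of the ampere.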