Formula of Entropy along Unstable Foliations for Diffeomorphisms with Dominated Splitting
Metric entropies along a hierarchy of unstable foliations are investigated
for diffeomorphisms with dominated splitting. Analogues of Ruelle's
inequality and Pesin's formula, relating the metric entropy and Lyapunov
exponents at each level of the hierarchy, are given.
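For reference, the classical results being generalized take the following standard form (quoted from general Pesin theory, not from the paper itself): for a C^1 diffeomorphism f preserving an ergodic measure mu, Ruelle's inequality bounds the metric entropy by the integrated sum of positive Lyapunov exponents, counted with the dimensions of their Oseledets subspaces E_i(x):

```latex
h_\mu(f) \;\le\; \int \sum_{i \,:\, \lambda_i(x) > 0} \lambda_i(x)\, \dim E_i(x) \, d\mu(x)
```

Pesin's formula asserts equality when f is C^{1+\alpha} and mu is absolutely continuous with respect to Lebesgue measure (more generally, an SRB measure).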
HAQ: Hardware-Aware Automated Quantization with Mixed Precision
Model quantization is a widely used technique to compress and accelerate deep
neural network (DNN) inference. Emergent DNN hardware accelerators begin to
support mixed precision (1-8 bits) to further improve the computation
efficiency. This raises a great challenge: finding the optimal bitwidth for
each layer requires domain experts to explore a vast design space, trading
off among accuracy, latency, energy, and model size, which is both
time-consuming and yields sub-optimal results. Conventional quantization
algorithms ignore differences among hardware architectures and quantize all
layers in a uniform way.
In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ)
framework, which leverages reinforcement learning to automatically determine
the quantization policy and takes the hardware accelerator's feedback into
the design loop. Rather than relying on proxy signals such as FLOPs and model size,
we employ a hardware simulator to generate direct feedback signals (latency and
energy) to the RL agent. Compared with conventional methods, our framework is
fully automated and can specialize the quantization policy for different neural
network architectures and hardware architectures. Our framework effectively
reduced the latency by 1.4-1.95x and the energy consumption by 1.9x with
negligible loss of accuracy compared with the fixed bitwidth (8 bits)
quantization. Our framework reveals that the optimal policies on different
hardware architectures (i.e., edge and cloud architectures) under different
resource constraints (i.e., latency, energy, and model size) are drastically
different. We interpret the implications of the different quantization
policies, offering insights for both neural network architecture design and
hardware architecture design.
Comment: CVPR 2019. The first three authors contributed equally to this work.
Project page: https://hanlab.mit.edu/projects/haq
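The search loop described above can be sketched in miniature. This is not the paper's RL agent: as a stand-in for the learned policy it uses plain random search over per-layer bitwidths, and the per-layer cost numbers and latency budget are invented for illustration. What it does show is the core structure HAQ describes: a simulator returns direct latency feedback (rather than a FLOPs proxy), and the search maximizes an accuracy proxy subject to a hardware budget.

```python
import random

# Hypothetical per-layer costs (accuracy proxy per bit, latency per bit).
# These numbers are illustrative only, not from the paper.
LAYERS = [(0.9, 2.0), (0.7, 1.5), (0.8, 3.0)]
LATENCY_BUDGET = 40.0

def simulate(policy):
    """Toy hardware simulator: returns (accuracy proxy, latency) for a
    per-layer bitwidth policy. HAQ uses a real hardware simulator here."""
    acc = sum(a * bits for (a, _), bits in zip(LAYERS, policy))
    lat = sum(l * bits for (_, l), bits in zip(LAYERS, policy))
    return acc, lat

def search(trials=2000, seed=0):
    """Random-search stand-in for the RL agent: maximize the accuracy proxy
    subject to the latency budget, over mixed-precision (1-8 bit) policies."""
    rng = random.Random(seed)
    best, best_acc = None, float("-inf")
    for _ in range(trials):
        policy = [rng.randint(1, 8) for _ in LAYERS]  # one bitwidth per layer
        acc, lat = simulate(policy)
        if lat <= LATENCY_BUDGET and acc > best_acc:
            best, best_acc = policy, acc
    return best
```

Swapping the simulator's cost table changes which policy wins, which mirrors the paper's observation that optimal bitwidth policies differ across hardware architectures.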
Hardware-Centric AutoML for Mixed-Precision Quantization
Model quantization is a widely used technique to compress and accelerate deep
neural network (DNN) inference. Emergent DNN hardware accelerators begin to
support mixed precision (1-8 bits) to further improve the computation
efficiency. This raises a great challenge: finding the optimal bitwidth for
each layer requires domain experts to explore a vast design space, trading
off among accuracy, latency, energy, and model size, which is both
time-consuming and yields sub-optimal results. Conventional quantization
algorithms ignore differences among hardware architectures and quantize all
layers in a uniform way.
In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ)
framework, which leverages reinforcement learning to automatically determine
the quantization policy and takes the hardware accelerator's feedback into
the design loop. Rather than relying on proxy signals such as FLOPs and model size,
we employ a hardware simulator to generate direct feedback signals (latency and
energy) to the RL agent. Compared with conventional methods, our framework is
fully automated and can specialize the quantization policy for different neural
network architectures and hardware architectures. Our framework effectively
reduced the latency by 1.4-1.95x and the energy consumption by 1.9x with
negligible loss of accuracy compared with the fixed bitwidth (8 bits)
quantization. Our framework reveals that the optimal policies on different
hardware architectures (i.e., edge and cloud architectures) under different
resource constraints (i.e., latency, energy, and model size) are drastically
different. We interpret the implications of the different quantization
policies, offering insights for both neural network architecture design and
hardware architecture design.
Comment: Journal preprint of arXiv:1811.08886 (IJCV, 2020). The first three
authors contributed equally to this work. Project page:
https://hanlab.mit.edu/projects/haq
NetGPT: Generative Pretrained Transformer for Network Traffic
Pretrained models for network traffic can utilize large-scale raw data to
learn the essential characteristics of network traffic, and generate
distinguishable results for input traffic without considering specific
downstream tasks. Effective pretrained models can significantly optimize the
training efficiency and effectiveness of downstream tasks, such as traffic
classification, attack detection, resource scheduling, protocol analysis, and
traffic generation. Despite the great success of pretraining in natural
language processing, no such work yet exists in the network field. Considering the
diverse demands and characteristics of network traffic and network tasks, it is
non-trivial to build a pretrained model for network traffic and we face various
challenges, especially the heterogeneous headers and payloads in the
multi-pattern network traffic and the different dependencies for contexts of
diverse downstream network tasks.
To tackle these challenges, in this paper, we make the first attempt to
provide a generative pretrained model for both traffic understanding and
generation tasks. We propose the multi-pattern network traffic modeling to
construct unified text inputs and support both traffic understanding and
generation tasks. We further optimize the adaptation effect of the pretrained
model to diversified tasks by shuffling header fields, segmenting packets in
flows, and incorporating diverse task labels with prompts. Extensive
experiments demonstrate the effectiveness of NetGPT across a range of
traffic understanding and generation tasks, where it outperforms
state-of-the-art baselines by a wide margin.
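To make the "unified text inputs" idea concrete, one plausible encoding (a sketch under assumptions, not the paper's exact tokenizer) is to hex-dump a raw packet and split it into fixed-size byte n-grams, yielding a token sequence that a generative transformer could be pretrained on; the function name and n-gram size below are hypothetical:

```python
def packet_to_tokens(raw: bytes, ngram: int = 2) -> list[str]:
    """Hypothetical traffic-to-text encoding: hex-encode a raw packet and
    split the hex string into byte n-grams (ngram bytes = 2*ngram hex chars),
    producing a token sequence suitable for language-model pretraining."""
    hexstr = raw.hex()
    step = 2 * ngram  # two hex characters per byte
    return [hexstr[i:i + step] for i in range(0, len(hexstr), step)]
```

For example, the first four bytes of a typical IPv4 header, `45 00 00 3c`, become the two tokens `"4500"` and `"003c"`. Header-field shuffling and packet segmentation, as described in the abstract, would then operate on such sequences before pretraining.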