Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
Graph neural networks (GNNs) have shown significant accuracy improvements in
a variety of graph learning domains, sparking considerable research interest.
To translate these accuracy improvements into practical applications, it is
essential to develop high-performance and efficient hardware acceleration for
GNN models. However, designing GNN accelerators faces two fundamental
challenges: the high bandwidth requirement of GNN models and the diversity of
GNN models. Previous works have addressed the first challenge by using more
expensive memory interfaces to achieve higher bandwidth. For the second
challenge, existing works either support specific GNN models or have generic
designs with poor hardware utilization.
In this work, we tackle both challenges simultaneously. First, we identify a
new type of partition-level operator fusion, which we utilize to internally
reduce the high bandwidth requirement of GNNs. Next, we introduce
partition-level multi-threading to schedule the concurrent processing of graph
partitions, utilizing different hardware resources. To further reduce the extra
on-chip memory required by multi-threading, we propose fine-grained graph
partitioning to generate denser graph partitions. Importantly, these three
methods make no assumptions about the targeted GNN models, addressing the
challenge of model variety. We implement these methods in a framework called
SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware
accelerator. Our evaluation demonstrates that SwitchBlade achieves an average
speedup and energy savings compared to the NVIDIA V100 GPU. Additionally,
SwitchBlade delivers performance comparable to state-of-the-art specialized
accelerators.
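The partition-level fusion idea can be illustrated with a minimal sketch: a GNN layer that processes each graph partition end-to-end (aggregate, then update) before moving to the next, so a partition's intermediate features never leave fast memory. The function name and the simplifying assumption that edges are local to their partition are ours, not from the paper:

```python
import numpy as np

def gnn_layer_fused_by_partition(features, partitions, weight):
    """Sketch of partition-level operator fusion for one GNN layer.

    features:   (num_nodes, feat_dim) node feature matrix
    partitions: list of (nodes, edges); edges are (src, dst) pairs
                assumed local to the partition (a simplification)
    weight:     (feat_dim, out_dim) dense update weight
    """
    num_nodes, feat_dim = features.shape
    out = np.zeros((num_nodes, weight.shape[1]))
    for nodes, edges in partitions:
        # Aggregate (sum) neighbor features for this partition only.
        agg = np.zeros((num_nodes, feat_dim))
        for src, dst in edges:
            agg[dst] += features[src]
        # Fused update: the dense transform runs immediately on the
        # partition's aggregates, before the next partition is touched.
        out[nodes] = agg[nodes] @ weight
    return out
```

The point of the fusion is scheduling, not arithmetic: both operators run per partition, so the aggregation result for a partition is consumed on-chip instead of being written out and re-read.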
Efficient Adaptive Activation Rounding for Post-Training Quantization
Post-training quantization attracts increasing attention due to its
convenience in deploying quantized neural networks. Although
rounding-to-nearest remains the prevailing method for DNN quantization, prior
research has demonstrated its suboptimal nature when applied to weight
quantization. That research proposes optimizing weight rounding schemes by
leveraging output error rather than the traditional weight quantization error.
Our study reveals that similar rounding challenges also extend to activation
quantization. Although the idea generalizes naturally, the dynamic nature of
activations poses a challenge: the rounding scheme must adapt to each
activation at inference time, which incurs runtime overhead. To tackle this, we
propose the AQuant quantization framework with a novel perspective to reduce
output error by adjusting the rounding schemes of activations. Instead of using
the constant rounding border 0.5 of the rounding-to-nearest operation, we make
the border a function of the activation value, so that rounding adapts to each
activation. To deal with the runtime overhead, we use a coarse-grained version
of the border function. Finally, we introduce our
framework to optimize the border function. Extensive experiments show that
AQuant achieves notable improvements compared to state-of-the-art works and
pushes the accuracy of ResNet-18 up to 60.31% under 2-bit weight and
activation quantization.
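The border-based view of rounding can be sketched as follows: rounding-to-nearest computes floor(x/s + 0.5), and the adaptive scheme replaces the fixed 0.5 with a function of the activation value. The `border_fn` below is a hypothetical stand-in for the learned, coarse-grained border function in AQuant, and clipping to the quantized range is omitted for brevity:

```python
import numpy as np

def quantize_adaptive_border(x, scale, border_fn):
    """Sketch of border-based rounding for activation quantization.

    Rounding-to-nearest uses a constant border of 0.5; here the border
    is a function of the activation value (clipping omitted for brevity).
    """
    v = x / scale
    return np.floor(v + border_fn(x)) * scale

x = np.array([0.24, 0.26, 0.74, 0.76])

# With a constant border of 0.5 this reduces to rounding-to-nearest.
rtn = quantize_adaptive_border(x, 0.5, lambda v: np.full_like(v, 0.5))

# A smaller border (illustrative value, not from the paper) makes the
# scheme round up only when the fractional part exceeds 0.6.
low = quantize_adaptive_border(x, 0.5, lambda v: np.full_like(v, 0.4))
```

A per-element border would be too costly to evaluate at inference time, which is why the abstract's coarse-grained variant shares one border across groups of activations.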
Nesting Forward Automatic Differentiation for Memory-Efficient Deep Neural Network Training
An activation function is an element-wise mathematical function that plays a
crucial role in deep neural networks (DNNs). Many novel and sophisticated
activation functions have been proposed to improve DNN accuracy, but they also
consume massive memory during training with back-propagation. In this
study, we propose nested forward automatic differentiation (Forward-AD),
specifically for element-wise activation functions, for memory-efficient DNN
training. We deploy nested Forward-AD in two widely-used deep learning
frameworks, TensorFlow and PyTorch, which support the static and dynamic
computation graphs, respectively. Our evaluation shows that nested Forward-AD
reduces the memory footprint by up to 1.97x compared to the baseline model and
outperforms recomputation by 20% under the same memory reduction ratio.
Comment: 8 pages, ICCD 202
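A minimal sketch of the idea, assuming SiLU (x · sigmoid(x)) as the element-wise activation: forward-mode AD computes the derivative alongside the forward pass, so only one tensor (the derivative) needs to be saved for backward, instead of the several intermediates a standard back-propagated composite would retain. The function names are illustrative, not from the paper:

```python
import numpy as np

def silu_fwd_ad(x):
    """Forward pass of SiLU with its derivative computed by forward-mode AD.

    Propagating a tangent seed of 1 through each element-wise step yields
    dy/dx in the same sweep, so backward only needs y_dot, not the
    intermediates (sigmoid(x), the product, ...) that backprop would save.
    """
    s = 1.0 / (1.0 + np.exp(-x))   # primal: sigmoid(x)
    s_dot = s * (1.0 - s)          # tangent: d/dx sigmoid(x)
    y = x * s                      # primal output: SiLU(x)
    y_dot = s + x * s_dot          # product rule: d/dx [x * sigmoid(x)]
    return y, y_dot                # save only y_dot for backward

def silu_bwd(grad_out, y_dot):
    """Backward pass: element-wise chain rule using the saved derivative."""
    return grad_out * y_dot
```

The memory saving comes from the element-wise structure: since the activation's Jacobian is diagonal, one derivative tensor fully describes it, regardless of how many internal operations the activation is composed of.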