DeepBurning-MixQ: An Open Source Mixed-Precision Neural Network Accelerator Design Framework for FPGAs
Mixed-precision neural networks (MPNNs), which use just enough
data width for a deep learning task, promise significant advantages in both
inference accuracy and computing overhead. FPGAs with fine-grained
reconfiguration capability can adapt their processing to distinct data widths
and models, and hence can theoretically unleash the potential of MPNNs.
Nevertheless, commodity DPUs on FPGAs mostly emphasize generality and have
limited support for MPNNs, especially those with lower data widths. In
addition, primitive DSPs in FPGAs usually have much larger data widths than
MPNNs require and have not yet been sufficiently co-explored with MPNNs.
To this end, we propose an open source MPNN accelerator design framework
specifically tailored for FPGAs. The framework includes a systematic
DSP-packing algorithm that packs multiple lower-data-width MACs into a single
primitive DSP, enabling efficient implementation of MPNNs. Meanwhile, we
consider DSP-packing efficiency jointly with MPNN quantization within a
unified neural architecture search (NAS) framework, so that the search is
aware of DSP overhead during quantization and optimizes MPNN performance
and accuracy concurrently. Finally, we map the optimized MPNN onto a
fully pipelined, HLS-based neural network accelerator template that makes the
best use of available resources for higher performance. Our experiments reveal the
resulting accelerators produced by the proposed framework achieve
substantial advantages in terms of performance, resource utilization, and
inference accuracy for MPNNs when compared with both handcrafted counterparts
and prior hardware-aware neural network accelerators on FPGAs.

Comment: Accepted by the 2023 IEEE/ACM International Conference on
Computer-Aided Design (ICCAD).
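The core idea behind DSP packing, computing several low-precision multiplications with one wide hardware multiplier, can be illustrated with a small arithmetic sketch. This is not the paper's packing algorithm; it is a common trick for unsigned 8-bit operands, and the function name `packed_dual_multiply` and the 16-bit guard shift are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): compute a*w1 and a*w2
# with a single wide multiplication, mimicking how two low-precision MACs
# can share one FPGA DSP slice. Unsigned 8-bit operands only; signed
# operands would need correction terms.

SHIFT = 16  # guard gap: for 8-bit unsigned inputs, a*w2 < 2**16,
            # so the two partial products never overlap

def packed_dual_multiply(a: int, w1: int, w2: int) -> tuple[int, int]:
    """Return (a*w1, a*w2) using one multiplication on a packed operand."""
    packed = (w1 << SHIFT) | w2              # pack both weights into one word
    product = a * packed                     # single wide multiply
    return product >> SHIFT, product & ((1 << SHIFT) - 1)

# Cross-check against independent multiplies
for a, w1, w2 in [(255, 255, 255), (17, 3, 200), (0, 5, 9)]:
    assert packed_dual_multiply(a, w1, w2) == (a * w1, a * w2)
```

The guard shift must be wide enough that the low partial product cannot carry into the high one; narrower operands allow a smaller shift, which is what lets more MACs fit into one primitive DSP.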