Energy-Efficient Accelerator Design for Deformable Convolution Networks
Deformable convolution networks (DCNs), proposed to address image
recognition under geometric or photometric variations, typically involve
deformable convolutions that sample arbitrary locations of the input features.
The sampled locations change with the input and induce considerable dynamic
and irregular memory accesses that classic neural network accelerators
(NNAs) cannot handle. Moreover, the bilinear interpolation (BLI) operation
required to obtain the deformed features in DCNs also cannot be deployed on
existing NNAs directly. Although a general-purpose processor (GPP) seated
alongside classic NNAs can process deformable convolution, processing on a
GPP can be extremely slow due to its lack of parallel computing capability.
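To make the BLI step concrete, the following is a minimal sketch of bilinear interpolation at a fractional sampling location, as deformable convolution requires; the function name, array layout, and clamping behavior are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bilinear_sample(feature, y, x):
    """Sample a feature map of shape (H, W) at fractional location (y, x).

    Deformable convolution produces fractional offsets, so each deformed
    feature is a weighted sum of the four neighboring integer positions.
    (Illustrative sketch; out-of-range neighbors are clamped to the border.)
    """
    H, W = feature.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    wy, wx = y - np.floor(y), x - np.floor(x)  # fractional weights
    top = (1 - wx) * feature[y0, x0] + wx * feature[y0, x1]
    bot = (1 - wx) * feature[y1, x0] + wx * feature[y1, x1]
    return (1 - wy) * top + wy * bot
```

Each sample is only four multiply-accumulates, but the four reads land at data-dependent addresses, which is precisely what makes BLI awkward on a classic NNA datapath.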
To address these problems, we develop a DCN accelerator on top of existing
NNAs that supports both standard and deformable convolution. Specifically,
to cope with the dynamic and irregular accesses in DCNs, we divide both the
input and output features into tiles and build a tile dependency table (TDT)
to track the irregular tile dependencies at runtime. With the TDT, we
further develop an on-chip tile scheduler that handles the dynamic and
irregular accesses efficiently. In addition, we propose a novel mapping strategy to enable
parallel BLI processing on NNAs and apply layer fusion techniques for more
energy-efficient DCN processing. According to our experiments, the proposed
accelerator achieves orders-of-magnitude higher performance and energy
efficiency than typical computing architectures including ARM, ARM+TPU, and
GPU, with only a 6.6% chip area penalty relative to a classic NNA.
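As a rough illustration of the tile-dependency idea, the sketch below builds a TDT from the fractional sampling locations of each output tile (each BLI sample touches up to four integer neighbors, so up to four input tiles) and greedily issues output tiles whose input tiles are already on-chip. The table layout, tile size, and scheduling policy are our own assumptions, not the paper's design:

```python
import math
from collections import defaultdict

TILE = 8  # tile edge length (illustrative)

def build_tdt(sample_points):
    """Tile dependency table: output tile id -> set of input tile coords.

    sample_points maps each output tile to the fractional (y, x) locations
    its deformable convolution reads; BLI's four integer neighbors decide
    which input tiles must be resident before the output tile can run.
    """
    tdt = defaultdict(set)
    for out_tile, points in sample_points.items():
        for y, x in points:
            y0, x0 = math.floor(y), math.floor(x)
            for dy in (0, 1):
                for dx in (0, 1):
                    tdt[out_tile].add(((y0 + dy) // TILE, (x0 + dx) // TILE))
    return tdt

def schedule(tdt, resident):
    """Greedy scheduler: issue output tiles whose input tiles are all
    resident on-chip; the rest wait until their tiles are fetched."""
    ready = [t for t, deps in tdt.items() if deps <= resident]
    blocked = [t for t in tdt if t not in ready]
    return ready, blocked
```

A sampling point near a tile border (e.g. `y = 7.5` with `TILE = 8`) pulls in four distinct input tiles, while an interior point needs only one, which is why the dependencies must be tracked at runtime rather than fixed at compile time.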