Deep neural networks (DNNs) have been shown to outperform conventional
machine learning algorithms across a wide range of applications, e.g., image
recognition, object detection, robotics, and natural language processing.
However, the high computational complexity of DNNs often demands extremely
fast and efficient hardware, and the problem worsens as neural network sizes
continue to grow. As a result, customized hardware accelerators have been
developed to speed up DNN processing without sacrificing model accuracy.
However, previous accelerator design studies have not fully considered the
characteristics of the target applications, which can lead to
sub-optimal architecture designs. On the other hand, new DNN models have been
developed for better accuracy, but their compatibility with the underlying
hardware accelerator is often overlooked. In this article, we propose an
application-driven framework for architectural design space exploration of DNN
accelerators. This framework is based on a hardware analytical model of
individual DNN operations. It models the accelerator design task as a
multi-dimensional optimization problem. We demonstrate that the framework can
be used effectively for application-driven accelerator architecture design. Given
a target DNN, the framework can generate efficient accelerator design solutions
with optimized performance and area. Furthermore, we explore using the
framework to optimize accelerator configurations when multiple diverse DNN
applications run simultaneously. The framework can also refine neural network
models to best fit the underlying hardware resources.
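To make the optimization framing concrete, the following is a minimal toy sketch of design space exploration over an analytical model. Everything here is an illustrative assumption, not the paper's actual model: the `latency` and `area` functions, their constants, and the small parameter grid are hypothetical stand-ins for a real per-operation analytical model of an accelerator.

```python
from itertools import product

# Hypothetical analytical cost model (illustrative only, not the paper's model):
# latency and area of a DNN workload on a PE-array accelerator,
# parameterized by PE count and on-chip buffer size.
def latency(macs, pes, buffer_kb, dram_bw_gbps=16.0, traffic_mb=8.0):
    compute_cycles = macs / pes                      # ideal compute time
    # Toy reuse model: larger buffers reduce off-chip traffic.
    mem_cycles = (traffic_mb / max(buffer_kb / 256.0, 1.0)) * 1e6 / dram_bw_gbps
    return max(compute_cycles, mem_cycles)           # compute/memory overlap

def area(pes, buffer_kb, pe_area=0.002, sram_area_per_kb=0.005):
    # Toy area model in mm^2 with made-up per-unit constants.
    return pes * pe_area + buffer_kb * sram_area_per_kb

# Multi-dimensional design space exploration: pick the (PEs, buffer) point
# minimizing an area-delay product for a target workload (here, 1 GMAC).
def explore(macs=1e9):
    space = product([256, 512, 1024, 2048], [128, 256, 512, 1024])
    return min(space, key=lambda p: latency(macs, *p) * area(*p))

best_pes, best_buf = explore()
```

A real framework would replace the toy cost functions with per-operation analytical models of the target DNN and search a much larger configuration space, but the structure — enumerate candidate architectures, score each with the model, select the optimum — is the same.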