Automatic generation of multi-precision multi-arithmetic CNN accelerators for FPGAs
Modern deep Convolutional Neural Networks (CNNs) are computationally
demanding, yet real applications often require high throughput and low latency.
To help tackle these problems, we propose Tomato, a framework designed to
automate the process of generating efficient CNN accelerators. The generated
design is pipelined and each convolution layer uses different arithmetics at
various precisions. Using Tomato, we showcase state-of-the-art multi-precision
multi-arithmetic networks, including MobileNet-V1, running on FPGAs. To our
knowledge, this is the first multi-precision multi-arithmetic auto-generation
framework for CNNs. In software, Tomato fine-tunes pretrained networks to use a
mixture of short powers-of-2 and fixed-point weights with a minimal loss in
classification accuracy. The fine-tuned parameters are combined with the
templated hardware designs to automatically produce efficient inference
circuits in FPGAs. We demonstrate how our approach significantly reduces model sizes and computational complexity, and permits us, for the first time, to pack a complete ImageNet network onto a single FPGA without accessing off-chip memory. Furthermore, we show how Tomato produces implementations of networks with
various sizes running on single or multiple FPGAs. To the best of our
knowledge, our automatically generated accelerators outperform the closest FPGA-based competitors by at least 2-4x in latency and throughput; the
generated accelerator runs ImageNet classification at a rate of more than 3000
frames per second.
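To make the mixed-precision idea concrete, here is a minimal sketch of the two weight formats the abstract mentions. The function names, rounding scheme, and bit widths are illustrative assumptions, not Tomato's actual algorithm or API: powers-of-2 weights reduce multiplications to shifts, while fixed-point weights need only narrow integer multipliers.

```python
import numpy as np

def quantize_pow2(w, bits=4):
    """Quantize weights to signed powers of two: w ~ sign(w) * 2^e.

    `bits` bounds the exponent range. Illustrative scheme only.
    """
    sign = np.sign(w)
    # Round log2|w| to the nearest integer exponent, clamped to the
    # representable exponent range for the given bit width.
    e = np.round(np.log2(np.abs(w) + 1e-12))
    e = np.clip(e, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return sign * 2.0 ** e

def quantize_fixed(w, bits=8, frac=6):
    """Quantize weights to fixed-point with `frac` fractional bits."""
    scale = 2 ** frac
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

w = np.random.randn(3, 3) * 0.5
print(quantize_pow2(w))
print(quantize_fixed(w))
```

In a scheme like this, fine-tuning would then recover the accuracy lost to rounding, since each layer only has to adapt to its own (fixed) weight format.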
A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
Implementing embedded neural network processing at the edge requires
efficient hardware acceleration that couples high computational performance
with low power consumption. Driven by the rapid evolution of network
architectures and their algorithmic features, accelerator designs are
constantly updated and improved. To evaluate and compare hardware design
choices, designers can refer to a myriad of accelerator implementations in the
literature. Surveys provide an overview of these works but are often limited to
system-level and benchmark-specific performance metrics, making it difficult to
quantitatively compare the individual effect of each utilized optimization
technique. This complicates the evaluation of optimizations for new accelerator
designs, slowing down research progress. This work provides a survey of
neural network accelerator optimization approaches that have been used in
recent works and reports their individual effects on edge processing
performance. It presents the list of optimizations and their quantitative
effects as a construction kit, allowing designers to assess the design choices for each building block separately. Reported optimizations range from up to 10,000x memory savings to 33x energy reductions, providing chip designers with an overview
of design choices for implementing efficient low power neural network
accelerators.
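As a back-of-the-envelope illustration of the kind of per-optimization comparison such a construction kit enables (the parameter count and bit widths below are illustrative assumptions, not figures from the survey), weight quantization alone shrinks on-chip weight storage linearly with precision:

```python
def weight_memory_mib(num_params, bits_per_weight):
    # Bytes for the weights alone: params * bits / 8, expressed in MiB.
    return num_params * bits_per_weight / 8 / 2**20

# Illustrative only: MobileNet-V1 has roughly 4.2M parameters.
for bits in (32, 8, 4, 1):
    print(f"{bits:2d}-bit weights: {weight_memory_mib(4.2e6, bits):.2f} MiB")
```

Running this shows the 32-bit baseline needing about 16 MiB versus about 0.5 MiB at 1 bit, which is the sort of isolated, quantitative effect the construction kit reports for each optimization technique.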
- …