Reconfigurable Distributed FPGA Cluster Design for Deep Learning Accelerators
We propose a distributed system based on low-power embedded FPGAs designed for
edge computing applications, focused on exploring distributed scheduling
optimizations for Deep Learning (DL) workloads to obtain the best performance
in terms of latency and power efficiency. Our modular cluster comprises up to
12 Zynq-7020-based boards and 5 UltraScale+ MPSoC FPGA boards connected
through an Ethernet switch, and it evaluates the configurable Versatile
Tensor Accelerator (VTA) Deep Learning Accelerator (DLA). This adaptable
distributed architecture is distinguished by its capacity to evaluate and
manage neural network workloads in numerous configurations, which enables users
to conduct multiple experiments tailored to their specific application needs.
The proposed system can simultaneously execute diverse Neural Network (NN)
models, arrange the computation graph in a pipeline structure, and manually
allocate greater resources to the most computationally intensive layers of the
NN graph.
Comment: 4 pages of content, 1 page for references. 4 figures, 1 table.
Conference paper: IEEE International Conference on Electro Information
Technology (eit2023) at Lewis University in Romeoville, IL.
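The pipeline mapping the abstract describes, splitting an NN computation graph across boards so that the heaviest layers do not starve the rest of the pipeline, can be sketched as a bottleneck-minimizing partition of the layer list. This is a hypothetical illustration: the layer costs, the contiguous-stage assumption, and the binary-search strategy are my own, not the paper's scheduler.

```python
import math

def partition_pipeline(layer_costs, num_stages):
    """Split a linear list of layer costs into up to num_stages contiguous
    pipeline stages so that the most expensive stage (the pipeline
    bottleneck, which bounds throughput) is minimized.

    Returns (cut_indices, bottleneck_cost); cut i means a new stage
    starts at layer index i."""
    n = len(layer_costs)
    # Prefix sums allow O(1) cost queries for any layer range.
    prefix = [0]
    for c in layer_costs:
        prefix.append(prefix[-1] + c)

    def stage_cost(i, j):  # total cost of layers i .. j-1
        return prefix[j] - prefix[i]

    def feasible(limit):
        # Greedily pack layers into a stage until the limit is exceeded.
        stages, start = 1, 0
        for end in range(1, n + 1):
            if stage_cost(start, end) > limit:
                stages += 1
                start = end - 1
        return stages <= num_stages

    # Binary search the smallest feasible bottleneck (integer costs).
    lo, hi = max(layer_costs), sum(layer_costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1

    # Recover the stage boundaries under the found bottleneck limit.
    cuts, start = [], 0
    for end in range(1, n + 1):
        if stage_cost(start, end) > lo:
            cuts.append(end - 1)
            start = end - 1
    return cuts, lo
```

For example, costs [3, 1, 4, 1, 5] split into two stages yields a cut before layer 3 (stages of cost 8 and 6), matching the intuition that the expensive trailing layers get their own stage. A real scheduler on the cluster would also weigh inter-board Ethernet transfer cost, which this toy model ignores.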
Performance Analysis of DNN Inference/Training with Convolution and non-Convolution Operations
Today's performance analysis frameworks for deep learning accelerators suffer
from two significant limitations. First, although modern convolutional neural
networks (CNNs) consist of many types of layers other than convolution,
especially during training, these frameworks largely focus on convolution
layers only. Second, these frameworks are generally targeted towards inference,
and lack support for training operations. This work proposes a novel
performance analysis framework, SimDIT, for general ASIC-based systolic
hardware accelerator platforms. The modeling effort of SimDIT comprehensively
covers convolution and non-convolution operations of both CNN inference and
training on a highly parameterizable hardware substrate. SimDIT is integrated
with a backend silicon implementation flow and provides detailed end-to-end
performance statistics (i.e., data access cost, cycle counts, energy, and
power) for executing CNN inference and training workloads. SimDIT-enabled
performance analysis reveals that on a 64×64 processing array, non-convolution
operations constitute 59.5% of the total runtime for the ResNet-50 training
workload. In addition, by optimally distributing available off-chip DRAM
bandwidth and on-chip SRAM resources, SimDIT achieves an 18× performance
improvement over a generic static resource allocation for ResNet-50 inference.
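As a rough illustration of why non-convolution operations can dominate a training run, here is a toy analytical cycle model for a P×P systolic array. It is not SimDIT's actual model; the mapping assumptions (output-stationary tiling, one MAC per PE per cycle, element-wise layers occupying one array row) are mine.

```python
import math

def conv_cycles(P, out_h, out_w, out_c, in_c, k):
    """Toy estimate of compute cycles for a conv layer on a P x P
    output-stationary systolic array: tile the out_c x (out_h*out_w)
    output matrix over the array; each tile needs in_c * k * k
    accumulation cycles (one MAC per PE per cycle)."""
    row_tiles = math.ceil(out_c / P)
    col_tiles = math.ceil(out_h * out_w / P)
    return row_tiles * col_tiles * in_c * k * k

def elementwise_cycles(P, num_elems, ops_per_elem=1):
    """Toy estimate for element-wise work (ReLU, batch norm, weight
    updates): assume only one row of P lanes is usable per cycle, so
    utilization is P/P^2 of the array's peak."""
    return math.ceil(num_elems / P) * ops_per_elem
```

For a 56×56×64 conv output with 64 input channels and 3×3 kernels on a 64×64 array, `conv_cycles` gives 28,224 cycles at full array utilization, while the matching ReLU over the same 3,136-element feature map per channel uses only a sliver of the array. Aggregated over a full training iteration, with its many normalization, activation, and optimizer steps, poorly utilized non-convolution work can add up to a majority of runtime, which is the effect the 59.5% figure quantifies.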