ReDas: Supporting Fine-Grained Reshaping and Multiple Dataflows on Systolic Array
Current systolic arrays still suffer from low performance and low PE utilization
on many real workloads due to the mismatch between the fixed array topology and
diverse DNN kernels. We present ReDas, a flexible and lightweight systolic
array that can adapt to various DNN models by supporting dynamic fine-grained
reshaping and multiple dataflows. The key idea is to construct reconfigurable
roundabout data paths using only the short connections between neighboring PEs.
A ReDas array of size 128×128 supports 129 different logical shapes and 3
dataflows (input-, output-, and weight-stationary, i.e., IS/OS/WS).
Experiments on DNN models from MLPerf demonstrate that
ReDas can achieve a 3.09x speedup on average compared to state-of-the-art work.
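As a concrete illustration of one of the dataflows listed above, the sketch below models a generic output-stationary (OS) systolic matmul in NumPy: each PE keeps its own output element in place while skewed operand streams flow through the grid. This is a minimal model of the standard OS dataflow only, not of ReDas's reconfigurable roundabout paths.

```python
import numpy as np

def systolic_matmul_os(A, B):
    """Model of an output-stationary systolic array computing C = A @ B.

    Rows of A stream in from the left and columns of B from the top,
    each skewed by one cycle per row/column so that matching operands
    meet in the right PE. PE (i, j) accumulates C[i, j] locally.
    """
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    # k operand pairs per PE, plus (n - 1) + (m - 1) cycles of skew fill.
    for t in range(k + n + m - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j  # operand index reaching PE (i, j) at cycle t
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A, B = np.random.randn(4, 6), np.random.randn(6, 5)
assert np.allclose(systolic_matmul_os(A, B), A @ B)
```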
SCV-GNN: Sparse Compressed Vector-based Graph Neural Network Aggregation
Graph neural networks (GNNs) have emerged as a powerful tool to process
graph-based data in fields like communication networks, molecular interactions,
chemistry, social networks, and neuroscience. GNNs are characterized by the
ultra-sparse nature of their adjacency matrices, which necessitates the development
of dedicated hardware beyond general-purpose sparse matrix multipliers. While
there has been extensive research on designing dedicated hardware accelerators
for GNNs, few have extensively explored the impact of the sparse storage format
on the efficiency of the GNN accelerators. This paper proposes SCV-GNN with the
novel sparse compressed vectors (SCV) format optimized for the aggregation
operation. We use Z-Morton ordering to derive a data-locality-based computation
ordering and partitioning scheme. The paper also shows how the proposed
SCV-GNN scales on a vector processing system. Experimental results over
various datasets show that the proposed method achieves a geometric mean
speedup over aggregation with the compressed sparse column (CSC) and
compressed sparse row (CSR) formats, and also reduces memory traffic
relative to both. Thus, the proposed novel aggregation format reduces the
latency and memory accesses of GNN inference.
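The SCV format itself is the paper's contribution and is not spelled out in the abstract, but the Z-Morton ordering it builds on is standard bit interleaving. The sketch below, a minimal illustration under that assumption, orders sparse-matrix tiles along the Z-curve so that tiles adjacent in 2-D index space stay close in processing order, which is what improves reuse of cached feature rows during aggregation.

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y to form the Z-Morton index."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)      # x bits at even positions
        key |= ((y >> b) & 1) << (2 * b + 1)  # y bits at odd positions
    return key

# Process tiles of the adjacency matrix in Z-curve order rather than
# plain row-major order.
tiles = [(r, c) for r in range(4) for c in range(4)]
z_ordered = sorted(tiles, key=lambda rc: morton_key(*rc))
print(z_ordered)  # [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), ...]
```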
Circuits and Systems Advances in Near Threshold Computing
Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication capabilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing its energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC has yet to see wide-scale commercial adoption. This is because circuits and systems operating in the near-threshold regime suffer from several problems, including increased sensitivity to process variation, reliability issues, performance degradation, and security vulnerabilities. To realize its potential, we need designs, techniques, and solutions that overcome the challenges associated with NTC circuits and systems. Readers of this book will be able to familiarize themselves with recent advances in electronic systems, with a focus on near-threshold computing.
A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
Implementing embedded neural network processing at the edge requires
efficient hardware acceleration that couples high computational performance
with low power consumption. Driven by the rapid evolution of network
architectures and their algorithmic features, accelerator designs are
constantly updated and improved. To evaluate and compare hardware design
choices, designers can refer to a myriad of accelerator implementations in the
literature. Surveys provide an overview of these works but are often limited to
system-level and benchmark-specific performance metrics, making it difficult to
quantitatively compare the individual effect of each utilized optimization
technique. This complicates the evaluation of optimizations for new accelerator
designs, slowing down research progress. This work provides a survey of
neural network accelerator optimization approaches that have been used in
recent works and reports their individual effects on edge processing
performance. It presents the list of optimizations and their quantitative
effects as a construction kit, allowing designers to assess the design
choices for each building block separately. Reported optimizations range
from up to 10,000x memory savings to 33x energy reductions, providing chip
designers with an overview of design choices for implementing efficient
low-power neural network accelerators.
NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference
The inherent diversity of computation types within individual deep neural
network (DNN) models necessitates a corresponding variety of computation units
within hardware processors, leading to a significant constraint on computation
efficiency during neural network execution. In this study, we introduce
NeuralMatrix, a framework that transforms the computation of entire DNNs into
linear matrix operations, effectively enabling their execution with one
general-purpose matrix multiplication (GEMM) accelerator. By surmounting the
constraints posed by the diverse computation types required by individual
network models, this approach provides both generality, allowing a wide
range of DNN models to be executed on a single GEMM accelerator, and
application-specific levels of acceleration without extra special function units;
both are validated on mainstream DNNs and their variant models.
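The abstract does not say how NeuralMatrix maps nonlinear operators onto matrix hardware, so the sketch below illustrates one plausible mechanism, piecewise-linear (PWL) approximation, purely as an assumption: each activation collapses to one multiply and one add per element, the same primitive a GEMM datapath already provides. The names gelu, pwl_approx, and pwl_eval are illustrative, not the paper's API.

```python
import numpy as np

def gelu(x):
    """Tanh approximation of the GELU activation."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def pwl_approx(f, lo=-4.0, hi=4.0, segments=32):
    """Tabulate slopes/intercepts of a piecewise-linear fit to f on [lo, hi]."""
    xs = np.linspace(lo, hi, segments + 1)
    ys = f(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def pwl_eval(x, xs, slopes, intercepts):
    """Evaluate the fit: per element one multiply and one add (a * x + b)."""
    idx = np.clip(np.searchsorted(xs, x) - 1, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

xs, a, b = pwl_approx(gelu)
x = np.linspace(-4, 4, 1000)
print(np.max(np.abs(pwl_eval(x, xs, a, b) - gelu(x))))  # max abs error, ~1e-3
```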