A block algorithm for the algebraic path problem and its execution on a systolic array
The solution of the algebraic path problem (APP) for arbitrarily sized graphs by a fixed-size systolic array processor (SAP) is addressed. The APP is decomposed into two subproblems, and a SAP is designed for each one. Combining both SAPs yields a highly implementable, versatile SAP. The proposed SAP has p*p processing elements (PEs) and solves the APP of an N-vertex graph in N^3/p^2 + N^2/p + 3p - 2 cycles. With slight modifications to the operations performed by the PEs, the problem is solved optimally in N^3/p^2 + 3p - 2 cycles.
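As a quick sanity check on the two expressions above, here is a minimal Python sketch that evaluates both cycle counts; the values of N and p are hypothetical, chosen only for illustration and not taken from the paper.

```python
# Illustrative evaluation of the cycle-count formulas quoted above.
# N (graph vertices) and p (systolic array dimension) are assumed
# example values, not figures from the paper.

def app_cycles(N, p):
    """Basic block-decomposed SAP: N^3/p^2 + N^2/p + 3p - 2 cycles."""
    return N**3 // p**2 + N**2 // p + 3 * p - 2

def app_cycles_optimal(N, p):
    """SAP with modified PE operations: N^3/p^2 + 3p - 2 cycles."""
    return N**3 // p**2 + 3 * p - 2

if __name__ == "__main__":
    N, p = 1024, 32
    print(app_cycles(N, p))          # 1081438 cycles
    print(app_cycles_optimal(N, p))  # 1048670 cycles
```

For these example values the N^2/p term adds roughly 3% overhead, which the modified PEs eliminate.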
Empowering parallel computing with field programmable gate arrays
After more than 30 years, reconfigurable computing has grown from a concept into a mature field of science and technology. The cornerstone of this evolution is the field programmable gate array (FPGA), a building block enabling the configuration of a custom hardware architecture. The departure from static von Neumann-like architectures opens the way to eliminating instruction overhead and to optimizing execution speed and power consumption. FPGAs now live in a growing ecosystem of development tools, enabling software programmers to map algorithms directly onto hardware. Applications abound in many directions, including data centers, IoT, AI, image processing, and space exploration. The increasing success of FPGAs is largely due to an improved toolchain with solid high-level synthesis support as well as better integration with processor and memory systems. On the other hand, long compile times and complex design exploration remain areas for improvement. In this paper we address the evolution of FPGAs towards advanced multi-functional accelerators, discuss different programming models and their HLS language implementations, as well as high-performance tuning of FPGAs integrated into a heterogeneous platform. We pinpoint fallacies and pitfalls, and identify opportunities for language enhancements and architectural refinements.
A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
Implementing embedded neural network processing at the edge requires efficient hardware acceleration that couples high computational performance with low power consumption. Driven by the rapid evolution of network architectures and their algorithmic features, accelerator designs are constantly updated and improved. To evaluate and compare hardware design choices, designers can refer to a myriad of accelerator implementations in the literature. Surveys provide an overview of these works but are often limited to system-level and benchmark-specific performance metrics, making it difficult to quantitatively compare the individual effect of each optimization technique. This complicates the evaluation of optimizations for new accelerator designs and slows down research progress. This work provides a survey of neural network accelerator optimization approaches used in recent works and reports their individual effects on edge processing performance. It presents the list of optimizations and their quantitative effects as a construction kit, allowing the design choices for each building block to be assessed separately. Reported optimizations range from up to 10,000x memory savings to 33x energy reductions, providing chip designers an overview of design choices for implementing efficient low power neural network accelerators.
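To make the idea of assessing one building block in isolation concrete, here is a hedged toy example, with invented layer sizes and no connection to the surveyed designs, that quantifies only the weight-memory effect of 8-bit post-training quantization.

```python
import numpy as np

# Toy illustration of isolating a single optimization's effect:
# weight memory before and after FP32 -> INT8 quantization.
# Layer shapes are made up for the example.
def weight_memory_bytes(layer_shapes, bytes_per_weight):
    return sum(int(np.prod(s)) for s in layer_shapes) * bytes_per_weight

layers = [(128, 256), (256, 256), (256, 10)]   # hypothetical dense layers
fp32 = weight_memory_bytes(layers, 4)
int8 = weight_memory_bytes(layers, 1)
print(f"FP32: {fp32} B, INT8: {int8} B, saving: {fp32 / int8:.0f}x")  # 4x
```

Real designs combine many such techniques (pruning, compression, dataflow changes), which is why per-technique numbers of this kind are useful when composing a new accelerator.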
NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference
The inherent diversity of computation types within individual deep neural network (DNN) models necessitates a corresponding variety of computation units within hardware processors, leading to a significant constraint on computation efficiency during neural network execution. In this study, we introduce NeuralMatrix, a framework that transforms the computation of entire DNNs into linear matrix operations, effectively enabling their execution with one general matrix multiplication (GEMM) accelerator. By surmounting the constraints posed by the diverse computation types required by individual network models, this approach provides both generality, allowing a wide range of DNN models to be executed using a single GEMM accelerator, and application-specific levels of acceleration without extra special function units, as validated on mainstream DNNs and their variant models.
Comment: 12 pages, 4 figures; submitted to the 11th International Conference on Learning Representations.
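The abstract does not spell out the transformation itself; the following is only a rough Python sketch of one way a nonlinear operator can be folded into linear-algebra-friendly form, here replacing GELU with a piecewise-linear approximation so that a dense layer needs nothing beyond a GEMM and per-element multiply-adds. The layer sizes, grid range, and segment count are assumptions made for illustration, not the paper's method.

```python
import numpy as np

# Sketch: approximate a nonlinear activation (GELU) with piecewise-linear
# segments y ~= a*x + b on a fixed grid, so the heavy compute in a dense
# layer reduces to a matrix multiply plus cheap elementwise multiply-adds.
def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

knots = np.linspace(-6.0, 6.0, 65)          # assumed grid: 64 segments
ys = gelu(knots)
slopes = np.diff(ys) / np.diff(knots)
intercepts = ys[:-1] - slopes * knots[:-1]

def gelu_pwl(x):
    # Pick the segment for each element, then apply its a*x + b.
    idx = np.clip(np.searchsorted(knots, x) - 1, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

# One "linearized" dense layer: a GEMM followed by the per-element a*x + b.
W = np.random.randn(64, 64).astype(np.float32)
x = np.random.randn(8, 64).astype(np.float32)
out = gelu_pwl(x @ W)
print(np.max(np.abs(out - gelu(x @ W))))    # small approximation error
```

Finer or non-uniform grids trade approximation accuracy against the size of the per-segment coefficient tables, which is the kind of design choice a GEMM-only execution path has to manage.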
- …