2 research outputs found

    Automatic Creation of High-Bandwidth Memory Architectures from Domain-Specific Languages: The Case of Computational Fluid Dynamics

    Numerical simulations can help solve complex problems. Most of these algorithms are massively parallel and thus good candidates for FPGA acceleration thanks to spatial parallelism. Modern FPGA devices can leverage high-bandwidth memory technologies, but when applications are memory-bound, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. This development process requires hardware design skills that are uncommon among domain experts. In this paper, we propose an automated tool flow from a domain-specific language (DSL) for tensor expressions to generate massively parallel accelerators on HBM-equipped FPGAs. Designers can use this flow to integrate and evaluate various compiler or hardware optimizations. We use computational fluid dynamics (CFD) as a paradigmatic example. Our flow starts from the high-level specification of tensor operations and combines an MLIR-based compiler with an in-house hardware generation flow to generate systems with parallel accelerators and a specialized memory architecture that moves data efficiently, aiming at fully exploiting the available CPU-FPGA bandwidth. We simulated applications with millions of elements, achieving up to 103 GFLOPS with one compute unit and custom precision when targeting a Xilinx Alveo U280. Our FPGA implementation is up to 25x more energy efficient than expert-crafted Intel CPU implementations.
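
    The abstract does not show the DSL's concrete syntax, so the sketch below is only a NumPy analogue of the kind of high-level tensor expression (here a 7-point 3D stencil, typical of CFD solvers) that such a flow would take as input; the function name, array shapes, and precision are illustrative assumptions, not the paper's notation.

        # Illustrative sketch only: a pure tensor expression of the kind the
        # paper's DSL-to-FPGA flow would compile to HBM-backed accelerators.
        import numpy as np

        def jacobi_7pt(u):
            """One Jacobi sweep over the interior of a 3D field."""
            return (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                    u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                    u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 6.0

        u = np.random.rand(64, 64, 64).astype(np.float32)
        # Here evaluated in software; the paper's flow would instead map the
        # expression to parallel compute units with a custom memory architecture.
        u[1:-1, 1:-1, 1:-1] = jacobi_7pt(u)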

    Meta-programming for cross-domain tensor optimizations

    Many modern application domains crucially rely on tensor operations. The optimization of programs that operate on tensors poses difficulties that are not adequately addressed by existing languages and tools. Frameworks such as TensorFlow offer good abstractions for tensor operations, but target a specific domain, i.e. machine learning, and their optimization strategies cannot easily be adjusted to other domains. General-purpose optimization tools such as Pluto and existing meta-languages offer more flexibility in applying optimizations but lack abstractions for tensors. This work closes the gap between domain-specific tensor languages and general-purpose optimization tools by proposing the Tensor optimizations Meta-Language (TeML). TeML offers high-level abstractions for both tensor operations and loop transformations, and enables flexible composition of transformations into effective optimization paths. This compositionality is built into TeML's design, as our formal language specification will reveal. We also show that TeML can express tensor computations as comfortably as TensorFlow and that it can reproduce Pluto's optimization paths. Thus, optimized programs generated by TeML execute at least as fast as the corresponding Pluto programs. In addition, TeML enables optimization paths that often outperform Pluto.
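
    TeML's concrete syntax is not given in the abstract, so the following minimal Python sketch only illustrates the compositional idea it describes: loop transformations modeled as functions on an abstract loop-nest description and chained into a single optimization path. All names (tile, interchange, compose, the schedule field) are hypothetical stand-ins, not TeML constructs.

        # Minimal sketch, assuming transformations can be modeled as pure
        # functions over a loop-nest record; composing them yields a "path".
        from functools import reduce

        def tile(loop, size):
            # Hypothetical primitive: append a tiling step to the schedule.
            return {**loop, "schedule": loop["schedule"] + [f"tile({size})"]}

        def interchange(loop, i, j):
            # Hypothetical primitive: append a loop-interchange step.
            return {**loop, "schedule": loop["schedule"] + [f"interchange({i},{j})"]}

        def compose(*steps):
            """Build an optimization path applied left to right."""
            return lambda loop: reduce(lambda acc, step: step(acc), steps, loop)

        matmul = {"op": "C[i,j] += A[i,k] * B[k,j]", "schedule": []}
        path = compose(lambda l: tile(l, 32), lambda l: interchange(l, "i", "k"))
        print(path(matmul)["schedule"])  # ['tile(32)', 'interchange(i,k)']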