9 research outputs found

    Compiler and Architecture Design for Coarse-Grained Programmable Accelerators

    Get PDF
    abstract: The holy grail of computer hardware across all market segments has been to sustain performance improvement at the same pace as silicon technology scales. As the technology scales and the size of transistors shrinks, the power consumption and energy usage per transistor decrease. On the other hand, transistor density increases significantly with technology scaling. Due to technology factors, the reduction in power consumption per transistor is not sufficient to offset the increase in power consumption per unit area. Therefore, to improve performance, energy efficiency must be addressed at all design levels, from the circuit level to the application and algorithm levels. At the architectural level, one promising approach is to populate the system with hardware accelerators, each optimized for a specific task. One drawback of hardware accelerators is that they are not programmable; therefore, their utilization can be low, as each performs only one specific function. Using software-programmable accelerators is an alternative approach to achieving both high energy efficiency and programmability. Due to their intrinsic characteristics, software-programmable accelerators can exploit both instruction-level and data-level parallelism. The Coarse-Grained Reconfigurable Architecture (CGRA) is a software-programmable accelerator consisting of a number of word-level functional units. Motivated by the promising characteristics of software-programmable accelerators, the potential of CGRAs in future computing platforms is studied and an end-to-end CGRA research framework is developed. This framework covers three different aspects: CGRA architectural design, integration into a computing system, and the CGRA compiler. First, the design and implementation of a CGRA and its instruction set are presented. This design is then modeled in a cycle-accurate system simulator. The simulation platform enables us to investigate several problems associated with a CGRA when it is deployed as an accelerator in a computing system. Next, the problem of mapping a compute-intensive region of a program to a CGRA is formulated. From this formulation, several efficient algorithms are developed that utilize the CGRA's scarce resources effectively to minimize the running time of input applications. Finally, these mapping algorithms are integrated into a compiler framework to construct a compiler for CGRAs. Dissertation/Thesis. Doctoral Dissertation, Computer Science, 201
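    The mapping step summarized above can be pictured as placing a dataflow graph onto a grid of PEs under adjacency and occupancy constraints. Below is a minimal, hedged sketch of that idea, assuming a hypothetical 2x2 PE mesh and a toy greedy placement rule; it is an illustration only, not the algorithms developed in the dissertation.

```python
# Illustrative sketch (not the dissertation's algorithm) of placing a small
# dataflow graph onto a 2x2 CGRA-like PE grid. The operation names, mesh size,
# and greedy placement rule are assumptions made for illustration only.

from itertools import product

ROWS, COLS = 2, 2  # hypothetical PE mesh

# toy dataflow graph: op -> list of operand ops it depends on (topological order)
dfg = {
    "ld_a": [], "ld_b": [],
    "add":  ["ld_a", "ld_b"],
    "mul":  ["add", "ld_a"],
    "st":   ["mul"],
}

def neighbors(pe):
    """The same PE plus its 4-connected mesh neighbors."""
    r, c = pe
    return {(r + dr, c + dc) for dr, dc in [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
            if 0 <= r + dr < ROWS and 0 <= c + dc < COLS}

def place(dfg):
    """Greedy list scheduling: assign each op a (cycle, PE) so that every
    operand was produced in an earlier cycle on the same or an adjacent PE."""
    placement = {}  # op -> (cycle, pe)
    for op, deps in dfg.items():
        for cycle, pe in product(range(1, 16), product(range(ROWS), range(COLS))):
            routable = all(placement[d][0] < cycle and pe in neighbors(placement[d][1])
                           for d in deps)
            free = all(slot != (cycle, pe) for slot in placement.values())
            if routable and free:
                placement[op] = (cycle, pe)
                break
    return placement

for op, (cycle, pe) in place(dfg).items():
    print(f"{op:5s} -> cycle {cycle}, PE {pe}")
```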

    ์žฌ๊ตฌ์„ฑํ˜• ๊ตฌ์กฐ์—์„œ์˜ ํšจ์œจ์ ์ธ ์กฐ๊ฑด์‹คํ–‰ ๊ธฐ๋ฒ•

    Get PDF
    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2013. Advisor: ์ตœ๊ธฐ์˜ (Kiyoung Choi). Coarse-Grained Reconfigurable Architecture (CGRA) is one of the viable solutions for accelerating data-intensive applications in embedded systems. It typically consists of an array of processing elements (PEs) and a centralized controller, which together can provide high performance, flexibility, and low power. Parallel array processing reduces the execution time of applications, the reconfigurability of PEs allows changing their functionality, and a simplified control structure with static scheduling of instruction fetching and data communication minimizes power consumption. However, as applications become more complex and their data-intensive parts contain control flow, CGRAs face a challenge to their effectiveness. Since all PEs are controlled by a centralized unit, it is impossible to execute programs with control divergence among the PEs.
To overcome this problem, we can adopt a technique called predicated execution, the only solution known so far; however, conventional predication techniques have a negative impact on both performance and power consumption due to longer instruction words and unnecessary instruction-fetch/decode/nullify steps. Thus, this thesis reveals the performance and power issues of predicated execution when a CGRA executes both data- and control-intensive applications, issues which have not been well addressed yet. It then proposes high-performance and low-power predication mechanisms. Experiments conducted through gate-level simulation show that the proposed mechanism improves the energy-delay product by 11.9%, 14.7%, and 23.8% compared to three conventional techniques. In addition, this thesis reveals mapping issues that arise when mapping applications onto CGRAs with the proposed predication: a power-saving mode introduced into the PEs prohibits multiple conditionals from being parallelized if conventional mapping algorithms are used. Thus, this thesis proposes a framework that resolves this problem by mapping conditionals to different PEs. Experiments show that mapping results from the proposed approach achieve 2.21 times higher performance than those of the naïve approach.
Table of contents: Abstract; Chapter 1 Introduction; Chapter 2 Background and Related Work (2.1 Coarse-Grained Reconfigurable Architecture, 2.2 Predicated Execution Technique); Chapter 3 Conventional Predicated Execution Techniques (3.1 Partial Predication (Partial), 3.2 Condition-Based Full Predication (CondFull)); Chapter 4 State-Based Full Predication (4.1 Previous Approach (PseudoBranch), 4.2 Counter-Based Approach (StateFull), 4.3 Dual-Issue-Single-Execution (DISE), 4.4 Hybrid Predication); Chapter 5 Evaluation (5.1 Implementation, 5.2 Experimental Setup, 5.3 Experimental Results); Chapter 6 Mapping Framework (6.1 Motivation, 6.2 Proposed Approach, 6.3 Implementation, 6.4 Experiments); Chapter 7 Conclusion (7.1 Summary, 7.2 Applicable Scope and Future Work); Appendix; Abstract in Korean; Acknowledgments.
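    The core idea underlying the predication schemes above is if-conversion: a branch is replaced by instructions guarded by a predicate, so a lockstep PE array never has to diverge. The following is a minimal sketch of that transformation on a toy example; the scheme names from the thesis (Partial, CondFull, StateFull) appear only in comments for context, and the code does not reproduce their actual encodings.

```python
# Minimal sketch of if-conversion, the transformation behind predicated
# execution. The tiny example and the per-path behaviour are illustrative
# assumptions, not the instruction encodings used in the thesis.

def branching_version(a, b, c):
    # Original control flow: diverges, so a centrally controlled PE array
    # cannot follow both paths at once.
    if c:
        x = a + b
    else:
        x = a - b
    return x

def predicated_version(a, b, c):
    # If-converted form: both paths are issued and a predicate selects the
    # committed result. With partial predication, both ADD and SUB execute
    # and a select picks one result; with full predication (e.g., the
    # CondFull/StateFull schemes compared in the thesis), the instruction on
    # the false path is nullified or skipped, which is where the power
    # savings come from.
    p = bool(c)          # predicate computed once
    t = a + b            # executes under predicate p
    f = a - b            # executes under predicate not p
    return t if p else f # commit the selected value

assert branching_version(3, 2, True) == predicated_version(3, 2, True) == 5
assert branching_version(3, 2, False) == predicated_version(3, 2, False) == 1
```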

    Scalable Register File Architecture for CGRA Accelerators

    Get PDF
    abstract: Coarse-grained Reconfigurable Arrays (CGRAs) are promising accelerators capable of accelerating even non-parallel loops and loops with low trip counts. One challenge in compiling for CGRAs is to manage both recurring and nonrecurring variables in the register file (RF) of the CGRA. Although prior works have managed recurring variables via a rotating RF, they access nonrecurring variables either through a global RF or from a constant memory. The former does not scale well, and the latter degrades the mapping quality. This work proposes a hardware-software codesign approach to manage all the variables in a local, nonrotating RF. The hardware provides a modulo-addition-based indexing mechanism to enable correct addressing of recurring variables in a nonrotating RF. The compiler determines the number of registers required for each recurring variable and configures the boundary between the registers used for recurring and nonrecurring variables. The compiler also pre-loads the read-only variables and constants into the local registers in the prologue of the schedule. Synthesis and place-and-route results of the previous and the proposed RF designs show that the proposed solution achieves a 17% better cycle time. Experiments mapping several important, performance-critical loops collected from MiBench show that the proposed approach improves performance (through better mapping) by 18% compared to using constant memory. Dissertation/Thesis. Master's Thesis, Computer Science, 201
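    The modulo-addition-based indexing described above can be illustrated with a small model: each recurring (loop-carried) variable gets a compiler-chosen window of registers inside an otherwise ordinary RF, and the physical register is obtained by adding the iteration count modulo the window size. A minimal sketch, with the RF size, boundary, and index formula chosen purely for illustration and not taken from the thesis:

```python
# Minimal sketch of modulo-addition-based register indexing for recurring
# variables in a non-rotating register file. All sizes are illustrative.

RF_SIZE = 8
BOUNDARY = 4   # registers [0, BOUNDARY) hold recurring variables,
               # registers [BOUNDARY, RF_SIZE) hold nonrecurring ones

def recurring_reg(base, offset, iteration, window):
    """Physical register holding a recurring variable.

    base      -- first register of the variable's window (set by the compiler)
    offset    -- which prior iteration's value is referenced (0 = current)
    iteration -- current loop iteration
    window    -- number of registers the compiler reserved for this variable
    """
    return base + (iteration - offset) % window

# Example: variable 'acc' gets a 3-register window starting at register 0,
# which must lie entirely below the recurring/nonrecurring boundary.
BASE, WINDOW = 0, 3
assert BASE + WINDOW <= BOUNDARY
for it in range(6):
    write_reg = recurring_reg(BASE, offset=0, iteration=it, window=WINDOW)
    read_prev = recurring_reg(BASE, offset=1, iteration=it, window=WINDOW)
    print(f"iter {it}: write acc -> r{write_reg}, read acc[-1] <- r{read_prev}")
```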

    Application-Level Performance Improvement for Stream Program on CGRA-based systems

    Get PDF
    Department of Computer Engineering. Coarse-Grained Reconfigurable Architectures (CGRAs), often used as coprocessors for DSP and multimedia kernels, can deliver highly energy-efficient execution of compute-intensive kernels. At the same time, stream applications, which consist of many actors and the channels connecting them, provide natural representations of DSP applications and can therefore be a good match for CGRAs. We present our results of mapping DSP applications written in the StreamIt language to CGRAs, along with our mapping flow. One important challenge in mapping is how to manage the multitude of kernels in the application within the limited local memory of a CGRA, for which we present a novel integer linear programming (ILP)-based solution. Our evaluation results demonstrate that our software and hardware optimizations can help generate highly efficient mappings of stream applications to CGRAs, enabling far more energy-efficient execution (ranging from 7x worse to 50x better) compared to using state-of-the-art GP-GPUs. Further, we eliminate communication overhead and reduce computation overhead using a combination of synchronous/asynchronous processors and DMA. This optimization also improves performance by 17.1% on average compared to the baseline system.
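    The abstract does not spell out the ILP, but the general shape of such a local-memory management problem can be sketched as a 0/1 placement program: one binary variable per kernel decides residency in local memory, subject to a capacity constraint. A minimal sketch with hypothetical kernel sizes and benefits, using the PuLP package only as a convenient way to express and solve the model; it is not the formulation used in the paper.

```python
# Illustrative 0/1 ILP for deciding which kernels stay resident in a CGRA's
# limited local memory. Kernel names, sizes, benefits, and the objective are
# assumptions for illustration. Requires the PuLP package.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

LOCAL_MEM_KB = 64                                              # hypothetical scratchpad size
size_kb = {"fft": 20, "fir": 12, "viterbi": 40, "scale": 6}
benefit = {"fft": 9.0, "fir": 4.5, "viterbi": 11.0, "scale": 1.5}  # e.g. cycles saved

prob = LpProblem("kernel_residency", LpMaximize)
keep = {k: LpVariable(f"keep_{k}", cat=LpBinary) for k in size_kb}

# Objective: maximize total benefit of the kernels kept in local memory.
prob += lpSum(benefit[k] * keep[k] for k in size_kb)
# Capacity constraint: resident kernels must fit in the local memory.
prob += lpSum(size_kb[k] * keep[k] for k in size_kb) <= LOCAL_MEM_KB

prob.solve()
resident = [k for k in size_kb if keep[k].value() == 1]
print("kernels kept in local memory:", resident)
```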

    ACiS: smart switches with application-level acceleration

    Full text link
    Network performance has contributed fundamentally to the growth of supercomputing over the past decades. In parallel, High Performance Computing (HPC) peak performance has depended, first, on ever faster/denser CPUs and, then, on increasing density alone. As operating frequency, and now feature size, have levelled off, two new approaches are becoming central to achieving higher net performance: configurability and integration. Configurability enables hardware to map to the application, as well as vice versa. Integration enables system components that have generally been single-function (e.g., a network that transports data) to take on additional functionality (e.g., also operating on that data). More generally, integration enables compute-everywhere: not just in the CPU and accelerator, but also in the network and, more specifically, in the communication switches. In this thesis, we propose four novel methods of enhancing HPC performance through Advanced Computing in the Switch (ACiS). More specifically, we propose various flexible and application-aware accelerators that can be embedded into or attached to existing communication switches to improve the performance and scalability of HPC and Machine Learning (ML) applications. We follow a modular design discipline by introducing composable plugins that successively add ACiS capabilities. In the first work, we propose an inline accelerator in communication switches for user-definable collective operations. MPI collective operations can often be performance killers in HPC applications; we seek to remove this bottleneck by offloading them to reconfigurable hardware within the switch itself. We also introduce a novel mechanism that enables the hardware to support MPI communicators of arbitrary shape and that is scalable to very large systems. In the second work, we propose a look-aside accelerator for communication switches that is capable of processing packets at line rate; functions requiring loops and state are addressed by this method. The proposed in-switch accelerator is based on a RISC-V-compatible Coarse-Grained Reconfigurable Array (CGRA). To facilitate usability, we have developed a framework to compile user-provided C/C++ code into the back-end instructions that configure the accelerator. In the third work, we extend ACiS to support fused collectives and the combining of collectives with map operations. We observe that there is an opportunity to fuse communication (collectives) with computation; since the computation varies across applications, the ACiS support in this method must be programmable. In the fourth work, we propose that switches with ACiS support can control and manage the execution of applications, i.e., that the switch be an active device with decision-making capabilities. Switches have a central view of the network; they can collect telemetry information, monitor application behavior, and then use this information for control, decision-making, and coordination of nodes. We evaluate the feasibility of ACiS through extensive RTL-based simulation as well as deployment in an open-access cloud infrastructure. Using this simulation framework, when considering a Graph Convolutional Network (GCN) application as a case study, a speedup of 3.4x on average across five real-world datasets is achieved on 24 nodes compared to a CPU cluster without ACiS capabilities.
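    As context for the fused-collective work, the host-side pattern being targeted looks like a map step followed by a separate collective; in-switch fusion would collapse these into one offloaded operation. A minimal sketch of that baseline pattern using mpi4py, with the data sizes and the element-wise map chosen only for illustration (ACiS itself is not modeled here):

```python
# Baseline map + allreduce pattern (the kind of communication/computation
# pair that switch-side fusion targets). Requires mpi4py and an MPI launcher,
# e.g.: mpirun -n 4 python this_file.py

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.arange(4, dtype=np.float64) + rank   # each node's local data
mapped = local * local                          # map step (element-wise square)

result = np.empty_like(mapped)
comm.Allreduce(mapped, result, op=MPI.SUM)      # collective step (sum across nodes)

if rank == 0:
    print("map + allreduce result:", result)
```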

    From constraint programming to heterogeneous parallelism

    Get PDF
    The scaling limitations of multi-core processor development have led to a diversification of the processor cores used within individual computers. Heterogeneous computing has become widespread, involving the cooperation of several structurally different processor cores. Central processing unit (CPU) cores are most frequently complemented with graphics processors (GPUs), which despite their name are suitable for many highly parallel computations besides computer graphics. Furthermore, deep learning accelerators are rapidly gaining relevance. Many applications could profit from heterogeneous computing but are held back by the surrounding software ecosystems. Heterogeneous systems are a challenge for compilers in particular, which usually target only the increasingly marginalised homogeneous CPU cores. Therefore, heterogeneous acceleration is primarily accessible via libraries and domain-specific languages (DSLs), requiring application rewrites and resulting in vendor lock-in. This thesis presents a compiler method for automatically targeting heterogeneous hardware from existing sequential C/C++ source code. A new constraint programming method enables the declarative specification and automatic detection of computational idioms within compiler intermediate representation code. Examples of computational idioms are stencils, reductions, and linear algebra. Computational idioms denote algorithmic structures that commonly occur in performance-critical loops. Consequently, well-designed accelerator DSLs and libraries support computational idioms with their programming models and function interfaces. The detection of computational idioms in their middle end enables compilers to incorporate DSL and library backends for code generation. These backends leverage domain knowledge for the efficient utilisation of heterogeneous hardware. The constraint programming methodology is first derived on an abstract model and then implemented as an extension to LLVM. Two constraint programming languages are designed to target this implementation: the Compiler Analysis Description Language (CAnDL) and the extended Idiom Detection Language (IDL). These languages are evaluated on a range of different compiler problems, culminating in a complete heterogeneous acceleration pipeline integrated with the Clang C/C++ compiler. This pipeline was evaluated on the established benchmark collections NPB and Parboil. The approach was applicable to 10 of the benchmark programs, resulting in significant speedups from 1.26x on "histo" to 275x on "sgemm" when starting from sequential baseline versions. In summary, this thesis shows that the automatic recognition of computational idioms during compilation enables the heterogeneous acceleration of sequential C/C++ programs. Moreover, the declarative specification of computational idioms is expressed in novel declarative programming languages, and it is demonstrated that constraint programming on Static Single Assignment intermediate code is a suitable method for their automatic detection.
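    To make the idiom-detection idea concrete, consider recognizing a reduction in a toy SSA-like loop body: an associative update whose result feeds back into one of its own operands through a loop-carried phi. The sketch below is a drastically simplified stand-in for the CAnDL/IDL constraint approach and does not touch LLVM; the instruction encoding and matching rule are assumptions made for illustration.

```python
# Toy reduction-idiom detector over a hand-written, SSA-like loop body.
# The tuple format (result, opcode, operands) and the rules are illustrative.

# Loop body in SSA form:
loop_body = [
    ("t0",   "load", ("a", "i")),      # t0   = a[i]
    ("acc1", "fadd", ("acc0", "t0")),  # acc1 = acc0 + t0
]
loop_carried = {"acc1": "acc0"}  # phi: acc0 <- acc1 along the loop back edge

ASSOCIATIVE = {"fadd", "fmul", "add", "mul"}

def find_reductions(body, phis):
    """Return SSA values that form a reduction: an associative update whose
    result feeds back into one of its own operands via a loop-carried phi."""
    found = []
    for result, opcode, operands in body:
        if opcode in ASSOCIATIVE and phis.get(result) in operands:
            found.append(result)
    return found

print("reduction candidates:", find_reductions(loop_body, loop_carried))
# -> ['acc1'], because acc1 = acc0 + t0 and acc0 is fed by acc1 around the loop
```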

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    Get PDF
    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Path Selection Based Acceleration of Conditionals in CGRAs

    No full text