
    Exploiting Fine-Grain Ordered Parallelism in Dense Matrix Algorithms

    Dense linear algebra kernels are critical for wireless applications, and the oncoming proliferation of 5G only amplifies their importance. Many such matrix algorithms are inductive, and exhibit ample amounts of fine-grain ordered parallelism -- where multiple computations flow with fine-grain producer/consumer dependences, and where the iteration domain is not easily tileable. Synchronization overheads make multi-core parallelism ineffective, and the non-tileable iterations make the vector-VLIW approach less effective, especially for the typically modest-sized matrices. Because CPUs and DSPs lose an order of magnitude in performance and hardware utilization, costly and inflexible ASICs are often employed in signal processing pipelines. A programmable accelerator with similar performance/power/area would be highly desirable. We find that fine-grain ordered parallelism can be exploited by supporting: 1. fine-grain stream-based communication/synchronization; 2. inductive data-reuse and memory access patterns; 3. implicit vector-masking for partial vectors; 4. hardware specialization of dataflow criticality. In this work, we propose REVEL, a next-generation DSP architecture. It supports the above features in its ISA and microarchitecture, and further uses a novel vector-stream control paradigm to reduce control overheads. Across a suite of linear algebra kernels, REVEL outperforms equally provisioned DSPs by 4.6x-37x in latency and achieves 8.3x higher performance per mm^2. It requires only 2.2x higher power to achieve the same performance as ideal ASICs, at about 55% of the combined area.
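
    To make the notion of fine-grain ordered parallelism concrete, the sketch below (plain C, not taken from the paper) shows a forward-substitution kernel of the kind the abstract describes: each row consumes values produced by earlier rows, and the inner trip count grows inductively, so the iteration domain is triangular, resists tiling, and leaves partial vectors that would benefit from implicit masking.

    /*
     * Forward substitution: solve L*x = b for lower-triangular L.
     * Row i consumes x[0..i-1] produced by earlier rows (fine-grain
     * producer/consumer dependences), and the inner loop's trip count
     * grows with i, so the iteration domain is not easily tileable.
     */
    #include <stddef.h>

    void forward_subst(size_t n, const double L[], const double b[], double x[])
    {
        for (size_t i = 0; i < n; ++i) {        /* each row depends on all prior rows   */
            double acc = b[i];
            for (size_t j = 0; j < i; ++j)      /* inductive trip count: 0, 1, ..., n-1 */
                acc -= L[i * n + j] * x[j];     /* consumes earlier-produced results    */
            x[i] = acc / L[i * n + i];          /* produces the value later rows need   */
        }
    }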