117 research outputs found

    NTX: An Energy-efficient Streaming Accelerator for Floating-point Generalized Reduction Workloads in 22 nm FD-SOI

    Get PDF
    Specialized coprocessors for Multiply-Accumulate (MAC) intensive workloads such as Deep Learning are becoming widespread in SoC platforms, from GPUs to mobile SoCs. In this paper we revisit NTX (an efficient accelerator developed for training Deep Neural Networks at scale) as a generalized MAC and reduction streaming engine. The architecture consists of a set of 32-bit floating-point streaming co-processors that are loosely coupled to a RISC-V core in charge of orchestrating data movement and computation. Post-layout results of a recent silicon implementation in 22 nm FD-SOI technology show the accelerator's capability to deliver up to 20 Gflop/s at 1.25 GHz and 168 mW. Based on these results we show that a version of NTX scaled down to 14 nm can achieve a 3× energy efficiency improvement over contemporary GPUs at 10.4× less silicon area, and a compute performance of 1.4 Tflop/s for training large state-of-the-art networks with full floating-point precision. An extended evaluation of MAC-intensive kernels shows that NTX can consistently achieve up to 87% of its peak performance across general reduction workloads beyond machine learning. Its modular architecture enables deployment at different scales, ranging from high-performance GPU-class to low-power embedded scenarios.
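
    To make the "generalized MAC and reduction" workload class concrete, the sketch below shows a plain dot-product reduction in C. It is an illustrative example only, not code from the paper; on NTX-style hardware the multiply-accumulate loop would run in the streaming co-processor while the RISC-V core programs the address generators.

```c
#include <stddef.h>

/* Illustrative generalized reduction kernel: acc += a[i] * b[i].
 * On an NTX-style engine, the 32-bit FMA stream would execute in the
 * co-processor while a RISC-V core orchestrates data movement; this
 * plain C version only shows the computation being offloaded. */
float mac_reduce(const float *a, const float *b, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];   /* one fused multiply-accumulate per element */
    return acc;
}
```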

    A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets

    Get PDF
    Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by 7× over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario. We demonstrate a 2.7× energy efficiency improvement of NTX over contemporary GPUs at 4.4× less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing 2.1× energy savings or 3.1× performance improvement over a GPU-based system.
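
    The gradient-based training patterns mentioned above are dominated by the same MAC structure in the forward, backward, and weight-gradient passes. The following sketch is an assumption-laden illustration (layer shapes and function names are not from the paper): the weight-gradient accumulation of a fully connected layer, an outer-product MAC pattern that a near-memory engine can stream directly out of memory.

```c
#include <stddef.h>

/* Illustrative weight-gradient accumulation for a fully connected layer:
 *   dW[o][i] += dOut[o] * in[i]
 * This outer product of the output gradient with the layer input (and the
 * analogous pattern in convolutions) is the kind of high-precision MAC
 * reduction a near-memory engine would stream; names and shapes here are
 * hypothetical. */
void fc_weight_grad(float *dW, const float *dOut, const float *in,
                    size_t n_out, size_t n_in)
{
    for (size_t o = 0; o < n_out; ++o)
        for (size_t i = 0; i < n_in; ++i)
            dW[o * n_in + i] += dOut[o] * in[i];
}
```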

    A 0.80pJ/flop, 1.24Tflop/sW 8-to-64 bit Transprecision Floating-Point Unit for a 64 bit RISC-V Processor in 22nm FD-SOI

    Get PDF
    The crisis of Moore's law and new dominant Machine Learning workloads require a paradigm shift towards finely tunable-precision (a.k.a. transprecision) computing. More specifically, we need floating-point circuits that are capable of operating on many formats with high flexibility. We present the first silicon implementation of a 64-bit transprecision floating-point unit. It fully supports the standard double, single, and half precision, alongside custom bfloat and 8-bit formats. Operations occur on scalars or 2-, 4-, or 8-way SIMD vectors. We have integrated the 247 kGE unit into a 64-bit application-class RISC-V processor core, where the added transprecision support accounts for an energy and area overhead of merely 11% and 9%, respectively, while achieving speedups and per-datum energy gains of 7.3× and 7.94×. We implemented the design in a 22 nm FD-SOI technology. The unit achieves energy efficiencies between 75 Gflop/sW and 1.24 Tflop/sW, and a performance between 1.85 Gflop/s and 14.83 Gflop/s, across formats.
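
    The transprecision principle (use narrow formats for the bulk of the data while keeping accuracy under control) can be mimicked in plain C. The sketch below is only an analogy under that assumption: it uses standard float operands with a double accumulator, not the unit's custom bfloat and 8-bit formats or its SIMD vectors.

```c
#include <stdio.h>

/* Transprecision-flavoured reduction: narrow (float) operands, wide
 * (double) accumulator. The hardware unit described above offers FP64,
 * FP32, FP16, bfloat and 8-bit formats natively; this sketch only
 * illustrates the precision/accuracy trade-off with standard C types. */
static double dot_narrow_data_wide_acc(const float *a, const float *b, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc += (double)a[i] * (double)b[i];
    return acc;
}

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {0.5f, 0.25f, 0.125f, 0.0625f};
    printf("dot = %f\n", dot_narrow_data_wide_acc(a, b, 4));
    return 0;
}
```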

    Ara: A 1 GHz+ Scalable and Energy-Efficient RISC-V Vector Processor with Multi-Precision Floating Point Support in 22 nm FD-SOI

    Full text link
    In this paper, we present Ara, a 64-bit vector processor based on the version 0.5 draft of RISC-V's vector extension, implemented in GlobalFoundries 22FDX FD-SOI technology. Ara's microarchitecture is scalable, as it is composed of a set of identical lanes, each containing part of the processor's vector register file and functional units. It achieves up to 97% FPU utilization when running a 256 × 256 double-precision matrix multiplication on sixteen lanes. Ara runs at more than 1 GHz in the typical corner (TT/0.80 V/25 °C), achieving a performance of up to 33 DP-GFLOPS. In terms of energy efficiency, Ara achieves up to 41 DP-GFLOPS/W under the same conditions, which is slightly superior to similar vector processors found in the literature. An analysis of several vectorizable linear algebra computation kernels for a range of different matrix and vector sizes gives insight into performance limitations and bottlenecks for vector processors and outlines directions to maintain high energy efficiency even for small matrix sizes, where the vector architecture achieves suboptimal utilization of the available FPUs.
    Comment: 13 pages. Accepted for publication in IEEE Transactions on Very Large Scale Integration Systems.
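
    The 97% FPU utilization figure refers to a 256 × 256 double-precision matrix multiplication; as a point of reference, a scalar C version of that benchmark is sketched below. It is not Ara's vectorized code: on Ara the inner loops would be expressed with RISC-V vector instructions so that the identical lanes share the fused multiply-add work.

```c
#include <stddef.h>

#define N 256

/* Scalar reference for the 256x256 double-precision matmul benchmark,
 * C = A * B. On a vector processor such as Ara, the j/k loops would be
 * mapped onto vector registers and distributed across the lanes; this
 * plain C version only shows the computation whose FPU utilization is
 * reported above. */
void dgemm_ref(double A[N][N], double B[N][N], double C[N][N])
{
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j) {
            double acc = 0.0;
            for (size_t k = 0; k < N; ++k)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }
}
```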

    Proinflammatory cytokines in acute myocardial infarction with and without cardiogenic shock

    Get PDF
    Background: Inflammatory response is an important feature of acute coronary syndromes and myocardial infarction (MI). The prognostic value of proinflammatory cytokines in patients with acute MI complicated by cardiogenic shock is unknown. Methods and results: In 41 patients admitted with acute MI (age 60 ± 11 years, six females, 19 Killip class IV), serial plasma concentrations of tumor necrosis factor alpha (TNF-α), interleukin 6 (IL-6) and interleukin 1 receptor antagonist (IL-1Ra) were measured. Seven patients with cardiogenic shock (CS) developed a systemic inflammatory response syndrome (SIRS). Patients with CS, particularly those who developed SIRS, showed significantly higher cytokine levels than patients with uncomplicated MI. In patients with CS and SIRS, peak levels of IL-1Ra were 223,973 pg/ml, IL-6 252.8 pg/ml and TNF-α 7.0 pg/ml. In CS without SIRS, IL-1Ra levels were 19,988 pg/ml, IL-6 109.3 pg/ml and TNF-α 3.8 pg/ml. In uncomplicated MI, peak IL-1Ra levels were 1,088 pg/ml, IL-6 34.1 pg/ml and TNF-α 2.6 pg/ml. Conclusions: The inflammation-associated cytokines TNF-α, IL-6 and IL-1Ra are significantly elevated in patients with MI complicated by CS when compared to patients with uncomplicated MI. Among shock patients, IL-1Ra levels are promising diagnostic markers for early identification of patients developing SIRS, heralding a poor outcome.