37 research outputs found

    Energy Optimization of FPGA-Based Stream-Oriented Computing with Power Gating

    Get PDF

    Optimised OpenCL workgroup synthesis for hybrid ARM-FPGA devices

    Get PDF

    Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles

    Get PDF
    The combination of machine learning and heterogeneous embedded platforms enables new potential for developing sophisticated control concepts applicable to the fields of vehicle dynamics and ADAS. This interdisciplinary work provides enabler solutions, ultimately implementing fast predictions using neural networks (NNs) on field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), while applying them to a challenging application: torque vectoring on a multi-electric-motor vehicle for enhanced vehicle dynamics. The motivation for this work is established by discussing multiple domains of the technological context as well as the constraints of the automotive field, which contrast with the attractiveness of exploiting the capabilities of new embedded platforms to apply advanced control algorithms to complex control problems. In this particular case, we target enhanced vehicle dynamics on a multi-motor electric vehicle, benefiting from the greater degrees of freedom and controllability offered by such powertrains. Considering the constraints of the application and the implications of the selected multivariable optimization challenge, we propose an NN that provides batch predictions for real-time optimization. This leads to the major contribution of this work: efficient NN implementations on two intrinsically parallel embedded platforms, a GPU and an FPGA, following an analysis of the theoretical and practical implications of their different operating paradigms, in order to efficiently harness their computing potential while gaining insight into their peculiarities. The achieved results exceed expectations and additionally provide a representative illustration of the strengths and weaknesses of each kind of platform. Consequently, having shown the applicability of the proposed solutions, this work contributes valuable enablers for further developments following similar fundamental principles. Some of the results presented in this work are related to activities within the 3Ccar project, which has received funding from the ECSEL Joint Undertaking under grant agreement No. 662192. This Joint Undertaking received support from the European Union's Horizon 2020 research and innovation programme and Germany, Austria, Czech Republic, Romania, Belgium, United Kingdom, France, Netherlands, Latvia, Finland, Spain, Italy, Lithuania. This work was also partly supported by the project ENABLES3, which received funding from the ECSEL Joint Undertaking under grant agreement No. 692455-2
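
    To make the batch-prediction idea concrete, the following NumPy sketch (an illustration under assumed layer sizes, not the authors' implementation) evaluates a small feed-forward NN on a whole batch of candidate inputs at once; this batch dimension is exactly the parallelism a GPU or FPGA implementation would exploit.

```python
import numpy as np

# Hypothetical sizes: 4 inputs per candidate (e.g. a torque split plus
# vehicle-state features), one hidden layer, 1 predicted cost per candidate.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)

def predict_batch(candidates: np.ndarray) -> np.ndarray:
    """Forward pass over a whole batch at once; on a GPU or FPGA this
    batch dimension is what exposes the parallelism."""
    h = np.maximum(candidates @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                          # predicted cost per candidate

batch = rng.uniform(-1.0, 1.0, size=(256, 4))   # 256 candidate inputs
costs = predict_batch(batch)
print(batch[np.argmin(costs)])                  # best candidate under the NN
```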

    FPT: a Fixed-Point Accelerator for Torus Fully Homomorphic Encryption

    Get PDF
    Fully Homomorphic Encryption (FHE) is a technique that allows computation on encrypted data. It has the potential to drastically change privacy considerations in the cloud, but high computational and memory overheads are preventing its broad adoption. TFHE is a promising Torus-based FHE scheme that heavily relies on bootstrapping, the noise-removal tool invoked after each encrypted logical/arithmetical operation. We present FPT, a Fixed-Point FPGA accelerator for TFHE bootstrapping. FPT is the first hardware accelerator to heavily exploit the inherent noise present in FHE calculations. Instead of double or single-precision floating-point arithmetic, it implements TFHE bootstrapping entirely with approximate fixed-point arithmetic. Using an in-depth analysis of noise propagation in bootstrapping FFT computations, FPT is able to use noise-trimmed fixed-point representations that are up to 50% smaller than prior implementations that prefer floating-point or integer FFTs. FPT is built as a streaming processor inspired by traditional streaming DSPs: it instantiates directly cascaded high-throughput computational stages, with minimal control logic and routing networks. We explore different throughput-balanced compositions of streaming kernels with a user-configurable streaming width in order to construct a full bootstrapping pipeline. Our proposed approach allows 100% utilization of arithmetic units and requires only a small bootstrapping key cache, enabling an entirely compute-bound bootstrapping throughput of 1 BS / 35 µs. This is in stark contrast to the established classical CPU approach to FHE bootstrapping acceleration, which is typically constrained by memory and bandwidth. FPT is fully implemented and evaluated as a bootstrapping FPGA kernel for an Alveo U280 datacenter accelerator card. FPT achieves two to three orders of magnitude higher bootstrapping throughput than existing CPU-based implementations, and 2.5× higher throughput compared to recent ASIC emulation experiments
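
    As an illustrative sketch only (not FPT's actual datapath or its word widths), the Python below contrasts double-precision and truncated fixed-point evaluation of a complex twiddle multiplication, the core FFT operation whose error tolerance the abstract exploits; the 14-fraction-bit format chosen here is an assumption.

```python
import math

FRAC_BITS = 14  # assumed fractional width; FPT's actual widths differ

def to_fix(x: float) -> int:
    """Quantize to a signed fixed-point integer with FRAC_BITS fraction bits."""
    return round(x * (1 << FRAC_BITS))

def fix_cmul(ar, ai, br, bi):
    """Complex multiply in fixed point, truncating back to FRAC_BITS."""
    rr = (ar * br - ai * bi) >> FRAC_BITS
    ri = (ar * bi + ai * br) >> FRAC_BITS
    return rr, ri

# Twiddle factor W = exp(-2*pi*i/8) applied to the point (1 + 0.5i).
w = complex(math.cos(-2 * math.pi / 8), math.sin(-2 * math.pi / 8))
exact = (1 + 0.5j) * w
rr, ri = fix_cmul(to_fix(1.0), to_fix(0.5), to_fix(w.real), to_fix(w.imag))
approx = complex(rr, ri) / (1 << FRAC_BITS)
# Rounding error on the order of 2^-14, far below FHE ciphertext noise.
print(abs(exact - approx))
```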

    Trusted SoC Realization for Remote Dynamic IP Integration

    Get PDF
    Today, field-programmable gate arrays (FPGAs) offer enormous computing power and flexibility. They are also often integrated on a single chip with embedded multicore processors, DSP engines, and memory controllers, which makes them suitable for large and complex applications. At the same time, advances in high-level synthesis and the availability of standardized interfaces (such as the Advanced eXtensible Interface 4) have led design houses to develop specialized and novel functionality. All of this created a demand for outsourcing development or licensing FPGA IPs (intellectual property). A pay-per-use IP licensing model in which these IPs are protected from all market participants benefits the IP developers. Moreover, developers of FPGA systems are usually small to medium-sized enterprises that can profit from such a licensing model in terms of time to market and per-unit cost. Several IP licensing models and protection solutions from academia and industry can be deployed, but they suffer from numerous security problems. In some cases, the proposed security measures impose unnecessary resource overhead and restrictions on system developers, i.e., they cannot use essential features of their device. Furthermore, they ignore two functional challenges: floorplanning the IP on the programmable logic (PL) and generating the IP's end product (the bitstream) independently of the overall design. This work proposes a pay-per-use licensing scheme and realizes it using a security framework (SFW) to address all of these challenges. The presented scheme is pragmatic, less restrictive for system developers, and offers security against IP theft. In addition, measures are taken to protect the system from IPs containing malicious circuits. The security framework comprises a trusted operating system, a rich operating system, several supporting components (e.g., TrustZone logic, decryption circuits resistant to side-channel attacks (SCA)), and software components, e.g., for bitstream analysis. A device running the SFW can be considered a trusted device that can communicate directly with a repository or an IP core developer to acquire IPs in encrypted form. Decryption and authentication of the IP take place on the device, which reduces the attack surface and makes it less vulnerable to IP theft. Plaintext IPs are stored in protected memory of the trusted operating system. The plaintext IP is then analyzed and configured onto the programmable logic only if it is authentic and contains no malicious circuits. The bitstream-analysis functionality and the SFW subcomponents allow the PL resources to be partitioned into secure and insecure resources, i.e., they extend the concept of the trusted execution environment (TEE) to the PL. This is the first work to extend the TEE concept to programmable logic.
    The SCA-resistant decryption circuit mentioned above is an implementation of the Advanced Encryption Standard, modified to resist electromagnetic and power-consumption leakage. The protected design features two countermeasures: the first is based on a multitude of different implementation variants and varying target positions during configuration, while the second uses only different implementation variants. These countermeasures are also scalable at runtime. The evaluation also considers the impact of scalability on area requirements and security strength. Furthermore, the previously mentioned functional challenge of IP floorplanning is addressed by proposing a fine-grained automatic floorplanner based on mixed-integer linear programming, which supports current FPGA generations with larger and more complex devices. The floorplanner maps a set of IPs onto the FPGA by creating precise reconfigurable regions, thereby maximizing the resources remaining available for the overall design. The second functional challenge is that existing tools offer no native functionality for generating IPs in a standalone environment. This challenge is addressed by proposing an independent IP generation approach. Market participants can use this approach to generate the IPs of a design independently of the overall design, without compromising the compatibility of the IPs with the overall design
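
    A minimal sketch of the on-device acquire-decrypt-verify flow described above, assuming AES-GCM for combined decryption and authentication; the key handling, transport, and function names are all hypothetical simplifications of the framework (the real design keeps key and plaintext inside the TEE and scans the bitstream for malicious circuits before configuration).

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def install_ip(encrypted_ip: bytes, nonce: bytes, device_key: bytes) -> None:
    """Hypothetical on-device flow: decrypt + authenticate, then configure.

    AES-GCM verifies integrity during decryption, so a tampered or
    unauthorized bitstream raises InvalidTag before it ever reaches the PL.
    """
    aead = AESGCM(device_key)
    bitstream = aead.decrypt(nonce, encrypted_ip, None)
    if not passes_bitstream_analysis(bitstream):    # placeholder for the SFW check
        raise ValueError("bitstream rejected: possible malicious circuitry")
    configure_programmable_logic(bitstream)         # placeholder for PL configuration

def passes_bitstream_analysis(bitstream: bytes) -> bool:
    return True  # stand-in; the framework analyzes the plaintext IP here

def configure_programmable_logic(bitstream: bytes) -> None:
    pass  # stand-in; on a real SoC this drives the configuration port

# Example round trip: vendor side encrypts, device side installs.
# (A fixed nonce is for this toy example only; never reuse nonces in practice.)
key, nonce = AESGCM.generate_key(bit_length=256), b"\x00" * 12
install_ip(AESGCM(key).encrypt(nonce, b"<ip bitstream>", None), nonce, key)
```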

    Advances in ILP-based Modulo Scheduling for High-Level Synthesis

    Get PDF
    In today's heterogeneous computing world, field-programmable gate arrays (FPGAs) represent the energy-efficient alternative to generic processor cores and graphics accelerators. However, due to their radically different computing model, automatic design methods such as high-level synthesis (HLS) are needed to harness their full power. HLS raises the abstraction level to behavioural descriptions of algorithms, thus freeing designers from dealing with tedious low-level concerns and enabling a rapid exploration of different microarchitectures for the same input specification. In an HLS tool, scheduling is the most influential step for the performance of the generated accelerator. Specifically, modulo schedulers enable a pipelined execution, which is a key technique for speeding up the computation by extracting more parallelism from the input description. In this thesis, we make a case for the use of integer linear programming (ILP) as a framework for modulo scheduling approaches. First, we argue that ILP-based modulo schedulers are practically usable in the HLS context. Secondly, we show that the ILP framework enables a novel approach to the automatic design of FPGA accelerators. We substantiate the first claim by proposing a new, flexible ILP formulation for the modulo scheduling problem, and evaluate it experimentally with a diverse set of realistic test instances. While solving an ILP may incur an exponential runtime in the worst case, we observe that simple countermeasures, such as setting a time limit, help to contain the practical impact of outlier instances. Furthermore, we present an algorithm to compress problems before the actual scheduling. An HLS-generated microarchitecture is comprised of operators, i.e. single-purpose functional units such as a floating-point multiplier. Usually, the allocation of operators is determined before scheduling, even though both problems are interdependent. We therefore investigate an extension of the modulo scheduling problem that combines both concerns in a single model. Based on this extension, we present a novel multi-loop scheduling approach capable of finding the fastest microarchitecture that still fits on a given FPGA device, an optimisation problem that current commercial HLS tools cannot solve. This proves our second claim
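
    To make the ILP framing concrete, here is a hedged sketch (not the thesis's actual formulation) of resource-constrained modulo scheduling for a fixed initiation interval II, written with the PuLP library; the tiny dependence graph, latencies, and resource budget are invented for illustration. Each operation's start time is split into a congruence-class choice (binary variables) plus a multiple of II, dependences become linear constraints, and the per-slot resource bound models the modulo reservation table. The solver time limit is the practical countermeasure for outlier instances mentioned above.

```python
import pulp

II = 2                      # candidate initiation interval (increase on infeasibility)
ops = ["load", "mul", "add", "store"]
latency = {"load": 2, "mul": 3, "add": 1, "store": 1}
edges = [("load", "mul", 0), ("mul", "add", 0), ("add", "store", 0),
         ("add", "mul", 2)]                       # (src, dst, loop-carried distance)
uses_mult = {"mul"}                               # ops competing for the 1 multiplier

prob = pulp.LpProblem("modulo_schedule", pulp.LpMinimize)
# x[i][s] = 1 iff op i starts in congruence class s (mod II); k[i] = stage number.
x = {i: [pulp.LpVariable(f"x_{i}_{s}", cat="Binary") for s in range(II)] for i in ops}
k = {i: pulp.LpVariable(f"k_{i}", lowBound=0, cat="Integer") for i in ops}
t = {i: pulp.lpSum(s * x[i][s] for s in range(II)) + II * k[i] for i in ops}

for i in ops:                                     # exactly one slot per op
    prob += pulp.lpSum(x[i]) == 1
for src, dst, dist in edges:                      # dependence constraints
    prob += t[dst] >= t[src] + latency[src] - II * dist
for s in range(II):                               # modulo resource constraint
    prob += pulp.lpSum(x[i][s] for i in uses_mult) <= 1

prob += pulp.lpSum(t[i] for i in ops)             # prefer a compact schedule
prob.solve(pulp.PULP_CBC_CMD(timeLimit=10, msg=False))
print({i: int(pulp.value(t[i])) for i in ops}, "status:", pulp.LpStatus[prob.status])
```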

    Tools for efficient Deep Learning

    Get PDF
    In the era of Deep Learning (DL), there is a fast-growing demand for building and deploying Deep Neural Networks (DNNs) on various platforms. This thesis proposes five tools to address the challenges of designing DNNs that are efficient in time, resources, and power consumption. We first present Aegis and SPGC to address the challenges in improving the memory efficiency of DL training and inference. Aegis makes mixed-precision training (MPT) stabler through layer-wise gradient scaling. Empirical experiments show that Aegis can improve MPT accuracy by up to 4%. SPGC focuses on structured pruning: replacing standard convolution with group convolution (GConv) to avoid irregular sparsity. SPGC formulates GConv pruning as a channel permutation problem and proposes a novel heuristic polynomial-time algorithm. Common DNNs pruned by SPGC achieve up to 1% higher accuracy than prior work. This thesis also addresses the challenges of bridging the gap between DNN descriptions and executables, with Polygeist for software and POLSCA for hardware. Many novel techniques, e.g. statement splitting and memory partitioning, are explored and used to expand polyhedral optimisation. Polygeist speeds up sequential and parallel software execution by 2.53 and 9.47 times respectively on Polybench/C. POLSCA achieves a 1.5 times speedup over hardware designs directly generated from high-level synthesis on Polybench/C. Moreover, this thesis presents Deacon, a framework that generates FPGA-based DNN accelerators with streaming architectures and advanced pipelining techniques, addressing the challenges arising from heterogeneous convolutions and residual connections. Deacon provides fine-grained pipelining, graph-level optimisation, and heuristic exploration by graph colouring. Compared with prior designs, Deacon shows resource/power consumption efficiency improvements of 1.2x/3.5x for MobileNets and 1.0x/2.8x for SqueezeNets. All these tools are open source, and some have already gained public engagement. We believe they can make efficient deep learning applications easier to build and deploy.
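
    As an illustration of the structured-pruning idea behind SPGC (a sketch, not the authors' code), the PyTorch snippet below shows how replacing a standard convolution with a group convolution shrinks the weight tensor by the group count, which is why pruning into GConv form avoids irregular sparsity; the layer sizes are arbitrary.

```python
import torch.nn as nn

in_ch, out_ch, groups = 64, 128, 4

standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
grouped = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups)

# A group convolution partitions channels into `groups` independent blocks,
# so its weight tensor is `groups` times smaller than the dense layer's:
print(standard.weight.numel())   # 128 * 64 * 3 * 3 = 73728
print(grouped.weight.numel())    # 128 * (64 // 4) * 3 * 3 = 18432

# Per the abstract, which channels land in which group is a permutation
# choice, so SPGC frames GConv pruning as a channel permutation problem
# and solves it with a heuristic polynomial-time algorithm.
```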

    Circuit Design, Architecture and CAD for RRAM-based FPGAs

    Get PDF
    Field-Programmable Gate Arrays (FPGAs) have been indispensable components of embedded systems and datacenter infrastructures. However, the energy efficiency of FPGAs has become a hard barrier preventing their expansion to more application contexts, due to two physical limitations: (1) the massive use of routing multiplexers causes delay and power overheads compared to ASICs. To reduce their power consumption, FPGAs have to operate at a low supply voltage but sacrifice performance, because transistor drive strength degrades as the operating voltage decreases. (2) The use of volatile memory technology forces FPGAs to lose their configuration when powered off and to be reconfigured at each power-on. Resistive Random Access Memories (RRAMs) have strong potential to overcome the physical limitations of conventional FPGAs. First of all, RRAMs grant FPGAs non-volatility, enabling FPGAs to be "normally powered off, instantly powered on". Second, by combining the functionality of memory and pass-gate logic in one unique device, RRAMs can greatly reduce the area and delay of routing elements. Third, when RRAMs are embedded into datapaths, circuit performance can be independent of the operating voltage, beyond the limitations of CMOS circuits. However, research and development of RRAM-based FPGAs are in their infancy. Most area and performance predictions have been made without solid circuit-level simulations or sophisticated Computer Aided Design (CAD) tools, making the predicted improvements less convincing. In this thesis, we present high-performance and low-power RRAM-based FPGAs, from transistor-level circuit designs to architecture-level optimizations and CAD tools, using theoretical analysis, industrial electrical simulators, and novel CAD tools. We believe that this is the first systematic study in the field, covering the circuit, CAD, and architecture levels. From a circuit design perspective, we propose efficient RRAM-based programming circuits and routing multiplexers through both theoretical analysis and electrical simulations. The proposed 4T(ransistor)1R(RAM) programming structure demonstrates significant improvements in programming current compared to the most popular 2T1R programming structure. 4T1R-based routing multiplexer designs are proposed by considering various physical design parasitics, such as the intrinsic capacitance of RRAMs and well doping organization. The proposed 4T1R-based multiplexers significantly outperform the best CMOS implementations in area, delay, and power in both the nominal and near-Vt regimes. From a CAD perspective, we develop a generic FPGA architecture exploration tool, FPGA-SPICE, which models a full FPGA fabric with SPICE and Verilog netlists. FPGA-SPICE provides different levels of testbenches and techniques to split large SPICE netlists, in order to obtain a better trade-off between simulation time and accuracy. FPGA-SPICE can capture the area and power characteristics of SRAM-based and RRAM-based FPGAs more accurately than the current best analytical models. From an architecture perspective, we propose architecture-level optimizations for RRAM-based FPGAs and quantify their minimum requirements for RRAM devices. Compared to the best SRAM-based FPGAs, an optimized RRAM-based FPGA architecture brings significant reductions in area, delay, and power. In particular, RRAM-based FPGAs operating in the near-Vt regime demonstrate a 5x power improvement without delay overhead compared to an optimized SRAM-based FPGA operating at the nominal voltage.
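
    To give one concrete flavour of the circuit-level comparisons described above (a hedged sketch, not the thesis's model or its measured numbers), the snippet below applies the standard Elmore approximation to a multiplexer path modelled as an RC ladder; every resistance and capacitance value is an invented placeholder, with the RRAM pass element merely assumed to present a lower on-resistance than a CMOS pass transistor.

```python
def elmore_delay(stages):
    """Elmore delay of an RC ladder: each resistance charges all downstream
    capacitance, so delay = sum over stages of R_i * (C_i + ... + C_last)."""
    delay = 0.0
    for i, (r, _) in enumerate(stages):
        delay += r * sum(c for _, c in stages[i:])
    return delay

# Invented placeholder values (ohms, farads) for a 2-stage routing mux path:
cmos_mux = [(2.0e3, 1.0e-15), (2.0e3, 5.0e-15)]   # pass-transistor stages
rram_mux = [(5.0e2, 1.0e-15), (5.0e2, 5.0e-15)]   # assumed lower on-resistance

print(f"CMOS mux Elmore delay: {elmore_delay(cmos_mux):.2e} s")
print(f"RRAM mux Elmore delay: {elmore_delay(rram_mux):.2e} s")
```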