
    Design of Special Function Units in Modern Microprocessors

    Today’s computing systems demand high performance for applications such as cloud computing, web-based search engines, network applications, and social media tasks. Such applications make extensive use of hashing and arithmetic operations in their computation. In this thesis, we explore the use of new special function units (SFUs) in modern microprocessors to accelerate such workloads.

    First, we design an SFU for hashing. Hashing can reduce the complexity of search and lookup from O(p) to O(p/n), where n bins are used and p items are being processed. In modern microprocessors, hashing is done in software. In our work, we propose a novel hardware hash unit design for use in modern microprocessors. Because the hash unit is implemented in hardware, our approach offers several advantages. First, a hardware hash unit performs a hash operation with a single hash instruction, whereas software hashing compiles a hash operation into multiple instructions, degrading performance. Second, software hashing stores hash data in DRAM (hash entries may also reside in one of the cache levels), while a hardware hash unit stores hash data in a dedicated memory module (a hardware hash table), which improves performance. Third, today’s operating systems execute multiple applications (processes) in parallel, which entails high memory utilization; the resulting frequent context switches between processes cause many cache misses. With a hardware hash unit, cache misses are reduced significantly because hash data resides in the dedicated memory module (hash table). Together, these advantages reduce power consumption and increase overall system performance significantly, with a minimal increase in the microprocessor’s die area. We evaluate our hardware hash unit and compare its performance with software hashing. We first evaluate our design at the micro-architecture level in terms of system performance. We then design our approach at the circuit level to obtain the area overhead, and we analyze our design’s power and delay for each hash operation. These results are compared with a traditional hashing implementation. Finally, we present an FPGA-based coprocessor for hash unit acceleration, applied to a virus-checking application.

    Second, we present an SFU to speed up arithmetic operations, which we call a programmable arithmetic unit (PAU). In modern microprocessors, applications that require heavy arithmetic computation are executed in software. To improve performance for such computations, the PAU provides a partially reconfigurable methodology for arithmetic applications. The PAU consists of a set of IP blocks connected to a reconfigurable FPGA controller via a fast mesh-based interconnect. The IP blocks can be any functional blocks, such as adders, subtractors, multipliers, comparators, and sign-extension units, and the PAU can contain one or more copies of the same IP block (for example, 5 adders and 7 multipliers). The FPGA controller is an on-chip FPGA-based reconfigurable control fabric that is programmed per application, enabling different arithmetic applications to be embedded on the PAU; its reconfigurable logic is LUT-based, like a traditional FPGA. The FPGA controller and the IP blocks in the PAU communicate via a high-speed ring data fabric. In our work, we use the PAU as an SFU in modern microprocessors and compare the performance of different hardware-based arithmetic applications on the PAU with software-based implementations in modern microprocessors.
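    To make the abstract's O(p) to O(p/n) claim concrete, here is a minimal software sketch of bucketized hashing (our own illustration, not the thesis's hardware design): with n bins, a lookup scans only about p/n candidates instead of all p items. The modulo hash and the constants are assumptions chosen for exposition.

    // Minimal sketch of the O(p) -> O(p/n) claim: p items spread across
    // n bins, so a lookup scans only ~p/n candidates instead of p.
    // Purely illustrative; the thesis implements this in hardware.
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct HashTable {
        std::vector<std::vector<uint64_t>> bins;  // n bins of items

        explicit HashTable(std::size_t n) : bins(n) {}

        void insert(uint64_t key) {
            bins[key % bins.size()].push_back(key);  // assumed hash: key mod n
        }

        bool lookup(uint64_t key) const {
            // Only the ~p/n items in one bin are scanned, not all p items.
            for (uint64_t k : bins[key % bins.size()])
                if (k == key) return true;
            return false;
        }
    };

    int main() {
        HashTable table(1024);                 // n = 1024 bins
        for (uint64_t i = 0; i < 100000; ++i)  // p = 100000 items
            table.insert(i * 2654435761u);     // multiplicative constant spreads keys
        std::cout << table.lookup(2654435761u) << "\n";  // prints 1 (found)
    }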

    Towards Terabit Carrier Ethernet and Energy Efficient Optical Transport Networks


    DRAM Bender: An Extensible and Versatile FPGA-based Infrastructure to Easily Test State-of-the-art DRAM Chips

    To understand and improve DRAM performance, reliability, security, and energy efficiency, prior works study the characteristics of commodity DRAM chips. Unfortunately, state-of-the-art open source infrastructures capable of conducting such studies are obsolete, poorly supported, or difficult to use, or their inflexibility limits the types of studies they can conduct. We propose DRAM Bender, a new FPGA-based infrastructure that enables experimental studies on state-of-the-art DRAM chips. DRAM Bender offers three key features at the same time. First, DRAM Bender enables directly interfacing with a DRAM chip through its low-level interface. This allows users to issue DRAM commands in arbitrary order and with finer-grained time intervals compared to other open source infrastructures. Second, DRAM Bender exposes easy-to-use C++ and Python programming interfaces, allowing users to quickly and easily develop different types of DRAM experiments. Third, DRAM Bender is easily extensible. The modular design of DRAM Bender allows extending it to (i) support existing and emerging DRAM interfaces, and (ii) run on new commercial or custom FPGA boards with little effort. To demonstrate that DRAM Bender is a versatile infrastructure, we conduct three case studies, two of which lead to new observations about the DRAM RowHammer vulnerability. In particular, we show that the data patterns supported by DRAM Bender uncover a larger set of bit-flips on a victim row compared to the data patterns commonly used by prior work. We demonstrate the extensibility of DRAM Bender by implementing it on five different FPGAs with DDR4 and DDR3 support. DRAM Bender is freely and openly available at https://github.com/CMU-SAFARI/DRAM-Bender.
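    DRAM Bender's real C++ and Python interfaces are documented in the repository above; the sketch below is only a hypothetical illustration of what "issuing DRAM commands in arbitrary order with fine-grained timing" can look like. Every name in it (DramProgram, act, rd, pre, the wait field) is invented for exposition and is not DRAM Bender's actual API.

    // Hypothetical sketch of low-level DRAM command sequencing, in the
    // spirit of what an infrastructure like DRAM Bender enables. All names
    // here are invented for illustration and are NOT DRAM Bender's API.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum class Cmd : uint8_t { ACT, RD, WR, PRE, NOP };

    struct DramOp {
        Cmd cmd;
        uint32_t bank, row, col;
        uint32_t wait;  // idle cycles after the command (fine-grained timing)
    };

    struct DramProgram {
        std::vector<DramOp> ops;
        void act(uint32_t bank, uint32_t row, uint32_t wait) {
            ops.push_back({Cmd::ACT, bank, row, 0, wait});
        }
        void rd(uint32_t bank, uint32_t col, uint32_t wait) {
            ops.push_back({Cmd::RD, bank, 0, col, wait});
        }
        void pre(uint32_t bank, uint32_t wait) {
            ops.push_back({Cmd::PRE, bank, 0, 0, wait});
        }
    };

    int main() {
        // Deliberately violate nominal timing: activate a row, then read it
        // after fewer cycles than the datasheet tRCD allows -- the kind of
        // experiment a low-level interface permits and a normal memory
        // controller forbids.
        DramProgram p;
        p.act(/*bank=*/0, /*row=*/42, /*wait=*/2);  // below nominal tRCD
        p.rd(/*bank=*/0, /*col=*/0, /*wait=*/4);
        p.pre(/*bank=*/0, /*wait=*/6);
        std::cout << "program has " << p.ops.size() << " commands\n";
    }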

    A Study on Hardware Systems for Bitmap-Index-Based Data Analytics

    Recent years have witnessed massive growth in global data generated from web services, social media networks, and science experiments, as well as the “tsunami” of Internet-of-Things devices. According to a Cisco forecast, total data center traffic is projected to hit 15.3 zettabytes (ZB) by the end of 2020. Gaining insight into this vast amount of data is highly important because valuable data are the driving force for business decisions and processes, as well as for scientists’ exploration and discovery.

    To facilitate analytics, data are usually indexed in advance. Depending on the workload, such as online transaction processing (OLTP) or online analytics processing (OLAP), several indexing frameworks have been proposed. Specifically, B+-tree and hash are two common indexing methods in OLTP, where query and update rates are nearly similar. Unlike OLTP, OLAP concentrates on querying a huge historical store, where updates are irregular. Most queries in OLAP are also highly complex and involve aggregations, while execution time is often limited. To address these challenges, the bitmap index (BI) was proposed and has proven to be a promising candidate for OLAP-like workloads.

    A BI is a bit-level matrix whose numbers of rows and columns are the length and cardinality of the dataset, respectively. With a BI, answering a multi-dimensional query becomes a series of bitwise operations, e.g., AND, OR, XOR, and NOT, on bit columns. As a result, BIs have proven profitable for solving complex queries in large enterprise and scientific databases. More significantly, because it relies on simple logical operators, a BI is well suited to advanced parallel-processing platforms such as multi-core CPUs, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).

    Modern FPGAs and ASICs have become increasingly important in data analytics because they can handle both data-intensive and compute-intensive tasks effectively, and they provide higher energy efficiency than CPUs and GPUs. Since 2010, Microsoft has been working on the Catapult project, in which FPGAs were integrated into data center servers to accelerate its search engine and AI applications. In 2016, Oracle for the first time introduced the SPARC S7 and M7 processors, which accelerate OLTP databases. Nonetheless, the feasibility of BI-based analytics systems using FPGAs and ASICs had not yet been studied.

    This dissertation therefore focuses on implementing data analytics systems, in both FPGAs and ASICs, using BIs. The advantages of the proposed systems include scalability, low data input/output cost, high processing throughput, and high energy efficiency. Three main modules are proposed: (1) a BI creator that indexes the given records by a list of keys and outputs the BI vectors to external memory; (2) a BI-based query processor that employs the given BI vectors to answer users’ queries and outputs the results to external memory; and (3) a BI encoder that returns the positions of the one-bits of the bitmap results to external memory. Six hardware systems based on these three modules are implemented, first in an FPGA for functional verification and then partially in two ASICs (180-nm bulk complementary metal-oxide-semiconductor (CMOS) and 65-nm Silicon-On-Thin-Buried-Oxide (SOTB) CMOS technologies) for physical design verification. Based on the experimental results, the proposed systems outperform other CPU-based and GPU-based designs, especially in terms of energy efficiency.
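    As a minimal illustration of how a BI turns a multi-dimensional query into bitwise operations (our own software sketch, not the dissertation's hardware modules): each attribute value owns one bit column packed into 64-bit words, a conjunctive query is a word-wise AND, and the final position-reporting pass mirrors the role of the BI encoder described above. All names are illustrative assumptions.

    // Minimal bitmap-index sketch: one bit column per attribute value,
    // packed into 64-bit words; a conjunctive query is a word-wise AND.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    using BitColumn = std::vector<uint64_t>;  // row i -> bit (i % 64) of word i / 64

    void set_row(BitColumn& col, std::size_t row) {
        if (col.size() <= row / 64) col.resize(row / 64 + 1, 0);
        col[row / 64] |= uint64_t(1) << (row % 64);
    }

    // Answer "attr1 = a AND attr2 = b" as a single pass of bitwise ANDs.
    BitColumn query_and(const BitColumn& a, const BitColumn& b) {
        std::size_t n = std::min(a.size(), b.size());
        BitColumn out(n);
        for (std::size_t i = 0; i < n; ++i) out[i] = a[i] & b[i];
        return out;
    }

    // Analogue of the "BI encoder" step: report positions of one-bits.
    void print_positions(const BitColumn& col) {
        for (std::size_t w = 0; w < col.size(); ++w)
            for (int b = 0; b < 64; ++b)
                if (col[w] >> b & 1) std::cout << w * 64 + b << " ";
        std::cout << "\n";
    }

    int main() {
        BitColumn color_red, size_large;          // two hypothetical attributes
        set_row(color_red, 3);  set_row(color_red, 70);
        set_row(size_large, 3); set_row(size_large, 9);
        print_positions(query_and(color_red, size_large));  // prints: 3
    }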

    A real-time deep learning OFDM receiver


    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price, as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache-overflow attacks. In this paper we argue that instead of enforcing the same universal flow-cache semantics on all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade even with complex OpenFlow pipelines.
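    To illustrate the flow-caching fast path that this paper argues against (a sketch of the general technique, not of ESwitch's template-generated code): header fields are hashed into an exact-match cache, and a miss falls back to slow, full-pipeline classification, which is also why an adversary generating many distinct flows can overflow the cache. The structure and function names are our own assumptions.

    // Sketch of exact-match flow caching, the prevalent OpenFlow
    // software-switch fast path. Names and the stubbed classifier are
    // illustrative assumptions, not ESwitch's generated machine code.
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <unordered_map>

    struct FiveTuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
        bool operator==(const FiveTuple& o) const {
            return src_ip == o.src_ip && dst_ip == o.dst_ip &&
                   src_port == o.src_port && dst_port == o.dst_port &&
                   proto == o.proto;
        }
    };

    struct TupleHash {
        std::size_t operator()(const FiveTuple& t) const {
            std::size_t h = std::hash<uint32_t>()(t.src_ip);
            h = h * 31 + std::hash<uint32_t>()(t.dst_ip);
            h = h * 31 + t.src_port;
            h = h * 31 + t.dst_port;
            return h * 31 + t.proto;
        }
    };

    using Action = int;  // e.g. an output port; simplified

    // Slow path: full multi-table OpenFlow classification (stubbed here).
    Action classify_slow(const FiveTuple&) { return 1; }

    std::unordered_map<FiveTuple, Action, TupleHash> flow_cache;

    Action fast_path(const FiveTuple& pkt) {
        auto it = flow_cache.find(pkt);    // exact-match cache lookup
        if (it != flow_cache.end()) return it->second;
        Action a = classify_slow(pkt);     // miss: expensive classification
        flow_cache.emplace(pkt, a);        // cache the per-flow decision;
        return a;                          // many distinct flows -> overflow
    }

    int main() {
        FiveTuple t{0x0a000001, 0x0a000002, 1234, 80, 6};
        std::cout << fast_path(t) << " " << fast_path(t) << "\n";  // miss, then hit
    }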