14 research outputs found

    Exploring Hardware Fault Impacts on Different Real Number Representations of the Structural Resilience of TCUs in GPUs

    The most recent generations of graphics processing units (GPUs) boost the execution of the convolutional operations required by machine learning applications by resorting to specialized, efficient in-chip accelerators (Tensor Core Units, or TCUs) that operate on matrix multiplication tiles. Unfortunately, modern cutting-edge semiconductor technologies are increasingly prone to hardware defects, and the trend of heavily stressing TCUs during the execution of safety-critical and high-performance computing (HPC) applications increases the likelihood of TCUs producing different kinds of failures. In fact, the intrinsic resilience of arithmetic units to hardware faults plays a crucial role in safety-critical applications using GPUs (e.g., in automotive, space, and autonomous robotics). Recently, new arithmetic formats have been proposed, particularly ones suited to neural network execution. However, a reliability characterization of TCUs supporting different arithmetic formats has been lacking. In this work, we quantitatively assess the impact of hardware faults in TCU structures while employing two distinct formats (floating-point and posit) and two different configurations (16 and 32 bits) to represent real numbers. For the experimental evaluation, we resorted to an architectural description of a TCU core (PyOpenTCU) and performed 120 fault simulation campaigns, injecting around 200,000 faults per campaign and requiring around 32 days of computation. Our results demonstrate that TCUs using the posit format are less affected by faults than those using floating-point (by up to three orders of magnitude for 16 bits and up to twenty orders for 32 bits). We also identify the most sensitive fault locations (i.e., those that produce the largest errors), thus paving the way for smart hardening solutions.
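The fault-location sensitivity described above can be illustrated with a minimal bit-flip injection sketch. This is not the paper's PyOpenTCU setup; the value and bit positions are illustrative, and IEEE-754 binary32 stands in for the TCU's floating-point datapath:

```python
import math
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 binary32 encoding flipped."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (faulty,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return faulty

golden = 1.5
for bit in (0, 23, 30):  # mantissa LSB, exponent LSB, exponent MSB
    faulty = flip_bit(golden, bit)
    print(f"bit {bit:2d}: faulty value = {faulty}")
# A mantissa-LSB flip perturbs 1.5 by ~1e-7, an exponent-LSB flip yields
# 0.75, and an exponent-MSB flip yields NaN: the same single-bit fault
# ranges from negligible to catastrophic depending on its location.
```

Repeating such injections over every bit of every operand (and, in the paper, over the TCU's internal structures) is what makes some fault locations measurably more critical than others.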

    The OpenDC Microservice Simulator: Design, Implementation, and Experimentation

    Microservices is an architectural style that structures an application as a collection of loosely coupled services, making it easy for developers to build and scale their applications. The microservices approach differs from the traditional monolithic style of treating software development as a single entity, and it is being adopted more and more widely. However, microservice systems can be complex due to dependencies between the microservices, resulting in unpredictable performance at large scale. Simulation is a cheap and fast way to investigate the performance of microservices in more detail. This study aims to build a microservices simulator for evaluating and comparing microservices-based applications. A microservices reference architecture is designed and used as the basis for the simulator, whose implementation uses statistical models to generate the workload. The compelling features added to the simulator include concurrent execution of microservices, configurable request depth, three load-balancing policies, and four request execution order policies. This paper contains two experiments that show the simulator's usage. The first experiment covers request execution order policies at the microservice instance. The second experiment compares load-balancing policies across microservice instances. (Bachelor's thesis)
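The simplest of the load-balancing policies such a simulator might offer can be sketched in a few lines; round-robin rotation is one plausible instance (the abstract does not enumerate its three policies, and the instance names below are hypothetical):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Route each incoming request to the next instance in rotation."""

    def __init__(self, instances):
        self._ring = cycle(instances)

    def route(self, request):
        # The request payload is ignored: round-robin is stateless
        # with respect to request content.
        return next(self._ring)

lb = RoundRobinBalancer(["svc-a-0", "svc-a-1", "svc-a-2"])
routed = [lb.route(f"req-{i}") for i in range(5)]
print(routed)  # ['svc-a-0', 'svc-a-1', 'svc-a-2', 'svc-a-0', 'svc-a-1']
```

Comparing such policies under a statistically generated workload, as the experiments do, then amounts to swapping the `route` strategy while holding the workload fixed.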

    A Novel Optimization for GPU Mining Using Overclocking and Undervolting

    Cryptography and associated technologies have existed for a long time, and the field is advancing at a remarkable speed. Since the inception of its initial application, blockchain has come a long way. Bitcoin, which debuted in 2008, is a cryptocurrency based on blockchain, also known as distributed ledger technology (DLT), and is the most well-known cryptocurrency for everyday use. Its success ushered in a digital revolution, and it currently provides security, decentralization, and a reliable data transport and storage mechanism to various industries and companies. Governments and developing enterprises seeking a competitive edge have expressed interest in Bitcoin and other cryptocurrencies due to the rapid growth of this recent technology. For computer experts and individuals looking for a way to supplement their income, cryptocurrency mining has become a major preoccupation. Mining is a way of resolving mathematical problems, based on the processing capacity and speed of the computers employed to solve them, in return for digital currency incentives. Herein, we illustrate the benefits of utilizing GPUs (graphics processing units) for cryptocurrency mining and compare two methods, overclocking and undervolting, which are the superior techniques for GPU optimization. The techniques used in this paper not only help miners gain profits while mining cryptocurrency but also address a major flaw: to mitigate the energy and resources consumed by the mining hardware, we configure the hardware to run longer while consuming much less electricity. We also compare our techniques with other popular existing techniques for GPU mining.
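The trade-off the paper targets, hashrate against power draw, is commonly summarized as hashes per watt. A minimal sketch (the figures below are hypothetical placeholders, not measurements from the paper):

```python
def efficiency_mh_per_w(hashrate_mh_s: float, power_w: float) -> float:
    """Mining efficiency: megahashes per second produced per watt drawn."""
    return hashrate_mh_s / power_w

stock = efficiency_mh_per_w(60.0, 220.0)        # hypothetical stock settings
undervolted = efficiency_mh_per_w(58.0, 150.0)  # hypothetical undervolted run
print(f"stock: {stock:.3f} MH/s/W, undervolted: {undervolted:.3f} MH/s/W")
```

The point of undervolting is visible even in toy numbers: a small hashrate loss can still be a large efficiency win when power consumption drops disproportionately, which also lowers temperatures and lets the hardware run longer.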

    Improving Performance and Endurance for Crossbar Resistive Memory

    Resistive memory (ReRAM) has emerged as a promising non-volatile memory technology that may replace a significant portion of DRAM in future computer systems. When adopting a crossbar architecture, a ReRAM cell can achieve the smallest theoretical size in fabrication, which is ideal for constructing dense memory with large capacity. However, the crossbar cell structure suffers from severe performance and endurance degradation, which comes from large voltage drops on long wires. In this dissertation, I first study the correlation between ReRAM cell switching latency and the number of cells in the low resistance state (LRS) along bitlines, and propose to dynamically speed up write operations based on bitline data patterns. By leveraging the intrinsic in-memory processing capability of ReRAM crossbars, a low-overhead runtime profiler that effectively tracks the data patterns in different bitlines is proposed. To achieve further write latency reduction, data compression and a row-address-dependent memory data layout are employed to reduce the number of LRS cells on bitlines. Moreover, two optimization techniques are presented to mitigate the energy overhead brought by bitline data pattern tracking. Second, I propose XWL, a novel table-based wear leveling scheme for ReRAM crossbars, and study the correlation between write endurance and voltage stress in ReRAM crossbars. By estimating and tracking the effective write stress to different rows at runtime, XWL chooses the most stressed rows for mitigation. Additionally, two extended scenarios are examined for the performance and endurance issues in neural network accelerators as well as 3D vertical ReRAM (3D-VRAM) arrays. For ReRAM crossbar-based accelerators, by exploiting the wear-out mechanism of the ReRAM cell, a novel comprehensive framework, ReNEW, is proposed to enhance the lifetime of ReRAM crossbar-based accelerators, particularly for neural network training. To reduce the write latency in 3D-VRAM arrays, a collection of techniques is devised, including an in-memory data encoding scheme, a data pattern estimator for assessing cell resistance distributions, and a write time reduction scheme that opportunistically reduces RESET latency with runtime data patterns.
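The core step of a table-based wear-leveling scheme like XWL can be sketched as follows: track per-row effective write stress and pair the hottest row with the coldest for remapping. This is a simplification; the stress values and the swap policy below are illustrative, not the dissertation's actual algorithm:

```python
def pick_swap(stress: dict) -> tuple:
    """Choose the most- and least-stressed rows from a per-row stress table."""
    hottest = max(stress, key=stress.get)
    coldest = min(stress, key=stress.get)
    return hottest, coldest

# Hypothetical per-row effective write-stress counters.
stress_table = {0: 120, 1: 5, 2: 300, 3: 42}
hot, cold = pick_swap(stress_table)
print(f"remap row {hot} <-> row {cold}")  # remap row 2 <-> row 1
```

Estimating the stress counters accurately is the hard part in a crossbar, since (as the abstract notes) the effective voltage stress a row sees depends on position and data patterns, not just on raw write counts.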

    GPU accelerating distributed succinct de Bruijn graph construction

    Research and methods in the field of computational biology have grown in the last decades, thanks to the availability of biological data. One of the applications in computational biology is genome sequencing, or sequence alignment: a method of arranging sequences of, for example, DNA or RNA to determine regions of similarity between them. Sequence alignment applications include public health purposes, such as monitoring antimicrobial resistance. Demand for fast sequence alignment has led to the use of data structures, such as the de Bruijn graph, to store a large amount of information efficiently. De Bruijn graphs are currently one of the top data structures used in indexing genome sequences, and different methods to represent them have been explored. One of these methods is the BOSS data structure, a special case of the Wheeler graph index, which uses succinct data structures to represent a de Bruijn graph. As genomes can take a large amount of space, the construction of succinct de Bruijn graphs is slow. This has led to experimental research on using large-scale cluster engines such as Apache Spark and graphics processing units (GPUs) in genome data processing. This thesis explores the use of Apache Spark and Spark RAPIDS, a GPU computing library for Apache Spark, in the construction of a succinct de Bruijn graph index from genome sequences. The experimental results indicate that Spark RAPIDS can provide speedups of up to 8x for specific operations, but for some other operations it has severe limitations that restrict its usefulness for succinct de Bruijn graph index construction.
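The de Bruijn graph underlying BOSS can be sketched in a few lines: each k-mer of the input contributes an edge from its (k-1)-mer prefix to its (k-1)-mer suffix. This is the plain, non-succinct construction, shown only to fix terminology; it is unrelated to the Spark RAPIDS pipeline itself:

```python
def de_bruijn_edges(seq: str, k: int) -> list:
    """List the de Bruijn graph edges (prefix node, suffix node), one per k-mer."""
    return [(seq[i : i + k - 1], seq[i + 1 : i + k])
            for i in range(len(seq) - k + 1)]

print(de_bruijn_edges("ACGTAC", 3))
# [('AC', 'CG'), ('CG', 'GT'), ('GT', 'TA'), ('TA', 'AC')]
```

Succinct representations such as BOSS encode exactly this edge set in bitvectors close to the information-theoretic minimum, which is why building them for genome-scale inputs is the expensive step the thesis attacks with Spark and GPUs.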

    Harnessing Artificial Intelligence Capabilities Through Cloud Services: a Case Study of Inhibitors and Success Factors

    Industry and research have recognized the need to adopt and utilize artificial intelligence (AI) to automate and streamline business processes to gain competitive edges. However, developing and running AI algorithms requires a complex IT infrastructure, significant computing power, and sufficient IT expertise, making it unattainable for many organizations. Organizations attempting to build AI solutions in-house often opt to establish an AI center of excellence, accumulating huge costs and an extremely long time to value. Fortunately, this deterrent is eliminated by the availability of AI delivered through cloud computing services. The cloud deployment models Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) provide various AI services. IaaS delivers virtualized computing resources over the internet and supplies the raw computational power and specialized hardware for building and training AI algorithms. PaaS provides development tools and running environments that assist data scientists and developers in implementing code to bring out AI capabilities. Finally, SaaS offers off-the-shelf AI tools and pre-trained models provided to customers on a commercial basis. Due to the lack of customizability and control of pre-built AI solutions, this empirical investigation focuses solely on IaaS- and PaaS-related AI services. The rationale is associated with the complexity of developing, managing, and maintaining customized cloud infrastructures and AI solutions that meet a business's actual needs.
    By applying the Diffusion of Innovation (DOI) theory and the Critical Success Factor (CSF) method, this research explores and identifies the drivers and inhibitors of AI services adoption and the critical success factors for harnessing AI capabilities through cloud services. Based on a comprehensive review of the existing literature and a series of nine systematic interviews, this study reveals ten factors that drive, and 17 factors that inhibit, the adoption of AI developer tools and infrastructure services. To further aid practitioners and researchers in mitigating the challenges of harnessing AI capabilities, this study identifies four affinity groups of success factors: 1) organizational factors, 2) cloud management factors, 3) technical factors, and 4) the technology commercialization process. Within these categories, nine sub-affinity groups and 20 sets of CSFs are presented.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book deals with the different reliability challenges across levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Operating policies for energy efficient large scale computing

    PhD Thesis. Energy costs now dominate IT infrastructure total cost of ownership, with datacentre operators predicted to spend more on energy than on hardware infrastructure in the next five years. With Western European datacentre power consumption estimated at 56 TWh/year in 2007 and projected to double by 2020, improvement in the energy efficiency of IT operations is imperative. The issue is further compounded by social and political factors and by strict environmental legislation governing organisations. One example of such large IT systems is high-throughput cycle-stealing distributed systems such as HTCondor and BOINC, which allow organisations to leverage spare capacity on existing infrastructure to undertake valuable computation. As a consequence of increased scrutiny of the energy impact of these systems, aggressive power management policies are often employed to reduce the energy impact of institutional clusters, but in doing so these policies severely restrict the computational resources available to high-throughput systems. These policies are often configured to quickly transition servers and end-user cluster machines into low-power states after only short idle periods, further compounding the issue of reliability. In this thesis, we evaluate operating policies for energy efficiency in large-scale computing environments by means of trace-driven discrete event simulation, leveraging real-world workload traces collected within Newcastle University. The major contributions of this thesis are as follows: i) evaluation of novel energy-efficient management policies for a decentralised peer-to-peer (P2P) BitTorrent environment; ii) introduction of a novel simulation environment for evaluating the energy efficiency of large-scale high-throughput computing systems, together with a generalisable model of energy consumption in high-throughput computing systems;
    iii) proposal and evaluation of resource allocation strategies for energy consumption in high-throughput computing systems for a real workload; iv) proposal and evaluation, for a real workload, of mechanisms to reduce wasted task execution within high-throughput computing systems and thereby reduce energy consumption; v) evaluation of the impact of fault tolerance mechanisms on energy consumption.
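The idle-timeout power policies the thesis evaluates can be sketched with a simple per-machine energy model. The power figures and timeout values below are hypothetical, not Newcastle trace data:

```python
def daily_energy_kwh(busy_h: float, idle_h: float, timeout_h: float,
                     p_busy_w: float = 150.0, p_idle_w: float = 60.0,
                     p_sleep_w: float = 5.0) -> float:
    """Energy for one machine-day under an idle-timeout power-down policy.

    Idle time up to the timeout is spent at full idle power; the remaining
    idle time is spent in a low-power sleep state.
    """
    idle_awake = min(idle_h, timeout_h)
    asleep = idle_h - idle_awake
    wh = busy_h * p_busy_w + idle_awake * p_idle_w + asleep * p_sleep_w
    return wh / 1000.0

aggressive = daily_energy_kwh(busy_h=8, idle_h=16, timeout_h=1)
lax = daily_energy_kwh(busy_h=8, idle_h=16, timeout_h=8)
print(f"aggressive timeout: {aggressive} kWh, lax timeout: {lax} kWh")
```

The aggressive timeout saves energy, but, as the thesis observes, at the cost of sleeping machines being unavailable to cycle-stealing workloads such as HTCondor; the operating policies under evaluation are precisely those balancing the two.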

    2019/2020 University of the Pacific Stockton General Catalog
