
    Designs for increasing reliability while reducing energy and increasing lifetime

    In recent decades, computing technology has experienced tremendous development. Transistor feature sizes have halved roughly every two years, consistently since Moore first stated his law, so the transistor and core counts per chip double with each generation. Petascale systems, capable of more than one quadrillion (10^15) calculations per second, have been built, and exascale systems were predicted to arrive around the year 2020. However, these developments face a reliability wall. Transistor feature sizes are now so small that it becomes easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or from 0 to 1. Even if the fault rate per transistor stays constant with scaling, the growth in total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Circuit ageing is also exacerbated by increased manufacturing variability and thermal stress, so the lifetime of processor structures is becoming shorter. At the same time, the limited power budget of computer systems such as mobile devices makes it attractive to scale down the supply voltage; but when the voltage is scaled beyond the safe margin, especially to ultra-low levels, the error rate increases drastically. Furthermore, new memory technologies such as NAND flash offer only a limited nominal lifetime, beyond which they can no longer guarantee correct data storage, leading to data retention problems. Because of these issues, reliability has become a first-class design constraint for contemporary computing, alongside power and performance. It plays an increasingly important role as computer systems process sensitive and life-critical information such as health records, financial data, power regulation, and transportation. In this thesis, we present several reliability designs for detecting and correcting errors that occur, for various reasons, in processor pipelines, L1 caches and non-volatile NAND flash memories. Our reliability solutions serve three main purposes. First, we improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we increase the lifetime of new memory technologies by implementing efficient, low-cost reliability schemes.
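
    The bit flips described above are exactly what error-correcting codes are designed to catch. As a minimal sketch of the principle (an illustration in Python, not code from the thesis), the classic Hamming(7,4) code below locates and corrects any single flipped bit in a stored word:

        # Minimal Hamming(7,4) single-error correction: 4 data bits are
        # protected by 3 parity bits, and any one flipped bit in the
        # stored 7-bit word can be located and corrected.

        def hamming74_encode(d1, d2, d3, d4):
            """Encode 4 data bits into a 7-bit codeword (parity at 1, 2, 4)."""
            p1 = d1 ^ d2 ^ d4      # covers positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4      # covers positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4      # covers positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_decode(c):
            """Correct at most one flipped bit, then return the data bits."""
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed error position, 0 = clean
            if syndrome:
                c = list(c)
                c[syndrome - 1] ^= 1          # flip the faulty bit back
            return [c[2], c[4], c[5], c[6]]

        # A particle strike flips one bit of the stored word; decoding repairs it.
        word = hamming74_encode(1, 0, 1, 1)
        word[4] ^= 1                          # inject a single-bit fault
        assert hamming74_decode(word) == [1, 0, 1, 1]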

    Aggressive undervolting of FPGAs: power & reliability trade-offs

    In this work, we evaluate aggressive undervolting, i.e., voltage underscaling below the nominal level, to reduce the energy consumption of Field Programmable Gate Arrays (FPGAs). Chip vendors usually add voltage guardbands to cover worst-case process and environmental scenarios. Through experiments on several FPGA architectures, we confirm a large voltage guardband for several FPGA components, which in turn delivers significant power savings. However, further undervolting below the guardband increases circuit delay and can cause reliability issues: faults begin to appear. We extensively characterize the behavior of these faults in terms of rate, location, type, and sensitivity to environmental temperature, focusing primarily on FPGA on-chip memories, or Block RAMs (BRAMs). Understanding this behavior enables efficient mitigation techniques, which in turn allows FPGA-based designs to achieve better energy, reliability, and performance trade-offs. Finally, as a case study, we evaluate a typical FPGA-based Neural Network (NN) accelerator with the FPGA voltage underscaled. The substantial NN energy savings come at the cost of NN accuracy loss. To attain power savings below the voltage guardband without NN accuracy loss, we propose an application-aware technique and also evaluate the built-in Error-Correcting Code (ECC) mechanism. First, we develop an application-dependent BRAM placement technique that relies on the deterministic behavior of undervolting faults and mitigates them by mapping the most reliability-sensitive NN parameters to the BRAM blocks that are relatively more resistant to undervolting faults. Second, as a more general technique, we apply the built-in ECC of BRAMs and observe significant fault coverage, thanks to the behavior of undervolting faults, with negligible power consumption overhead.
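
    As a rough illustration of the application-aware placement idea (a toy Python simulation, not the authors' implementation; the per-block fault rates and the magnitude-based sensitivity proxy are invented here for illustration), sorting parameters by sensitivity and blocks by measured fault rate shows why pairing the most critical parameters with the most undervolting-resistant BRAMs reduces the expected impact of faults:

        import random

        random.seed(1)
        N_BLOCKS, WORDS_PER_BLOCK = 8, 64

        # Hypothetical per-block bit-fault probabilities at an undervolted
        # operating point; since undervolting faults were observed to be
        # deterministic in location, a one-off characterization is reusable.
        fault_rate = [random.uniform(0.0, 0.05) for _ in range(N_BLOCKS)]

        # Toy stand-in for NN parameters; |w| serves as a crude proxy for
        # how much a fault in that parameter would hurt accuracy.
        weights = [random.gauss(0.0, 1.0) for _ in range(N_BLOCKS * WORDS_PER_BLOCK)]

        def expected_damage(ws, block_order):
            """Sum of fault_rate * |w| when ws fill blocks in the given order."""
            return sum(fault_rate[block_order[i // WORDS_PER_BLOCK]] * abs(w)
                       for i, w in enumerate(ws))

        naive = expected_damage(weights, list(range(N_BLOCKS)))

        # Application-aware placement: the most sensitive parameters go into
        # the blocks with the lowest measured fault rates.
        safe_first = sorted(range(N_BLOCKS), key=lambda b: fault_rate[b])
        aware = expected_damage(sorted(weights, key=abs, reverse=True), safe_first)

        print(f"naive placement: {naive:.2f}, aware placement: {aware:.2f}")

    The aware placement always scores at or below the naive one (the rearrangement inequality pairs the largest sensitivities with the smallest fault rates); the built-in BRAM ECC evaluated in the work then covers residual faults regardless of placement.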

    Integrated Circuits/Microchips

    With the world marching inexorably towards the fourth industrial revolution (IR 4.0), we now live with artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR) and 5G technology. Wherever we are and whatever we are doing, there are electronic devices on which we rely indispensably. While some of these technologies, such as those built on smart, autonomous systems, are relatively new, others have existed for quite a while. These devices range from simple home appliances and entertainment media to complex aeronautical instruments. Clearly, our daily lives are seamlessly interwoven with electronics. Surprising as it may seem, the cornerstone that empowers these electronic devices is a diminutive semiconductor block. More colloquially referred to as a Very-Large-Scale Integration (VLSI) chip, an integrated circuit (IC) chip, or simply a microchip, this semiconductor block, approximately the size of a grain of rice, is composed of millions to billions of transistors. The transistors are interconnected in such a way that the electrical circuitry for a given application is realized. Some of these chips serve specific, fixed applications and are known as Application-Specific Integrated Circuits (ASICs), while others are computing processors that can be programmed for diverse applications. A computer processor, together with its supporting hardware and user interfaces, is known as an embedded system. In this book, a variety of topics related to microchips are extensively illustrated, encompassing the physics of the microchip device as well as its design methods and applications.

    Proceedings, MSVSCC 2015

    The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation, & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students from undergraduate and graduate degree programs in Modeling and Simulation and related fields at many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford the opportunity to share their innovative research with members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to directly support their students' research, facilitate the conference tracks, serve as judges for each track, and provide overall assistance to the conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year's conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works presented. The conference had record attendance thanks to support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciate all of the work and energy that went into this year's conference; it was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics on the overall conference contributions, followed by a table of contents organized by conference track, with a copy of each included body of work. Thank you again for your time and your contribution, as this conference is designed to continuously evolve and adapt to better suit the authors and M&S supporters. Dr. Yuzhong Shen, Graduate Program Director, MSVE, Capstone Conference Chair; John Shull, Graduate Student, MSVE, Capstone Conference Student Chair