12 research outputs found

    Thermoelectric Cooling to Survive Commodity DRAMs in Harsh Environment Automotive Electronics

    Today, more and more commodity hardware devices are used in safety-critical applications, such as advanced driver assistance systems in the automotive domain. These applications demand very high reliability of electronic components even in adverse environmental conditions, such as high temperatures. Ensuring the reliability of microelectronic components at these high temperatures is a major challenge. The computing systems of these applications rely on DRAMs as working memory, which are built upon bit cells that store charge in capacitors. These commodity DRAMs are optimized for cost per bit and not for high reliability. Thus, very high temperatures pose an enormous challenge for commodity DRAMs, as the data retention time and reliability decrease significantly, affecting data correctness. Data correctness can be ensured up to certain temperatures by increasing the refresh rate to counterbalance the retention time reduction. However, this severely degrades the access latencies and the usable DRAM bandwidth. To overcome these limitations, we present for the first time a Thermoelectric Cooling (TEC) solution for commodity DRAMs in harsh environments, such as automotive. Our TEC solution enables the use of commodity off-the-shelf DRAMs in safety-critical applications by reducing the temperature to a range in which they can operate reliably. This TEC solution is applied a posteriori to the DRAM chips without using high-cost package solutions. Thus, it maintains the low-cost targets of such devices, improves the reliability, and at the same time counterbalances the adverse effects of increasing the refresh rate. To quantitatively evaluate the benefits of TEC on commodity DRAMs in harsh environments, we performed system-level evaluations with several applications, backed by measured data on commodity DRAMs. Our experimental results, using accurate multi-physics simulations that employ the finite element method, demonstrate that the TEC-based cooling ensures that the maxim..
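    As a quick intuition for the refresh-rate penalty described above, the sketch below scales the refresh interval with temperature under the commonly used rule of thumb that DRAM retention roughly halves for every ~10 °C rise; the 7.8 µs value is the typical DDR3/DDR4 nominal tREFI, while the halving step and the helper itself are illustrative assumptions rather than values from the paper.

    // Illustrative only: refresh interval needed to track a retention time that
    // is assumed to halve for every ~10 degC above an 85 degC reference point.
    #include <cmath>
    #include <cstdio>

    constexpr double kNominalRefreshUs = 7.8;   // us, typical DDR3/DDR4 tREFI
    constexpr double kReferenceTempC   = 85.0;  // degC reference point
    constexpr double kHalvingStepC     = 10.0;  // assumed retention halving step

    double requiredRefreshIntervalUs(double tempC) {
        return kNominalRefreshUs * std::pow(0.5, (tempC - kReferenceTempC) / kHalvingStepC);
    }

    int main() {
        for (double t = 85.0; t <= 125.0; t += 10.0) {
            std::printf("%5.1f degC -> tREFI ~ %.2f us\n", t, requiredRefreshIntervalUs(t));
        }
        return 0;
    }

    Every halving of the interval doubles the number of refresh commands issued per unit time, which is precisely the latency and bandwidth overhead that keeping the die cooler with the TEC avoids.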

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex, future many-core systems.

    Retention time measurements and modelling of bit error rates of WIDE I/O DRAM in MPSoCs

    DRAM cells use capacitors as volatile and leaky bit storage elements. The time a cell can spend without being refreshed is called its retention time. It is well known that the retention time depends inverse-exponentially on the temperature. In 3D stacking, the challenges of high power densities and thermal dissipation are exacerbated and have a much stronger impact on the retention time of 3D-stacked WIDE I/O DRAMs that are placed on top of an MPSoC. Consequently, it is very important to study the temperature behaviour of WIDE I/O DRAMs. To the best of our knowledge, no investigations based on real measurements have been done for stacked DRAM-on-logic devices. In this paper, we first provide detailed measurements of temperature-dependent retention times and bit error rates of WIDE I/O DRAMs. To obtain the correct temperature distribution of the WIDE I/O DRAM die, we use an advanced thermal modelling tool, the DOCEA AceThermalModeler™ (ATM). The WIDE I/O DRAM retention times and bit error rates are compared to the behaviour of 2D-DRAM chips (DIMMs) with the help of an advanced FPGA-based test system. We observed data pattern dependencies and variable retention times (VRTs). Second, based on this data, we develop and validate a SystemC-TLM2.0 DRAM bit error rate model. Our proposed DRAM bit error model enables early investigations of the temperature vs. retention time trade-off in future 3D-stacked MPSoCs with WIDE I/O DRAMs in SystemC-TLM2.0 environments.
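    As a rough illustration of how a temperature-driven bit error rate model can be structured, the sketch below samples per-cell retention times from an assumed log-normal distribution whose median shrinks exponentially with temperature and counts the cells whose retention falls below the refresh interval; all constants (median, spread, halving step) are placeholders, and this is a minimal stand-in rather than the measurement-backed SystemC-TLM2.0 model developed in the paper.

    // Minimal illustrative model: a cell fails if its (randomly drawn) retention
    // time is shorter than the refresh interval. All constants are placeholders.
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <random>

    double medianRetentionMs(double tempC) {
        constexpr double kRetentionAt45C = 64000.0; // ms, assumed median at 45 degC
        constexpr double kHalvingStepC   = 10.0;    // assumed halving step
        return kRetentionAt45C * std::pow(0.5, (tempC - 45.0) / kHalvingStepC);
    }

    double estimateBitErrorRate(double tempC, double refreshIntervalMs,
                                std::size_t cells = 1000000) {
        std::mt19937_64 rng(42);
        std::lognormal_distribution<double> retention(
            std::log(medianRetentionMs(tempC)), /*assumed spread*/ 1.0);
        std::size_t failures = 0;
        for (std::size_t i = 0; i < cells; ++i) {
            if (retention(rng) < refreshIntervalMs) ++failures;
        }
        return static_cast<double>(failures) / static_cast<double>(cells);
    }

    int main() {
        std::printf("BER @  85 degC, 64 ms refresh: %.2e\n", estimateBitErrorRate(85.0, 64.0));
        std::printf("BER @ 105 degC, 64 ms refresh: %.2e\n", estimateBitErrorRate(105.0, 64.0));
        return 0;
    }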

    Cache memory design in the FinFET era

    The major problem in future technology scaling is the variation in process parameters, which arises from imperfections in the fabrication process. Moreover, devices are becoming more sensitive to environmental changes of temperature and supply voltage, as well as to ageing. All these influences show up in integrated circuits as increased power consumption, reduced maximum operating frequency and an increased number of failures. These effects have been partially overcome with the introduction of FinFET technology, which has solved the problem of variability caused by Random Dopant Fluctuations. However, in the next ten years the channel length is projected to shrink to 10nm, where the variability generated by Line Edge Roughness will dominate and its effect on threshold voltage variations will become critical. Embedded memories, with their cells as the basic building unit, are the most prone to these effects because of their small dimensions. Memories should therefore be designed with particular care in order to make further technology scaling possible. This thesis explores upcoming 10nm FinFETs and the existing issues in cache memory design with this technology. Moreover, it presents original and novel techniques at different levels of design abstraction for mitigating the effects of process and environmental variability. First, an original method for simulating the variability of Tri-Gate FinFETs is presented, using a conventional HSPICE simulation environment and BSIM-CMG model cards. Building on this, a thorough characterisation of traditional SRAM cell circuits (6T and 8T) is performed, and the possibility of using Independent-Gate FinFETs to increase cell stability is also explored. Gain cells have recently emerged as an attractive alternative for cache memory design. This thesis partially explores this idea by presenting a detailed circuit analysis of the dynamic 3T gain cell in 10nm FinFET technology. On top of this, the thesis presents a micro-architecture optimisation of a high-speed cache implemented with 3T gain cells, showing how cache coherency states can be used to reduce the refresh energy of the memory as well as memory ageing.
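    The coherency-based refresh reduction mentioned at the end of the abstract can be pictured with the sketch below; the thesis does not spell out this exact policy, so the states, the data structures and the choice to let clean lines simply decay are illustrative assumptions.

    // Illustrative policy for a dynamic 3T gain-cell cache: only Modified lines
    // hold data that exists nowhere else, so only they strictly need a refresh;
    // clean (Shared/Exclusive) lines can be allowed to decay and be refetched
    // later, and Invalid lines need no refresh at all.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    enum class CoherencyState : std::uint8_t { Invalid, Shared, Exclusive, Modified };

    struct CacheLineMeta {
        CoherencyState state = CoherencyState::Invalid;
    };

    // Returns how many lines actually receive a refresh pulse in this round.
    std::size_t refreshRound(std::vector<CacheLineMeta>& lines) {
        std::size_t refreshed = 0;
        for (auto& line : lines) {
            switch (line.state) {
                case CoherencyState::Modified:
                    ++refreshed;                            // dirty data: must keep alive
                    break;
                case CoherencyState::Shared:
                case CoherencyState::Exclusive:
                    line.state = CoherencyState::Invalid;   // clean copy exists upstream
                    break;
                case CoherencyState::Invalid:
                    break;                                  // nothing to keep alive
            }
        }
        return refreshed;
    }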

    A reconfigurable mixed-time-criticality SDRAM controller

    Architecture and Performance Analysis of a Multi-Generation SDRAM Controller for Mixed-Criticality Systems

    Due to their high density and low cost, DDR SDRAMs are the prevailing choice for implementing the main memory of a computer system. Nevertheless, the aforementioned benefits come at the cost of a complex two-stage access protocol, which ultimately means that the time required to serve a memory request depends on the history of previous requests. Stated differently, DDR SDRAMs are a stateful resource, which further complicates mixed-criticality integration because different criticality levels have conflicting needs. The main goal of this dissertation is to design a controller that leverages the state of DDR SDRAMs in a mixed-criticality environment. More specifically, the controller should provide good average performance for best-effort requestors without compromising the timing guarantees for critical requestors. In that regard, this dissertation first identifies two challenges of growing relevance for the design of memory controllers in the mixed-criticality domain. The first challenge is the data bus turnaround time. The second challenge is the rank-to-rank switching time, which only affects multi-rank modules. After pinpointing these two challenges, the dissertation proposes an SDRAM controller to tackle them. The proposed controller bundles read and write operations in their corresponding ranks, thus minimizing the number of data bus turnarounds and rank switching events. As a consequence, the average performance of the controller is improved. However, the bundling is carefully designed so that real-time guarantees for critical requestors can still be extracted. Moreover, as will become clear, both the operation of the controller and the corresponding analysis of its temporal properties are described in terms of a generation-independent notation. This is a desirable feature because different SDRAM generations have different architectural features and, possibly, timing constraints. Finally, an extensive comparison with related work is performed, and trends in worst-case latency across DDR SDRAM devices from different speed bins and generations are presented and thoroughly discussed.
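    A minimal sketch of the bundling idea follows, with assumed types and a hypothetical bundle-size cap; the actual controller and its timing analysis are considerably more involved. Requests are queued per (rank, direction) and drained one bundle at a time, so data-bus turnarounds and rank-to-rank switches occur only between bundles.

    // Illustrative bundling scheduler: one (rank, direction) bundle per call.
    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <map>
    #include <utility>
    #include <vector>

    struct Request {
        std::uint64_t address;
        bool isWrite;
        unsigned rank;
    };

    class BundlingScheduler {
    public:
        void enqueue(const Request& r) { queues_[{r.rank, r.isWrite}].push_back(r); }

        // Capping the bundle size bounds how long a critical requestor can be
        // blocked, which keeps worst-case latency analysable.
        std::vector<Request> nextBundle(std::size_t maxBundleSize = 8) {
            std::vector<Request> bundle;
            for (auto& entry : queues_) {
                std::deque<Request>& q = entry.second;
                if (q.empty()) continue;
                while (!q.empty() && bundle.size() < maxBundleSize) {
                    bundle.push_back(q.front());
                    q.pop_front();
                }
                break;  // serve a single (rank, direction) group per call
            }
            return bundle;
        }

    private:
        // Key: (rank, isWrite); all requests in one queue share rank and direction.
        std::map<std::pair<unsigned, bool>, std::deque<Request>> queues_;
    };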

    Adaptive Intelligent Systems for Extreme Environments

    As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce the radiation risk for human workers. On the one hand, because of their low price, commercial off-the-shelf devices and components are becoming increasingly popular for making such tasks more affordable. On the other hand, this presents new challenges in improving the radiation tolerance, the capability to run multiple AI tasks, and the power efficiency of embedded systems in harsh environments. Three aspects of research work have been completed in this thesis: 1) a fast simulation method for the analysis of single event effects (SEEs) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency. The variance of physical parameters in practical implementations, such as the nature of the particle, the linear energy transfer and the circuit characteristics, can have a large impact on the final simulation accuracy, which significantly increases the complexity and cost of transistor-level simulation workflows for large-scale circuits and makes it difficult to conduct SEE simulations for such circuits. Therefore, in the first research work, a new SEE simulation scheme is proposed to offer a fast and cost-efficient method to evaluate and compare the performance of large-scale circuits that are subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and the SEE analysis for large-scale circuits can be completed in just a few hours. In high-radiation environments, bit-flips in RAMs can not only occur but also accumulate. However, typical error mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining correcting codes and refreshing techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAMs so that errors cannot accumulate. Furthermore, because the proposed design can share the same ports as the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latencies. Implementing intelligent systems with constrained hardware resources is a challenge. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the "refreshing" concept of the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage the hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and the experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve the performance of the system, the concept of reconfiguration was extended further, resulting in an adaptive deep learning software framework. This framework provides a significant level of adaptability support for various deep learning algorithms on an FPGA-based edge computing platform. To meet specific accuracy and latency requirements derived from the running applications and operating environments, the platform can dynamically update its hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency compared to a static system.
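    To illustrate the scrub-and-write-back loop behind the second work's "refresh so that errors cannot accumulate" idea, the sketch below uses bitwise triple modular redundancy as a simple stand-in for the combination of correcting codes and refreshing described in the thesis; the types and the pass structure are assumptions for illustration.

    // Illustrative scrubbing pass: every word is stored three times, periodically
    // majority-voted and written back, so an isolated bit-flip is repaired before
    // a second flip in the same word can make the damage unrecoverable.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct TripleWord {
        std::uint32_t copy[3];  // three redundant copies of the same data word
    };

    std::size_t scrubPass(std::vector<TripleWord>& memory) {
        std::size_t repairedWords = 0;
        for (auto& w : memory) {
            const std::uint32_t voted = (w.copy[0] & w.copy[1]) |
                                        (w.copy[0] & w.copy[2]) |
                                        (w.copy[1] & w.copy[2]);
            const bool mismatch = (w.copy[0] != voted) ||
                                  (w.copy[1] != voted) ||
                                  (w.copy[2] != voted);
            if (mismatch) {
                w.copy[0] = w.copy[1] = w.copy[2] = voted;  // write back the clean value
                ++repairedWords;
            }
        }
        return repairedWords;
    }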