
    Error Mitigation Using Approximate Logic Circuits: A Comparison of Probabilistic and Evolutionary Approaches

    Technology scaling poses an increasing challenge to the reliability of digital circuits. Hardware redundancy solutions, such as triple modular redundancy (TMR), incur very high area overheads, so partial redundancy is often used to reduce them. Approximate logic circuits provide a general framework for optimized mitigation of errors arising from a broad class of failure mechanisms, including transient, intermittent, and permanent failures. However, generating an optimal redundant logic circuit that masks faults with the highest probability while minimizing the area overhead is a challenging problem. In this study, we propose and compare two new approaches to generating approximate logic circuits for use in a TMR scheme. The probabilistic approach approximates a circuit in a greedy manner based on a probabilistic estimation of the error. The evolutionary approach can provide radically different solutions that are hard to reach by other methods. By combining these two approaches, the solution space can be explored in depth. Experimental results demonstrate that the evolutionary approach can produce better solutions, but the probabilistic approach comes close. Moreover, both approaches scale much better than other existing partial redundancy techniques. This work was supported by the Ministry of Economy and Competitiveness of Spain under project ESP2015-68245-C4-1-P, by the Czech Science Foundation project GA16-17538S, and by the Ministry of Education, Youth and Sports of the Czech Republic from the National Programme of Sustainability (NPU II), project IT4Innovations Excellence in Science - LQ1602
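    The greedy, probability-guided approximation can be pictured as repeatedly pruning the gate whose removal hurts the estimated error probability the least, until an area budget is met. The sketch below illustrates this idea only; the Circuit interface (area(), removable_gates(), without()) and the error-probability estimator are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of greedy, probability-guided circuit approximation.
# Circuit methods and the error-probability estimator are illustrative assumptions.

def greedy_approximate(circuit, area_budget, estimate_error_probability):
    """Repeatedly prune the gate whose removal least increases the estimated
    error probability, until the circuit fits within the area budget."""
    approx = circuit.copy()
    while approx.area() > area_budget:
        best_gate, best_error = None, float("inf")
        for gate in approx.removable_gates():
            candidate = approx.without(gate)              # try pruning this gate
            err = estimate_error_probability(candidate)   # probabilistic error estimate
            if err < best_error:
                best_gate, best_error = gate, err
        if best_gate is None:                             # nothing left to prune
            break
        approx = approx.without(best_gate)
    return approx
```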

    Runtime Monitoring for Dependable Hardware Design

    As technology scaling advances and the production of integrated circuits becomes globalized, a multitude of vulnerabilities in the dependability of computer hardware emerges. Due to manufacturing variations, every microchip is born with a unique character, which then evolves individually with its operating conditions, workload, and environment. Deterministic models that predict dependability at design time are therefore no longer sufficient to meaningfully describe integrated circuits built in nanometer technologies. The need for runtime analysis of the chip's state is growing, and with it the measures required to preserve reliability. Transistors are prone to workload-dependent aging, which increases circuit delay and with it the chance of a miscomputation. In addition, specific execution patterns accelerate chip aging and thus shorten its reliable lifetime. Radiation-induced runtime faults (soft errors) can furthermore cause abnormal behavior in critical systems; both the propagation and the masking of these faults in turn depend on the system's workload. Malicious circuits, so-called hardware Trojans, can also be deliberately added to fabricated chips during production, compromising the chip's security. Since this kind of manipulation is hardly observable before its activation, detecting Trojans on a chip right after production is extremely difficult. The complexity of these dependability problems makes simple reliability modeling and simple countermeasures inefficient. This complexity stems from multiple sources, including design parameters (technology, device, circuit, and architecture), manufacturing parameters, runtime workload, and the operating environment. This motivates the exploration of machine learning and runtime methods that can potentially cope with this complexity. In this work, we present solutions that are able to ensure dependable execution of computer hardware under varying runtime behavior and operating conditions. We developed machine learning techniques to model, monitor, and compensate for various reliability effects. Different learning methods are used to find suitable monitoring points for observing the workload. Together with reliability metrics based on fault tolerance and general security attributes, these are used to build prediction models. Furthermore, we present a cost-optimized hardware monitoring circuit that evaluates the monitoring points at runtime. In contrast to the current state of the art, which exploits microarchitectural monitoring points, we evaluate the potential of workload characteristics at the logic level of the underlying hardware. We identify improved logic-level features to enable fine-grained runtime monitoring, and this logic-level analysis in turn offers several knobs for tuning toward higher accuracy or lower overhead.
    We investigated the philosophy of identifying logic-level monitoring points with the help of learning methods and of implementing low-cost monitors to provide adaptive protection against static aging, dynamic aging, and radiation-induced soft errors, and additionally to detect the activation of hardware Trojans. To this end, we designed a prediction model that tracks the workload's influence on aging-related chip degradation and can be used to apply preventive techniques dynamically at runtime, such as task migration and voltage and frequency scaling. This prediction model was implemented in software that classifies different workloads according to their aging impact. To ensure resilience against accelerated aging, we present monitoring hardware that supervises a subset of the critical flip-flops, watches for accelerated aging, and raises a warning when a timing-critical path is under severe aging stress. We provide the implementation of a technique for reducing the stress on timing-critical paths caused by the execution of specific subroutines. In addition, we propose a technique for online soft-error vulnerability estimation of memory arrays and logic cores, based on monitoring a small group of the design's flip-flops. Furthermore, we developed an anomaly-detection-based method to recognize the workload signatures of hardware Trojans during their activation at runtime, forming a last line of defense. Based on these experiments, this work demonstrates the potential of advanced logic-level feature extraction and learning-based prediction on runtime data to improve the reliability of hardware designs
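    The last line of defense described above can be pictured as one-class anomaly detection over logic-level workload signatures. The following sketch is an illustrative assumption, not the thesis implementation: it presumes per-window activity features (e.g. toggle counts of a few monitored flip-flops) are already extracted, and it uses an off-the-shelf isolation forest as the detector.

```python
# Illustrative sketch: flag runtime windows whose logic-level workload signature
# deviates from the learned "golden" profile (e.g. a possible Trojan activation).
# Feature layout and detector choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

normal_signatures = np.random.rand(500, 16)    # placeholder: Trojan-free training windows
runtime_signatures = np.random.rand(20, 16)    # placeholder: windows observed at runtime

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_signatures)                # learn the expected workload behavior

flags = detector.predict(runtime_signatures)   # -1 = anomalous window, 1 = normal
print("suspicious windows:", np.where(flags == -1)[0])
```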

    Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering

    History matching production data in finite difference reservoir simulation models has been and always will be a challenge for the industry. The principal hurdles are finding a match in the first place and, more importantly, finding a set of matches that captures the uncertainty range of the simulation model, and doing so in as short a time as possible, since the bottleneck in this process is the time taken to run the model. This study looks at the implementation of Particle Swarm Optimisation (PSO) in history matching finite difference simulation models. Particle Swarms are a class of evolutionary algorithms that have shown much promise over the last decade. The method draws parallels from the social interaction of swarms of bees, flocks of birds and shoals of fish. Essentially, a swarm of agents is allowed to search the solution hyperspace, each individual keeping its historical best position in memory, with the optimisation improving iteratively through the emergent interaction of the swarm. An intrinsic feature of PSO is its local search capability. A sequential niching variation of the PSO has been developed, viz. Flexi-PSO, which enhances the exploration and exploitation of the hyperspace and is capable of finding multiple minima. This new variation has been applied to history matching synthetic reservoir simulation models to find multiple distinct history matches in order to capture the uncertainty range. Hierarchical clustering is then used to post-process the history match runs to reduce the size of the ensemble carried forward for prediction. The success of the uncertainty modelling exercise is then assessed by checking whether the production profile forecasts generated by the ensemble cover the truth case
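    For readers unfamiliar with the method, a minimal generic PSO loop is sketched below. It is not the Flexi-PSO variant with sequential niching described above; it simply assumes the history-match misfit is exposed as a function of the model parameters and that parameter bounds are known.

```python
# Minimal generic particle swarm optimisation sketch (assumed interface: a misfit
# function over model parameters and per-parameter lower/upper bounds).
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds                                         # arrays of lower/upper bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))    # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([misfit(p) for p in x])
        improved = vals < pbest_val                         # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()            # update global best
    return gbest, pbest_val.min()

# Example: minimise a toy quadratic misfit over two parameters.
best_x, best_val = pso(lambda p: float(np.sum((p - 3.0) ** 2)),
                       (np.zeros(2), np.full(2, 10.0)))
```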

    IN-SITU ADDITIVE MANUFACTURING OF METALS FOR EMBEDDING PARTS COMPATIBLE WITH LIQUID METALS TO ENHANCE THERMAL PERFORMANCE OF AVIONICS FOR SPACECRAFT

    With advances in micromachinery, the aggregation of sensors, and more powerful microcontroller platforms on satellites, avionics for space missions are getting dramatically smaller while offering faster processing speeds. This has resulted in greater localized heat generation, requiring more reliable thermal management systems to enhance the thermal performance of the avionics. The emergence of advanced additive manufacturing (AM), such as selective laser melting (SLM), and of engineering materials, such as low-melting eutectic liquid metal (LM) alloys and synthetic ceramics, offers new opportunities for thermal cooling systems. There is therefore an opportunity to adapt in-situ AM to overcome the limitations of traditional manufacturing in thermal applications, where improvements can be achieved by reducing the thermal contact resistance of multi-layer interfaces. This dissertation investigates adapting in-situ AM technologies to embed LM-compatible prefabricated components, such as ceramic tubes, inside metals without the need for a parting surface, resulting in more intimate contact between the metal and ceramic and a reduction in the interfacial thermal resistance. A focus was placed on using more ubiquitous powder bed AM technologies, where it was determined that the morphology of the prefabricated LM-compatible ceramic tubes had to be optimized to prevent collision with the apparatus of powder-bed-based AM. Furthermore, to enhance the wettability of the ceramic tubes during laser fusion, their surfaces were electroplated, resulting in a 1.72X improvement in heat transfer compared to cold plates packaged by conventional assembly. Additionally, multiple AM technologies were synergistically combined with cross-platform tools such as magnetohydrodynamic (MHD) pumping to address the corrosion problem that arises when low-melting eutectic alloys are used in geometrically complex patterns as an active cooling system with no moving parts. The MHD pumping system was designed using FEA and CFD simulations that approximate the Maxwell and Navier-Stokes equations, and was then validated through experiments with a model heat exchanger to determine the trade-off in performance against conventional pumping systems. The MHD cooling prototype was shown to reach volumetric flow rates of up to 650 mm³/s and to generate flow pressures due to Lorentz forces of up to 230 Pa, resulting in a heat transfer improvement of 1.054X relative to the passive prototype
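    As a rough illustration of where pressures of this order come from, the ideal pressure rise of a conduction-type MHD pump is the Lorentz force density J x B acting over the active channel length. The values below are assumed for illustration only, not the dissertation's design parameters.

```python
# Back-of-the-envelope estimate of the ideal (lossless) pressure rise of a
# conduction-type MHD pump: delta_p = J * B * L for current density J
# perpendicular to field B, acting along the flow over channel length L.
# All numerical values are assumptions.
current_density = 1.0e5        # J, A/m^2 (assumed)
magnetic_flux_density = 0.5    # B, tesla (assumed)
active_length = 5.0e-3         # L, m (assumed)

delta_p = current_density * magnetic_flux_density * active_length
print(f"ideal pressure rise: {delta_p:.0f} Pa")   # ~250 Pa for these assumed values
```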

    Integrating supercapacitors into a hybrid energy system to reduce overall costs using the genetic algorithm (GA) and support vector machine (SVM)

    This research deals with optimising a supercapacitor-battery hybrid energy storage system (SB-HESS) to reduce the implementation cost for solar energy applications using the Genetic Algorithm (GA) and the Support Vector Machine (SVM). The integration of a supercapacitor into a battery energy storage system for solar applications is proven to prolong the battery lifespan. Furthermore, the reliability of the system was optimised using a GA within the Taguchi technique in the supercapacitor fabrication process. This is important to reduce the spread in the tolerance of supercapacitor values (i.e. capacitance and Equivalent Series Resistance (ESR)), which affects system performance. One of the more important results obtained in this project is that the net present cost (NPC) of the supercapacitor-battery hybrid energy storage system is 7.51% lower than that of the conventional battery-only system over a 20-year project lifetime. This NPC takes into account the components' initial capital cost, replacement cost, and maintenance and operational costs. The number of batteries is reduced from 40 (conventional battery-only system) to 24 (SB-HESS) with the inclusion of supercapacitors in the system, which lowers the cost of the implemented hybrid energy storage system. A greener renewable energy system is achievable as the number of batteries is reduced significantly. An optimised combination of the number of components for the renewable energy system is also found. The number of batteries is sized based on the average power output instead of catering to the peak power burst as in a conventional battery-only system; this reduction is possible because the peak power is catered for by the supercapacitor. Subsequent efforts focused on the energy management system, which is coupled with a supervised learning machine (SVM) and uses switches and sensors to forecast the load demand in advance. This load-predictive energy management system was implemented on a lab-scale hybrid energy storage system prototype. Results obtained also show that this load-predictive system allows for accurate load classification and prediction, and that the supercapacitor in the hybrid energy storage system is able to switch on to cater for peak power without delay. This is crucial in maintaining an optimised battery depth-of-discharge (DOD) in order to reduce the rate of battery damage through degradation mechanisms caused by particular stress factors (especially sulphation on the battery electrodes and electrolyte stratification)
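    The load-predictive part can be pictured as a classifier that maps recent sensor readings to a load class so the supercapacitor can be switched in ahead of a predicted peak. The sketch below is illustrative only: the feature layout, the two-class labelling, and the model settings are assumptions, not the system described above.

```python
# Illustrative sketch of SVM-based load classification for a predictive energy
# management system. Feature layout and labels are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Rows = recent sensor snapshots (e.g. current/voltage samples); labels = load class.
X_train = np.random.rand(200, 6)                # placeholder training features
y_train = np.random.randint(0, 2, size=200)     # 0 = base load, 1 = peak load (assumed)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

snapshot = np.random.rand(1, 6)                 # one new sensor snapshot
if model.predict(snapshot)[0] == 1:
    print("peak load expected: switch the supercapacitor into the supply path")
```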

    Network-on-Chip

    Addresses the Challenges Associated with System-on-Chip Integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers: the evolution of NoC from SoC and its research and developmental challenges; NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces; the router design strategies followed in NoCs; the evaluation mechanisms for NoC architectures; the application mapping strategies followed in NoCs; low-power design techniques specifically followed in NoCs; the signal integrity and reliability issues of NoC; the details of NoC testing strategies reported so far; the problem of synthesizing application-specific NoCs; reconfigurable NoC design issues; and directions of future research and development in the field of NoC. Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems

    Electric Vehicle Efficient Power and Propulsion Systems

    Vehicle electrification has been identified as one of the main technology trends in this second decade of the 21st century. Nearly 10% of global car sales in 2021 were electric, and this figure is expected to reach 50% by 2030 to reduce oil import dependency and transport emissions in line with countries' climate goals. This book addresses the efficient power and propulsion systems that cover essential topics for research and development on EVs, HEVs and fuel cell electric vehicles (FCEVs), including: energy storage systems (battery, fuel cell, supercapacitors, and their hybrid systems); power electronics devices and converters; electric machine drive control, optimization, and design; and advanced energy system management methods. Primarily intended for professionals and advanced students who are working on EV/HEV/FCEV power and propulsion systems, this edited book surveys state-of-the-art, novel control and optimization techniques for the different components as well as for the vehicle as a whole system. New readers may also find valuable information on the structure and methodologies of such an interdisciplinary field. Contributed by experienced authors from different research laboratories around the world, these 11 chapters provide balanced material ranging from theoretical background to methodologies and practical implementation, addressing various issues of this challenging technology. This reprint encourages researchers working in this field to stay up to date on the latest developments in electric vehicle efficient power and propulsion systems, for road and rail, and for both manned and unmanned vehicles

    Efficiency and Optimization of Buildings Energy Consumption: Volume II

    This reprint, as a continuation of a previous Special Issue entitled “Efficiency and Optimization of Buildings Energy Consumption”, gives an up-to-date overview of new technologies based on Machine Learning (ML) and Internet of Things (IoT) procedures that refine the mathematical foundations of the algorithms used in building control systems, with the aim of reducing energy consumption in the housing sector