Error Mitigation Using Approximate Logic Circuits: A Comparison of Probabilistic and Evolutionary Approaches
Technology scaling poses an increasing challenge to the reliability of digital circuits. Hardware redundancy solutions, such as triple modular redundancy (TMR), incur very high area overhead, so partial redundancy is often used to reduce it. Approximate logic circuits provide a general framework for optimized mitigation of errors arising from a broad class of failure mechanisms, including transient, intermittent, and permanent failures. However, generating an optimal redundant logic circuit that masks faults with the highest probability while minimizing area overhead is a challenging problem. In this study, we propose and compare two new approaches to generating approximate logic circuits for use in a TMR scheme. The probabilistic approach approximates a circuit in a greedy manner based on a probabilistic estimation of the error. The evolutionary approach can provide radically different solutions that are hard to reach by other methods. By combining these two approaches, the solution space can be explored in depth. Experimental results demonstrate that the evolutionary approach can produce better solutions, with the probabilistic approach close behind. Moreover, both approaches scale much better than other existing partial redundancy techniques. This work was supported by the Ministry of Economy and Competitiveness of Spain under project ESP2015-68245-C4-1-P, and by the Czech Science Foundation project GA16-17538S and the Ministry of Education, Youth and Sports of the Czech Republic from the National Programme of Sustainability (NPU II); project IT4Innovations excellence in science - LQ1602
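The fault-masking idea behind a TMR scheme can be sketched in a few lines. The following is a hypothetical illustration, not the paper's circuits: a bitwise majority voter masks any single faulty replica, and one replica is replaced by a cheaper approximate copy of a toy 4-bit adder (the function and bit widths are invented for the example).

```python
# Hedged sketch of bit-level majority voting in a TMR scheme in which one
# replica is an approximate (area-reduced) copy. All names are illustrative.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three replica outputs: an error confined to any
    single replica (including an approximation error) is masked."""
    return (a & b) | (a & c) | (b & c)

def exact_add(x: int, y: int) -> int:
    """Reference 4-bit adder (mod 16)."""
    return (x + y) & 0xF

def approx_add(x: int, y: int) -> int:
    """Toy approximation: the carry into the top bit is dropped, which in
    hardware would save area at the cost of occasional wrong outputs."""
    return ((x + y) & 0x7) | ((x | y) & 0x8)

# When the approximate replica happens to agree, the vote is trivially right;
# when it errs, the two exact replicas outvote it.
out = majority_vote(exact_add(3, 4), exact_add(3, 4), approx_add(3, 4))
```

The optimization problem the paper tackles is choosing *which* approximations to apply so that the voter still masks the most probable faults at minimum area.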
Runtime Monitoring for Dependable Hardware Design
As technology scaling advances and the production of integrated circuits becomes globalized, a wealth of vulnerabilities in the dependability of computer hardware emerges. Due to manufacturing variations, every microchip is born with a unique character that evolves individually with its operating conditions, workload, and environment. Deterministic models that predict reliability at design time are therefore no longer sufficient to meaningfully capture integrated circuits in nanometer technologies. The need for runtime analysis of a chip's state is growing, and with it the measures required to preserve reliability.
Transistors are susceptible to workload-dependent aging, which increases circuit delay and with it the likelihood of an erroneous computation. In addition, specific workloads accelerate chip aging and thus shorten its reliable lifetime. Radiation-induced runtime faults (soft errors) can also cause abnormal behavior in critical systems. Both the propagation and the masking of these faults in turn depend on the system's workload. Malicious circuits, so-called hardware Trojans, can also be deliberately inserted into chips during fabrication, compromising the chip's security. Since this kind of manipulation is barely observable before activation, detecting Trojans on a chip immediately after production is extremely difficult.
The complexity of these dependability problems makes simple reliability modeling and countermeasures inefficient. It arises from diverse sources, including design parameters (technology, device, circuit, and architecture), manufacturing parameters, runtime workload, and the operating environment. This motivates the exploration of machine learning and runtime methods that can potentially cope with this complexity.
In this work, we present solutions capable of ensuring dependable execution of computer hardware under varying runtime behavior and operating conditions. We developed machine learning techniques to model, monitor, and compensate for various reliability effects. Different learning methods are used to find favorable monitoring points for tracking the workload. Together with reliability metrics built on resilience and general security attributes, these are used to construct prediction models. Furthermore, we present a cost-optimized hardware monitoring circuit that evaluates the monitoring points at runtime. In contrast to the state of the art, which exploits microarchitectural monitoring points, we evaluate the potential of workload characteristics at the logic level of the underlying hardware. We identify improved logic-level features to enable fine-grained runtime monitoring. This logic-level analysis in turn offers several knobs for tuning toward higher accuracy or lower overhead.
We explored the philosophy of identifying logic-level monitoring points with learning methods and implementing inexpensive monitors to provide adaptive protection against static aging, dynamic aging, and radiation-induced soft errors, and additionally to detect the activation of hardware Trojans.
To this end, we designed a prediction model that tracks the workload's influence on aging-induced chip degradation and can be used to apply preventive techniques, such as task migration and voltage and frequency scaling, dynamically at runtime.
This prediction model was implemented in software that ranks different workloads by their aging impact. To ensure resilience against accelerated aging, we present monitoring hardware that supervises a subset of the critical flip-flops, watches for accelerated aging, and warns when a timing-critical path is under heavy aging stress. We provide the implementation of a technique for reducing the stress imposed on timing-critical paths by the execution of specific subroutines. In addition, we propose a technique for online soft-error vulnerability estimation of memory arrays and logic cores based on monitoring a small set of the design's flip-flops.
Furthermore, we developed an anomaly-detection-based method to recognize the workload signatures of hardware Trojans during their activation at runtime, thereby forming a last line of defense. Based on these experiments, this work demonstrates the potential of advanced logic-level feature extraction and learning-based prediction from runtime data for improving the reliability of hardware designs.
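The anomaly-detection idea for Trojan activation can be sketched very simply. The following is an illustrative stand-in, not the dissertation's implementation: a baseline of logic-level activity counts from trusted runs is summarized by mean and standard deviation, and a runtime window whose activity deviates too far is flagged. Feature choice and thresholds here are invented for the example.

```python
# Hedged sketch: flag a Trojan-like activation as a workload-signature
# outlier via a simple z-score over per-window activity counts.
from statistics import mean, stdev

def fit_baseline(samples):
    """samples: per-window activity counts collected from trusted runs."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=4.0):
    """True when the observation lies more than `threshold` standard
    deviations from the trusted baseline mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

baseline = fit_baseline([100, 103, 98, 101, 99, 102, 100, 97])
print(is_anomalous(101, baseline))  # typical workload -> False
print(is_anomalous(250, baseline))  # sudden activity burst -> True
```

A real monitor would use richer logic-level features and a learned model, but the last-line-of-defense logic is the same: deviation from the trusted workload signature triggers an alarm.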
Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering
History matching production data in finite difference reservoir simulation models has been and always will be a challenge for the industry. The principal hurdles to overcome are finding a match in the first place and, more importantly, finding a set of matches that can capture the uncertainty range of the simulation model, and doing so in as short a time as possible, since the bottleneck in this process is the time taken to run the model. This study looks at the implementation of Particle Swarm Optimisation (PSO) in history matching finite difference simulation models. Particle swarms are a class of evolutionary algorithms that have shown much promise over the last decade. The method draws parallels from the social interaction of swarms of bees, flocks of birds, and shoals of fish. Essentially, a swarm of agents is allowed to search the solution hyperspace, each individual keeping in memory its historical best position, with the optimisation improving iteratively through the emergent interaction of the swarm. An intrinsic feature of PSO is its local search capability. A sequential niching variation of the PSO, viz. Flexi-PSO, has been developed that enhances the exploration and exploitation of the hyperspace and is capable of finding multiple minima. This new variation has been applied to history matching synthetic reservoir simulation models to find multiple distinct history matches in order to capture the uncertainty range. Hierarchical clustering is then used to post-process the history match runs and reduce the size of the ensemble carried forward for prediction.
The success of the uncertainty modelling exercise is then assessed by checking whether the production profile forecasts generated by the ensemble cover the truth case.
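The update rule described above (velocity pulled toward each particle's personal best and the swarm's global best) can be sketched compactly. This is a generic, minimal PSO on a toy objective, assuming standard inertia and attraction coefficients; it is not the Flexi-PSO niching variant developed in the thesis.

```python
# Hedged sketch of canonical particle swarm optimisation: each particle
# remembers its personal best, and velocities blend inertia with attraction
# to the personal and global bests.
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm (global) best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:              # update personal memory
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:             # emergent swarm improvement
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the history-match misfit function.
best, best_val = pso(lambda p: sum(x * x for x in p))
```

In the history-matching setting, `f` would be the (expensive) misfit between simulated and observed production data, which is why reducing the number of model runs matters so much.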
IN-SITU ADDITIVE MANUFACTURING OF METALS FOR EMBEDDING PARTS COMPATIBLE WITH LIQUID METALS TO ENHANCE THERMAL PERFORMANCE OF AVIONICS FOR SPACECRAFT
With advances in micromachinery, the aggregation of sensors, and more powerful microcontroller platforms on satellites, avionics for space missions are getting dramatically smaller while achieving faster processing speeds. This has resulted in greater localized heat generation, requiring more reliable thermal management systems to enhance the thermal performance of the avionics. The emergence of advanced additive manufacturing (AM), such as selective laser melting (SLM), and engineering materials, such as low-melting eutectic liquid metal (LM) alloys and synthetic ceramics, offers new opportunities for thermal cooling systems. There is therefore an opportunity to adapt in-situ AM to overcome the limitations of traditional manufacturing in thermal applications, where improvements can be achieved by reducing the thermal contact resistance of multi-layer interfaces. This dissertation investigates adapting in-situ AM technologies to embed LM-compatible prefabricated components, such as ceramic tubes, inside metals without the need for a parting surface, resulting in more intimate contact between the metal and ceramic and a reduction in the interfacial thermal resistance. A focus was placed on using more ubiquitous powder bed AM technologies, where it was determined that the morphology of the prefabricated LM-compatible ceramic tubes had to be optimized to prevent collision with the apparatus of powder-bed-based AM. Furthermore, to enhance the wettability of the ceramic tubes during laser fusion, the surfaces were electroplated, resulting in a 1.72X improvement in heat transfer compared to cold plates packaged by conventional assembly. Additionally, multiple AM technologies are synergistically complemented by cross-platform tools such as magnetohydrodynamics (MHD) to solve the corrosion problem in the use of low-melting eutectic alloys in geometrically complex patterns as an active cooling system with no moving parts.
The MHD pumping system was designed using FEA and CFD simulations to approximate the Maxwell and Navier-Stokes equations, and was then validated in experiments with a model heat exchanger to determine the performance tradeoff against conventional pumping systems. The MHD cooling prototype was shown to reach volumetric flow rates of up to 650 mm³/s and to generate flow pressure of up to 230 Pa due to Lorentz forces, resulting in a 1.054X heat transfer improvement relative to the passive prototype.
Integrating supercapacitors into a hybrid energy system to reduce overall costs using the genetic algorithm (GA) and support vector machine (SVM)
This research deals with optimising a supercapacitor-battery hybrid energy storage system (SB-HESS) to reduce the implementation cost for solar energy applications using the Genetic Algorithm (GA) and the Support Vector Machine (SVM). The integration of a supercapacitor into a battery energy storage system for solar applications is proven to prolong the battery lifespan. Furthermore, the reliability of the system was optimised using a GA within the Taguchi technique in the supercapacitor fabrication process. This is important to reduce the spread in tolerance of supercapacitor values (i.e. capacitance and Equivalent Series Resistance (ESR)) which affect system performance.
One of the more important results obtained in this project is that the net present cost (NPC) of the supercapacitor-battery hybrid energy storage system is 7.51% lower than that of the conventional battery-only system over a 20-year project lifetime. This NPC takes into account the components' initial capital cost, replacement cost, and maintenance and operational cost. The number of batteries is reduced from 40 (conventional battery-only system) to 24 (SB-HESS) with the inclusion of supercapacitors, lowering the cost of the implemented hybrid energy storage system. A greener renewable energy system is achievable as the number of batteries is reduced significantly. An optimised combination of component counts for the renewable energy system is also found. The batteries are sized based on the average power output instead of catering for the peak power burst, as in a conventional battery-only system; this reduction is possible because the peak power is handled by the supercapacitor. Subsequent efforts focused on the energy management system, which is coupled with a supervised learning machine (the SVM); switches and sensors are used to forecast the load demand beforehand. This load-predictive energy management system is implemented on a lab-scale hybrid energy storage system prototype. Results also show that this load-predictive system allows accurate load classification and prediction, and the supercapacitor in the hybrid energy storage system is able to switch on to cater for peak power without delay. This is crucial for maintaining an optimised battery depth-of-discharge (DOD) in order to reduce the rate of battery degradation caused by particular stress factors (especially sulphation on the battery electrodes and electrolyte stratification).
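The NPC comparison behind a result like the one above can be illustrated with a discounted-cost calculation. All figures below (unit prices, O&M costs, replacement years, discount rate) are made-up placeholders, not values from the study; the point is only the shape of the computation.

```python
# Hedged back-of-the-envelope net present cost (NPC) comparison for a
# battery-only system versus a supercapacitor-battery hybrid, assuming
# invented placeholder costs and a 6% discount rate over 20 years.

def npc(capital, annual_om, replacements, rate=0.06, years=20):
    """replacements: list of (year, cost) pairs for component replacement."""
    total = capital
    total += sum(annual_om / (1 + rate) ** y for y in range(1, years + 1))
    total += sum(cost / (1 + rate) ** year for year, cost in replacements)
    return total

# 40 batteries at a placeholder $300 each, replaced twice over the lifetime.
battery_only = npc(capital=40 * 300, annual_om=500,
                   replacements=[(7, 40 * 300), (14, 40 * 300)])

# 24 batteries plus a placeholder $4000 supercapacitor bank; the reduced
# DOD stress is assumed to stretch replacement to year 10.
hybrid = npc(capital=24 * 300 + 4000, annual_om=450,
             replacements=[(10, 24 * 300)])

print(hybrid < battery_only)  # True under these assumed figures
```

Fewer batteries, cheaper replacements, and longer battery life are exactly the levers that produce the 7.51% NPC reduction reported in the study.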
SCEE 2008 book of abstracts : the 7th International Conference on Scientific Computing in Electrical Engineering (SCEE 2008), September 28 – October 3, 2008, Helsinki University of Technology, Espoo, Finland
This report contains abstracts of the presentations given at the SCEE 2008 conference.
Network-on-Chip
Addresses the Challenges Associated with System-on-Chip Integration Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design.
This text comprises 12 chapters and covers:
- The evolution of NoC from SoC, and its research and developmental challenges
- NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
- The router design strategies followed in NoCs
- The evaluation mechanism of NoC architectures
- The application mapping strategies followed in NoCs
- Low-power design techniques specifically followed in NoCs
- The signal integrity and reliability issues of NoC
- The details of NoC testing strategies reported so far
- The problem of synthesizing application-specific NoCs
- Reconfigurable NoC design issues
- Directions of future research and development in the field of NoC
Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
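Among the routing mechanisms such books elaborate, dimension-ordered (XY) routing on a 2-D mesh is the classic deadlock-free baseline and is easy to sketch. The coordinates and function below are illustrative, not taken from the book.

```python
# Hedged sketch of dimension-ordered (XY) routing on a 2-D mesh NoC:
# route fully along X first, then along Y. Because no packet ever turns
# from a Y channel back onto an X channel, the channel dependency graph
# is acyclic and the scheme is deadlock-free on a mesh.

def xy_route(src, dst):
    """Return the hop-by-hop path of (x, y) router coordinates."""
    x, y = src
    path = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Its simplicity is why XY routing is the usual reference point against which adaptive and fault-tolerant routing schemes are evaluated.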
Electric Vehicle Efficient Power and Propulsion Systems
Vehicle electrification has been identified as one of the main technology trends of this second decade of the 21st century. Nearly 10% of global car sales in 2021 were electric, and this figure is projected to reach 50% by 2030 to reduce oil import dependency and transport emissions in line with countries' climate goals. This book addresses the efficient power and propulsion systems which cover essential topics for research and development on EVs, HEVs, and fuel cell electric vehicles (FCEVs), including: energy storage systems (battery, fuel cell, supercapacitors, and their hybrid systems); power electronics devices and converters; electric machine drive control, optimization, and design; and advanced energy system management methods. Primarily intended for professionals and advanced students who are working on EV/HEV/FCEV power and propulsion systems, this edited book surveys state-of-the-art control/optimization techniques for the different components, as well as for the vehicle as a whole system. New readers may also find valuable information on the structure and methodologies of such an interdisciplinary field. Contributed by experienced authors from different research laboratories around the world, these 11 chapters provide balanced material, from theoretical background to methodologies and practical implementation, to deal with the various issues of this challenging technology. This reprint encourages researchers working in this field to stay up to date on the latest developments in electric vehicle efficient power and propulsion systems, for road and rail, for both manned and unmanned vehicles.
Wireless Sensor Network for Advanced Energy Management Solutions
Eaton has developed an advanced energy management solution that has been deployed to several Industries of the Future (IoF) sites. This demonstrated energy savings and reduced unscheduled downtime through an improved means of performing predictive diagnostics and energy efficiency estimation. Eaton has developed a suite of online, continuous, and inferential algorithms that utilize motor current signature analysis (MCSA) and motor power signature analysis (MPSA) techniques to detect and predict the health condition and energy usage condition of motors and their connected loads. Eaton has also developed a hardware and software platform that provided a means to develop and test these advanced algorithms in the field. Results from lab validation and field trials have demonstrated that the developed advanced algorithms are able to detect motor and load inefficiency and performance degradation. Eaton investigated the performance of Wireless Sensor Networks (WSN) within various industrial facilities to understand the concerns about topology and environmental conditions that have precluded broad adoption by the industry to date. A Wireless Link Assessment System (WLAS) was used to validate wireless performance under a variety of conditions. Results demonstrated that wireless networks can provide adequate performance in most facilities when properly specified and deployed. Customers from various IoF sites expressed interest in applying wireless more broadly for selected applications, but continue to prefer utilizing existing wired field bus networks for most sensor-based applications that will tie into their existing Computerized Motor Maintenance Systems (CMMS). As a result, wireless technology was de-emphasized within the project, and a greater focus was placed on energy efficiency and predictive diagnostics.
Commercially available wireless networks were only utilized at field test sites to facilitate collection of motor wellness information, and no wireless sensor network products were developed under this project. As an outgrowth of this program, Eaton developed a patented energy-optimizing drive control technology that is complementary to traditional variable frequency drives (VFDs), enabling significant energy savings for motors with variable-torque applications, such as fans, pumps, and compressors. This technology provides an estimated energy saving of 2%-10% depending on the loading condition, in addition to the savings obtained from a traditional VFD. The combination of a VFD with the enhanced energy-optimizing controls will provide significant energy savings (10% to 70% depending on the load and duty cycle) for motors that are presently connected with across-the-line starters. It will also provide a more favorable return on investment (ROI), thus encouraging industries to adopt VFDs for more motors within their facilities. The patented technology is based on nonintrusive algorithms that estimate the instantaneous operating efficiency and motor speed and provide active energy-optimizing control of a motor, using only existing voltage and current sensors. This technology is currently being commercialized by Eaton's Industrial Controls Division in their next-generation motor control products. Due to the common nonintrusive and inferential nature of the various algorithms, this same product can also include motor and equipment condition monitoring features, providing the facility owner additional information to improve process uptime and the associated energy savings. Calculations estimated potential energy savings of 261,397 GWh/yr (7500/hr, with large, critical processes reaching $50-100k/hr).
Specific downtime costs are not included in this report because of customer confidentiality, but projected savings across the Industries of the Future (IoF) are still expected to be comparable to the original program estimates. Two generations of customer field deployments and evaluation were completed during the course of this project. Results from these customer sites have been used to identify the scope of, and to improve, the developed energy and wellness algorithms. The field deployments have confirmed that the hardware for sensing and sampling motor currents and voltages is reliable and able to provide an adequate signal-to-noise ratio despite the electrical noise present on the motor signals.
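The physical reason variable-speed control saves so much energy on variable-torque loads can be shown with the pump/fan affinity laws. This is a generic textbook relation, not Eaton's patented estimation algorithm; the 80% operating point is an invented example.

```python
# Hedged illustration of the affinity laws for variable-torque loads
# (fans, pumps, compressors): shaft power scales with the cube of speed,
# which is why slowing a motor slightly saves a large fraction of energy.

def pump_power_fraction(speed_fraction: float) -> float:
    """Affinity law: P / P_rated = (N / N_rated) ** 3."""
    return speed_fraction ** 3

# Running at 80% speed draws only about 51% of rated power, i.e. roughly
# 49% savings versus full-speed (across-the-line) operation, before any
# additional optimization the drive controls may add.
savings = 1 - pump_power_fraction(0.8)
print(round(savings, 3))  # 0.488
```

This cubic relationship is the baseline VFD saving on top of which the 2%-10% from the energy-optimizing control described above is claimed.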
Efficiency and Optimization of Buildings Energy Consumption: Volume II
This reprint, a continuation of a previous Special Issue entitled "Efficiency and Optimization of Buildings Energy Consumption", gives an up-to-date overview of new technologies based on Machine Learning (ML) and Internet of Things (IoT) procedures that refine the mathematical approach of the algorithms underlying building control systems, with the aim of reducing energy consumption in the housing sector.