
    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation caused by transistor aging, process variation, temperature effects, soft errors, etc. The book provides readers with the latest insights into novel, cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that improve reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization to achieve error resiliency in complex, future many-core systems.

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data-system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Fabrication, Characterization and Integration of Resistive Random Access Memories

    The functionalities and performance of today's computing systems increasingly depend on the memory block. This phenomenon, also referred to as the von Neumann bottleneck, is the main motivation for research on memory technologies. Although CMOS technology has improved over the last 50 years through continual increases in device density, today's mainstream memories, such as SRAM, DRAM and Flash, face fundamental limitations to continuing this trend. These memory technologies, based on charge-storage mechanisms, suffer from easy loss of the stored state in devices scaled below 10 nm, which degrades performance, reliability and noise margin. The main motivation for the development of emerging non-volatile memories is the study of a different mechanism for storing the digital state in order to overcome this challenge. Among these emerging technologies, one of the strongest candidates is Resistive Random Access Memory (ReRAM), which relies on the formation or rupture of a conductive filament inside a dielectric layer. This thesis focuses on the fabrication, characterization and integration of ReRAM devices. The main subject is the qualitative and quantitative description of the main factors that influence the electrical behavior of resistive memories. Such factors can be related either to the memory fabrication or to the test environment. The first category includes variations in the fabrication process steps and in the device geometry or composition. We discuss the effect of each variation, and we use the resulting database, together with statistical methods, to gain insight into the ReRAM working mechanism. The second category describes how differences in the electrical stimuli sent to the device change the memory performance. We show how these factors can influence the memory resistance states, and we propose an empirical model to describe such changes. We also discuss how the resistance states can be controlled by modulating the number of input pulses applied to the device. In the second part of this work, we present the integration of the fabricated devices in a CMOS technology environment. We discuss a Verilog-A model used to simulate the device characteristics, and we show two solutions to limit the sneak-path currents in ReRAM crossbars: a dedicated read circuit and the development of selector devices. We describe the selector fabrication, the electrical characterization, and the combination with our ReRAMs in a 1S1R configuration. Finally, we show two methods to integrate ReRAM devices in the Back End of Line (BEoL) of CMOS chips.
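As a rough illustration of the pulse-number control described above, consider a minimal sketch, assuming an exponential-saturation law for the transition between resistance states; the functional form and every parameter value here are our assumptions for illustration, not the empirical model proposed in the thesis.

import math

def resistance_after_pulses(n_pulses, r_off=50e3, r_on=5e3, k=0.3):
    # Empirical toy model: each identical SET pulse moves the cell from
    # the high-resistance state (r_off) toward the low-resistance state
    # (r_on); k sets how strongly one pulse acts. All ohm values are
    # illustrative only.
    return r_on + (r_off - r_on) * math.exp(-k * n_pulses)

for n in (0, 1, 5, 10, 20):
    print(f"{n:2d} pulses -> {resistance_after_pulses(n):8.0f} ohm")

Modulating the pulse count then amounts to choosing n so as to land on a target intermediate resistance, which is the kind of multi-level control the thesis investigates.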

    An investigation into computer and network curricula

    This thesis consists of a series of internationally published, peer-reviewed journal and conference research papers that analyse the educational and training needs of undergraduate Information Technology (IT) students within the area of Computer and Network Technology (CNT) education. Research by Maj et al. has found that accredited computing science curricula can fail to meet the expectations of employers in the field of CNT: “It was found that none of these students could perform first line maintenance on a Personal Computer (PC) to a professional standard with due regard to safety, both to themselves and the equipment. Neither could they install communication cards, cables and network operating system or manage a population of networked PCs to an acceptable commercial standard without further extensive training. It is noteworthy that none of the students interviewed had ever opened a PC. It is significant that all those interviewed for this study had successfully completed all the units on computer architecture and communication engineering” (Maj, Robbins, Shaw, & Duley, 1998). The students' curricula at that time lacked units in which they gained hands-on experience with modern PC hardware or networking skills. This was despite the fact that their computing science course was level one accredited, the highest accreditation level offered by the Australian Computer Society (ACS). The results of the initial survey in Western Australia led to the introduction of two new units within the Computing Science Degree at Edith Cowan University (ECU): Computer Installation & Maintenance (CIM) and Network Installation & Maintenance (NIM) (Maj, Fetherston, Charlesworth, & Robbins, 1998). Uniquely within an Australian university context, these new syllabi require students to work on real equipment. Such experience excludes digital circuit investigation, which is still the approach recommended by the Association for Computing Machinery (ACM) for computer architecture units (ACM, 2001, p. 97). Instead, the CIM unit employs a top-down approach based initially upon students' everyday experiences, which is more in accordance with constructivist educational theory and practice.

These papers propose an alternative model of IT education that helps to accommodate the educational and vocational needs of IT students in the context of continual, rapid changes and developments in technology. The ACM has recognised the need for variation, noting that “there are many effective ways to organize a curriculum even for a particular set of goals and objectives” (Tucker et al., 1991, p. 70). A possible major contribution to new knowledge of these papers relates to how high-level abstract bandwidth (B-Node) models may contribute to the understanding of why and how computer and networking technology systems have developed over time. Because these models are de-coupled from the underlying technology, which is subject to rapid change, they may help to future-proof student knowledge and understanding of the ongoing and future development of computer and networking systems. The de-coupling is achieved through abstraction based upon bandwidth or throughput rather than the specific implementation of the underlying technologies. One of the underlying problems is that computing systems tend to change faster than most educational institutions can respond.

Abstraction and the use of B-Node models could help educational models respond more quickly to changes in the field, and can also help to introduce an element of future-proofing into the education of IT students. The importance of abstraction has been noted by the ACM, who state: “Levels of Abstraction: the nature and use of abstraction in computing; the use of abstraction in managing complexity, structuring systems, hiding details, and capturing recurring patterns; the ability to represent an entity or system by abstractions having different levels of detail and specificity” (ACM, 1991b). Bloom et al. also note the importance of abstraction, listing under the heading “Knowledge of the universals and abstractions in a field” the objective: “Knowledge of the major schemes and patterns by which phenomena and ideas are organized. These are the large structures, theories, and generalizations which dominate a subject field or problems. These are the highest levels of abstraction and complexity” (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956, p. 203). Abstractions can be applied to computer and networking technology to provide students with common fundamental concepts regardless of the particular underlying technological implementation, and to avoid the rapid obsolescence of detailed knowledge of modern computer and networking technology implementation and hands-on skills. Again, the ACM note that “enduring computing concepts include ideas that transcend any specific vendor, package or skill set... While skills are fleeting, fundamental concepts are enduring and provide long lasting benefits to students, critically important in a rapidly changing discipline” (ACM, 2001, p. 70). These abstractions can also be reinforced by experiential learning aligned with commercial practices.

In this context, the other possibly major contribution of new knowledge provided by this thesis is an efficient, scalable and flexible model for assessing the hands-on skills and understanding of IT students. This is a form of Competency-Based Assessment (CBA), which has been successfully tested as part of this research and subsequently implemented at ECU. This is the first time within this field that this specific type of research has been undertaken within the university sector in Australia. Hands-on experience and understanding can become outdated, hence the need for the future-proofing provided by B-Node models. The three major research questions of this study are:

• Is it possible to develop a new, high-level abstraction model for use in CNT education?
• Is it possible to have CNT curricula that are more directly relevant to both student and employer expectations without suffering from rapid obsolescence?
• Can an effective, efficient and meaningful assessment be undertaken to test students' hands-on skills and understanding?

The ACM Special Interest Group on Data Communication (SIGCOMM) workshop report on computer networking, curriculum designs and educational challenges notes a list of teaching approaches: “... the more 'hands-on' laboratory approach versus the more traditional in-class lecture-based approach; the bottom-up approach towards subject matter versus the top-down approach” (Kurose, Leibeherr, Ostermann, & Ott-Boisseau, 2002, para. 1). Bandwidth considerations are approached from the PC hardware level and at each of the seven layers of the International Standards Organisation (ISO) Open Systems Interconnection (OSI) reference model. It is believed that this research is of significance to computing education; however, further research is needed.
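To make the B-Node idea concrete, here is a minimal sketch, assuming a B-Node is characterized purely by its sustained throughput and that a system is a chain of such nodes; the class and attribute names are ours, since the thesis describes B-Nodes conceptually rather than as code.

from dataclasses import dataclass

@dataclass
class BNode:
    # A bandwidth node: any subsystem (disk, bus, NIC, protocol layer)
    # abstracted by its throughput alone, independent of implementation.
    name: str
    bandwidth_mbps: float  # sustained throughput in megabits per second

def bottleneck(path):
    # A chain of B-Nodes is limited by its slowest element.
    return min(path, key=lambda node: node.bandwidth_mbps)

# Hypothetical figures for a student exercise; swap in measured values
# as technology changes -- the model itself stays the same.
pc = [BNode("disk", 1_600), BNode("memory bus", 25_000), BNode("NIC", 1_000)]
b = bottleneck(pc)
print(f"bottleneck: {b.name} at {b.bandwidth_mbps:,.0f} Mb/s")

The point of the abstraction is visible in the comment: the hardware figures go stale, but the bottleneck reasoning students learn does not.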

    Robust design of deep-submicron digital circuits

    As the probability of faults in digital circuits increases, systems developed for critical environments such as nuclear power plants, aircraft, and space applications must be certified to industrial standards. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF is one of the largest electricity producers in the world and operates many nuclear power plants. The instrumentation and control systems used in these plants are based on electronic devices, which must be certified to industrial standards such as IEC 62566, IEC 60987 and IEC 61513 because of the criticality of the nuclear environment. In particular, the use of programmable devices such as FPGAs is challenging, since the functionality of the device is defined by the designer only after its physical design. Designing circuits to operate in such critical environments is becoming a great challenge with technology scaling: these circuits have to pass a number of tests and analysis procedures in order to be qualified for operation. In nuclear power plants, safety is a very high-priority constraint; reliability is a central consideration in the standards above, and methods to analyze and improve circuit reliability are required. This dissertation therefore introduces methods to analyze and to improve the reliability of circuits in order to facilitate their qualification according to the aforementioned technical standards. Concerning reliability analysis, we first present a fault-injection-based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate circuit reliability taking into account the ability of a given application to tolerate errors. Concerning reliability improvement, two different strategies to selectively harden a circuit are proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.
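The fault-injection analysis can be illustrated in a few lines: a minimal Monte Carlo sketch, assuming a toy three-gate netlist evaluated as plain Boolean functions and a fault model that inverts a single gate output; the circuit and gate names are illustrative choices, not the tool described in the dissertation.

import random

GATES = ("and1", "or1", "xor1")

def run_circuit(x0, x1, x2, faulty_gate=None):
    # Evaluate a toy netlist; an injected fault inverts one gate's
    # output where it is produced, so the error propagates downstream.
    def out(name, value):
        return (not value) if name == faulty_gate else value
    and1 = out("and1", x0 and x1)
    or1 = out("or1", x1 or x2)
    return out("xor1", and1 != or1)

def estimate_reliability(trials=100_000):
    # Fraction of random (input vector, fault site) pairs for which
    # the faulty output still matches the fault-free output.
    ok = 0
    for _ in range(trials):
        x = [random.random() < 0.5 for _ in range(3)]
        if run_circuit(*x, faulty_gate=random.choice(GATES)) == run_circuit(*x):
            ok += 1
    return ok / trials

print(f"estimated reliability under single faults: {estimate_reliability():.3f}")

The same loop, pointed at an application-level correctness check instead of exact output equality, captures the dissertation's idea of crediting errors that a given application can tolerate.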

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely on the continued functioning of many electronic devices for our everyday welfare, usually embedding integrated circuits that are becoming ever cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but need to be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional-safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the order they appear in this thesis, are as follows:

1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors, so it is important to understand fault occurrence in the memory array. In practice, the logical and physical array representations differ due to an optimized design that adds enhancements to the device, namely scrambling. This part proposes an accurate memory diagnosis by presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault-model hypothesis. Several SRAM failing syndromes were analyzed as case studies, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually, and the results were confirmed by real photos taken with a microscope.

2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional: the former usually benefit from embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client; the latter can be applied during mission with minimal impact on performance, but are penalized by high generation time. However, functional test patterns can serve different goals in functional mission mode. Part III of this PhD thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes:

a. Functional Stress Patterns: suitable for optimizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization.

b. Functional Power-Hungry Patterns: suitable for determining functional peak power, in order to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.

c. Software-Based Self-Test (SBST) Patterns: combine the potential of structural patterns with functional ones, allowing periodic execution during mission. In addition, external hardware communicating with a devised SBST was proposed; it increases fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns.

An automatic functional test pattern generator exploiting an evolutionary algorithm, maximizing metrics related to stress, power, and fault coverage, was employed in the above-mentioned approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared to older methodologies while significantly increasing the desired metrics.

3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for fast parallel computation, used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide design description code due to content secrecy, so commercial fault injectors based on a GPGPU design model are unfeasible, leaving costly radiation tests as the only available resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips in memory elements of a real GPGPU. It exploits a software debugger tool and uses the C-CUDA grammar to determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
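The bit-flip primitive at the heart of such an injector is easy to show. Below is a minimal sketch, assuming we corrupt the IEEE-754 bit pattern of a single-precision variable the way the tool corrupts values in GPU memory; the function name is ours, and the real injector acts through a CUDA debugger on a running kernel rather than in-process.

import random
import struct

def flip_bit_float(value, bit=None):
    # Flip one bit of a single-precision float: reinterpret the value
    # as a 32-bit pattern, XOR one bit, and reinterpret it back.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    if bit is None:
        bit = random.randrange(32)  # uniform fault site: our assumption
    (corrupted,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return corrupted

x = 1.0
print(flip_bit_float(x, bit=31))  # sign bit: 1.0 -> -1.0
print(flip_bit_float(x, bit=23))  # exponent LSB: 1.0 -> 0.5

Injecting such a corrupted value into one thread's variable and re-running the redundant matrix multiplication or FFT then reveals whether the algorithm's redundancy checks catch the error.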

    Digital design techniques for dependable High-Performance Computing

    The abstract is in the attachment.

    Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures

    The revolution in chip manufacturing processes spanning five decades has spread high-performance, energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improve system reliability. Herein, we first propose optimized soft-error-resilient designs to improve the robustness of sub-micron computing systems. The proposed approaches were developed to deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Secondly, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems was investigated, to highlight the approach that realizes favorable attributes such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve intriguing features such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to concurrently tolerate Multiple Node Upsets (MNUs), to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives cooperate to increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply-scaled devices operating at low voltages.
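The dissertation's contributions are hardware latch and datapath circuits, but the principle behind redundancy-based SE mitigation can be sketched in software: a minimal word-level Triple Modular Redundancy voter, shown only to illustrate how a single upset copy is out-voted, not as a model of the proposed circuits.

def tmr_vote(a, b, c):
    # Bitwise majority over three redundant copies of a word: at each
    # bit position, the value held by at least two copies wins, so a
    # single-event upset in one copy cannot reach the output.
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
upset = word ^ (1 << 4)  # an SEU flips bit 4 in one copy
assert tmr_vote(word, word, upset) == word
print(f"voted output: {tmr_vote(word, word, upset):#010b}")

Note that a classic voter like this masks single upsets only; tolerating the double and multiple node upsets targeted in the dissertation requires the more elaborate redundant structures it proposes.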

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University, exploring recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Convertisseurs modulaires multiniveaux pour le transport d'énergie électrique en courant continu haute tension

    This work was performed in the frame of a collaboration between the LAboratoire PLAsma et Conversion d'Énergie (LAPLACE), University of Toulouse, and the Second University of Naples (SUN). It was supported by the Rongxin Power Electronics Company (China) and concerns the use of multilevel converters in High-Voltage Direct Current (HVDC) transmission. For more than one hundred years, the generation, transmission, distribution and use of electrical energy have been principally based on AC systems. HVDC systems were considered some 50 years ago for technical and economic reasons. Nowadays, it is well known that HVDC is more convenient than AC for overhead transmission lines longer than about 800 km, and this break-even distance decreases to 50 km for underground or submarine cables. Over the twenty-first century, HVDC transmission will be a key point in green electric-energy development. Due to the limited current capability of semiconductors and electrical cables, high-power applications require high-voltage converters (up to 500 kV). Thanks to the development of high-voltage semiconductor devices, it is now possible to build AC/DC converters in the GW power range. Multilevel voltage-source converters allow working at high voltage while drawing a quasi-sinusoidal voltage waveform. Classical multilevel topologies such as NPC and Flying-Capacitor VSIs were introduced twenty years ago and are nowadays widely used in medium-power applications such as traction drives. For high-voltage AC/DC conversion, the Modular Multilevel Converter (MMC), proposed ten years ago by Professor R. Marquardt (University of Munich, Germany), appears particularly interesting for HVDC transmission. Building on the MMC principle, this thesis proposes different topologies of elementary cells which make the high-voltage AC/DC converter more flexible with respect to current and voltage reversibility. The document is organized as follows. First, present-day HVDC power systems are introduced: the conventional configurations of Current Source Converters (CSCs) and Voltage Source Converters (VSCs) are shown, and the most attractive topologies for VSC-HVDC systems are analyzed. The operating principle of the MMC is then presented, and the sizing of the reactive devices is developed considering both open-loop and closed-loop control. Different topologies of elementary cells offering various current- or voltage-reversibility properties on the DC side are proposed. To compare these topologies, an analytical approach to power-loss evaluation is developed, which makes the efficiency calculation fast and direct. A case study considers the HVDC connection of an off-shore wind farm platform, with a nominal power of 100 MW and a DC-bus voltage of 160 kV. The MMC topologies are rated considering press-pack IGBT and IGCT devices. Simulations validate the analytical calculations and also allow analyzing fault conditions. The study is carried out considering a classical PWM control with interleaved carriers. Finally, in order to validate the calculations and simulation results, a 10 kW three-phase prototype was built; it includes 18 commutation cells, and its control system is based on a DSP-FPGA platform.
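As a back-of-the-envelope illustration of MMC arm sizing for the 160 kV case study, here is a minimal sketch; the per-submodule voltage and the design margin are our assumptions for illustration, not figures taken from the thesis.

import math

def submodules_per_arm(v_dc, v_sm, margin=1.1):
    # Each MMC arm must be able to block the full DC-bus voltage, so
    # the submodule count is roughly v_dc / v_sm, padded by a margin
    # for redundancy and capacitor-voltage ripple (the margin value is
    # illustrative).
    return math.ceil(margin * v_dc / v_sm)

# 160 kV DC bus from the case study; 1.6 kV per submodule is our
# assumption for a press-pack IGBT operating voltage.
print(submodules_per_arm(v_dc=160e3, v_sm=1.6e3))  # -> 110 submodules

More submodules per arm means more output voltage levels and a smoother quasi-sinusoidal waveform, which is precisely the property that makes the MMC attractive at HVDC voltage levels.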