200 research outputs found

    Techniques d'abstraction pour l'analyse et la mitigation des effets dus Ă  la radiation

    Get PDF
    The main objective of this thesis is to develop techniques that can be used to analyze and mitigate the effects of radiation-induced soft errors in industrial-scale integrated circuits. To achieve this goal, several methods have been developed based on analyzing the design at higher levels of abstraction. These techniques address both sequential and combinational SER.

    Fault-injection simulation remains the primary method for analyzing the effects of soft errors. In this thesis, techniques which significantly speed up fault-injection simulations are presented. Soft errors in flip-flops are typically mitigated by selectively replacing the most critical flip-flops with hardened implementations. Selecting an optimal set to harden is a compute-intensive problem, and the second contribution consists of a clustering technique which significantly reduces the number of fault injections required to perform selective mitigation.

    In terrestrial applications, the effect of soft errors in combinational logic has been fairly small. This effect is known to be growing, yet few techniques exist which can quickly estimate the extent of combinational SER for an entire integrated circuit. The third contribution of this thesis is a hierarchical approach to combinational soft error analysis.

    Systems-on-chip are often developed by re-using design blocks that come from multiple sources. In this context, there is a need to develop and exchange reliability models. The final contribution of this thesis consists of an application-specific modeling language called RIIF (Reliability Information Interchange Format). This language is able to model how faults at the gate level propagate up to the block and chip level. Work is underway to standardize the RIIF modeling language as well as to extend it beyond the modeling of radiation-induced failures.

    In addition to the main axis of research, some tangential topics were studied in collaboration with other teams. One of these consisted in the development of a novel approach for protecting ternary content-addressable memories (TCAMs), a special type of memory important in networking applications. The second supplemental project resulted in an algorithm for quickly generating approximate redundant logic which can protect combinational networks against permanent faults. Finally, an approach for reducing the detection time for errors in the configuration RAM of field-programmable gate arrays (FPGAs) was outlined.

    Radiation effects can cause failures in integrated circuits. When a subatomic particle deposits charge in the sensitive regions of a transistor, it produces a current pulse. This pulse can then flip a bit, or propagate through a network of combinational logic before being sampled by a downstream flip-flop. Depending on the state of the circuit at the moment of the particle strike, and depending on the application, this may or may not produce an observable failure. Among radiation-induced events, only a small fraction generates failures, and determining this fraction is essential to predicting system reliability. Indeed, there are many reasons why a disturbance may be masked, and it is moreover sometimes difficult to specify exactly what constitutes an error. Added to this is the fact that integrated circuits contain billions of transistors.

    As is often the case in computer-aided design, hierarchical approaches and abstraction techniques make it possible to find solutions. This thesis therefore proposes several new techniques for analyzing radiation-induced effects. The first technique accelerates fault-injection simulations by detecting when a fault has been removed from the system, allowing the simulation to be stopped early. The second technique groups the elements of a circuit with similar functions into clusters; an analysis can then be performed at the cluster level, identifying the clusters that are the most critical and therefore need to be hardened, which greatly reduces the computation time. The third technique analyzes the effects of transient faults in combinational circuits: the transient-fault sensitivity of cells, as well as the masking effects within frequently used blocks, can be computed in advance, and these models can then be combined to analyze the sensitivity of large circuits. The final contribution of this thesis is the definition of a new modeling language called RIIF (Reliability Information Interchange Format). This language describes the fault rates of simple components as a function of their operating environment. These simple components can then be combined, making it possible to model how their faults propagate into system-level failures. Moreover, the use of a standard language facilitates the exchange of reliability data between industrial partners.

    Beyond the main contributions, this thesis also addresses techniques for protecting ternary content-addressable memories (TCAMs), to which classical protection approaches (error-correcting codes) do not apply directly. One of the proposed techniques uses a data structure that can detect, in a statistical manner, when a result is incorrect; the detection probability can be controlled by the number of bits allocated to this structure. Another technique uses a built-in current sensor (BICS) to direct a background scrubbing process straight to the region affected by an error. The final contribution is an algorithm that synthesizes redundant combinational logic to protect combinational circuits against transient faults.

    Taken together, these techniques facilitate the analysis of errors caused by radiation effects in integrated circuits, in particular for very large circuits composed of blocks from multiple vendors. Techniques for better selecting which flip-flops to harden and approaches for protecting TCAMs were also studied.
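
    The early-termination idea behind the first technique can be illustrated with a minimal sketch. The toy circuit, the lockstep comparison, and all identifiers below are hypothetical stand-ins for the thesis's industrial-scale setup: a faulty copy of the design is simulated alongside a golden copy, and the run is stopped as soon as the two states re-converge, i.e., the injected fault has been deleted from the system.

        // Minimal sketch of early-terminating fault-injection simulation.
        // The 8-bit shift register stands in for a real netlist; everything
        // here is a hypothetical illustration, not code from the thesis.
        #include <cstdint>
        #include <cstdio>

        struct ToyCircuit {
            uint8_t state = 0;
            void step(uint64_t cycle) {
                // shift register fed by a deterministic input stream
                state = static_cast<uint8_t>((state << 1) | (cycle & 1));
            }
        };

        // Simulate golden and faulty copies in lockstep; stop as soon as
        // the faulty state re-converges, i.e. the fault has been masked.
        bool fault_masked(unsigned bit, uint64_t max_cycles) {
            ToyCircuit golden, faulty;
            faulty.state ^= static_cast<uint8_t>(1u << bit);  // inject flip
            for (uint64_t c = 0; c < max_cycles; ++c) {
                golden.step(c);
                faulty.step(c);
                if (golden.state == faulty.state)
                    return true;   // fault deleted: terminate run early
            }
            return false;          // fault survived: potential soft error
        }

        int main() {
            // The flipped bit shifts out after at most 8 cycles, so every
            // injection into this toy circuit is eventually masked.
            std::printf("masked: %s\n", fault_masked(3, 100) ? "yes" : "no");
        }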

    Cross layer reliability estimation for digital systems

    Get PDF
    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost.

    One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency, or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should instead be paid to tailoring reliability-improvement techniques to a system's actual requirements, yielding cost-effective solutions that favor the success of the product on the market.

    Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware, and software layers) to implement efficient cross-layer fault-mitigation mechanisms. Fault-tolerance mechanisms are implemented at different layers, from the technology level up to the software layer, carefully optimizing the system by exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability of a selected design early in the design cycle. Accurate and early reliability estimates would enable exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost, and reliability.

    This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors, and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains
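
    As a concrete illustration of the cross-layer idea, a first-order system-level failure-rate estimate is commonly obtained by derating a raw technology-level fault rate through the masking factor of each higher layer. The sketch below shows this multiplication chain; the factor names and values are generic placeholders, not results from the thesis.

        // First-order cross-layer SER estimate: a raw technology-level FIT
        // rate is attenuated by the masking factor of each layer above it.
        // Factor values below are placeholders, not data from the thesis.
        #include <cstdio>

        int main() {
            double raw_fit = 1000.0;  // technology: raw upsets, FIT (failures / 1e9 h)
            double tder    = 0.4;     // circuit: timing derating (latching window)
            double lder    = 0.5;     // logic: logical masking
            double avf     = 0.3;     // architecture: architectural vulnerability factor
            double pvf     = 0.6;     // software: program vulnerability factor

            double system_fit = raw_fit * tder * lder * avf * pvf;
            std::printf("estimated system-level rate: %.1f FIT\n", system_fit); // 36.0
        }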

    Hardware design and CAD for processor-based logic emulation systems.

    Get PDF

    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    Get PDF
    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, with reduced physical dimensions and higher speed. Electronic devices used in a wide range of applications such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector have changed the way we live. However, recent studies reveal that further downscaling of the transistor size at nano-scale technology nodes leads to major challenges.

    Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in a nano-scale node is expected to experience more failures in its lifetime than if it were designed in a larger technology node. Such failures can lead to serious consequences ranging from financial losses to loss of human life. Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues impacting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers.

    Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or temporarily change the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered to be the main source of soft errors. However, recent experiments show that with further technology downscaling, the contribution of SETs to the overall soft error rate is remarkable, and in high-frequency systems it might exceed that of SEUs [1], [2]. In order to minimize the impact of soft errors, the impact of SETs needs to be modeled, predicted, and mitigated. However, despite considerable progress towards efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is due to the fact that modeling and analyzing the non-functional properties of SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels. Thus, in high-level analysis, many assumptions about SET behavior are usually made, which impacts the accuracy of the generated results. Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques
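
    One ingredient of such SET analyses is latching-window masking: a transient pulse causes an upset only if it overlaps the setup/hold window of a downstream flip-flop around a clock edge. The sketch below implements a common first-order model, assuming a pulse arrival time uniformly distributed over the clock period and requiring the pulse to cover the whole window; it is a generic textbook approximation, not the formal model developed in the thesis.

        // First-order latching-window masking model for an SET pulse.
        // Generic approximation with placeholder timing values; not the
        // thesis's formal propagation model.
        #include <algorithm>
        #include <cstdio>

        double capture_probability(double pulse_ns, double tsu_ns,
                                   double th_ns, double tclk_ns) {
            // pulse must cover the whole setup+hold window; arrival time
            // is uniform over the clock period
            double p = (pulse_ns - (tsu_ns + th_ns)) / tclk_ns;
            return std::clamp(p, 0.0, 1.0);
        }

        int main() {
            // 150 ps pulse, 30 ps setup, 20 ps hold, 1 ns clock
            std::printf("P(capture) = %.3f\n",
                        capture_probability(0.15, 0.03, 0.02, 1.0)); // 0.100
        }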

    Systematische Transaction-Level-Kommunikations-Modellierung mit SystemC

    Get PDF
    An emerging approach to embedded system design is to assemble systems from a library of hardware and software component models (IP, intellectual property) using a system description language such as SystemC. SystemC allows describing the communication among IPs in terms of abstract operations (transactions). The promise is that with transaction-level modeling (TLM), future systems-on-chip with one billion transistors and more can be composed out of IPs as simply as playing with LEGO bricks. Reality, however, falls far short of this ideal: each IP vendor promotes its own proprietary interface standard, and the provided design tools lack compatibility, so heterogeneous IPs cannot be integrated efficiently.

    A novel generic interconnect fabric for TLM is presented which aims at enabling interoperation between models at different levels of abstraction (mixed-mode) and models with different interfaces (heterogeneous components), with as little overhead as possible. A generic, protocol-independent representation of transactions is developed, along with an abstraction-level formalism. This approach is shown to support the systematic simulation of state-of-the-art buses and networks-on-chip such as IBM CoreConnect and PCI Express over several levels of TLM abstraction. A layered simulation framework for SystemC, GreenBus, is developed to examine the proposed concepts. The thesis discusses new implementation techniques for communication modeling with SystemC which outperform existing approaches in terms of flexibility, simulation accuracy, and performance. Based on these techniques, advanced concepts for TLM-based hardware/software co-design and FPGA prototyping are examined. Several experiments and a video-processor case study highlight the efficiency of the approach and show its applicability in a TLM design flow.
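
    The notion of a generic, protocol-independent transaction can be sketched as a plain payload record whose protocol-specific attributes ride along as named extensions, in the spirit of GreenBus-style generic payloads. The classes below are invented for illustration and are not the GreenBus or SystemC TLM API.

        // Sketch of a protocol-independent transaction record in the
        // spirit of generic TLM payloads. Invented for illustration.
        #include <cstdint>
        #include <cstdio>
        #include <map>
        #include <string>
        #include <vector>

        enum class Command { Read, Write };

        struct GenericTransaction {
            Command              command = Command::Read;
            uint64_t             address = 0;
            std::vector<uint8_t> data;              // payload bytes
            bool                 ok      = false;   // completion status
            // Protocol-specific attributes (burst type, PCIe TLP fields,
            // ...) ride along as named extensions, not fixed struct fields.
            std::map<std::string, uint64_t> extensions;
        };

        // A bus model at any abstraction level consumes the same record;
        // only the timing detail of its implementation differs.
        struct BusModelInterface {
            virtual void transport(GenericTransaction& tx) = 0;
            virtual ~BusModelInterface() = default;
        };

        struct SimpleMemoryBus : BusModelInterface {
            std::map<uint64_t, uint8_t> mem;
            void transport(GenericTransaction& tx) override {
                for (std::size_t i = 0; i < tx.data.size(); ++i) {
                    uint64_t a = tx.address + i;
                    if (tx.command == Command::Write) mem[a] = tx.data[i];
                    else                              tx.data[i] = mem[a];
                }
                tx.ok = true;
            }
        };

        int main() {
            SimpleMemoryBus bus;
            GenericTransaction wr{Command::Write, 0x1000, {0xAB}, false, {}};
            bus.transport(wr);
            GenericTransaction rd{Command::Read, 0x1000, {0}, false, {}};
            bus.transport(rd);
            std::printf("read back 0x%02X\n", rd.data[0]);  // 0xAB
        }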

    Soft Error Analysis and Mitigation at High Abstraction Levels

    Get PDF
    Radiation-induced soft errors, as one of the major reliability challenges in future technology nodes, have to be carefully taken into consideration during design space exploration. This thesis presents several novel and efficient techniques for soft error evaluation and mitigation at high abstraction levels, i.e., from the register-transfer level up to the behavioral algorithmic level. The effectiveness of the proposed techniques is demonstrated with extensive synthesis experiments

    High-level synthesis of dataflow programs for heterogeneous platforms: design flow tools and design space exploration

    Get PDF
    The growing complexity of digital signal processing applications implemented in programmable logic and embedded processors makes a compelling case for the use of high-level methodologies in their design and implementation. Past research has shown that, for complex systems, raising the level of abstraction does not necessarily come at a cost in performance or resource requirements; in fact, high-level synthesis tools supporting such abstraction often rival, and on occasion improve on, low-level design. In spite of these successes, high-level synthesis still relies on programs being written with the target, and often the synthesis process itself, in mind. In other words, imperative languages such as C or C++, the languages most commonly used for high-level synthesis, are either modified or constrained to a subset in order to make parallelism explicit. In addition, a behavioral description that unifies hardware and software design remains an elusive goal for heterogeneous platforms. A promising behavioral description capable of expressing both sequential and parallel applications is RVC-CAL, a dataflow programming language that permits design abstraction, modularity, and portability.

    The objective of this thesis is to provide a high-level synthesis solution for RVC-CAL dataflow programs and an RVC-CAL design flow for heterogeneous platforms. The main contributions of this thesis are: a high-level synthesis infrastructure that supports the full RVC-CAL specification; an action-selection strategy supporting parallel reads and writes of lists of tokens in hardware synthesis; dynamic fine-grained profiling of synthesized dataflow programs; an iterative design space exploration framework that permits performance estimation, analysis, and optimization of heterogeneous platforms; and, finally, a clock-gating strategy that reduces dynamic power consumption. Experimental results on all stages of the provided design flow demonstrate the capabilities of the tools for high-level synthesis, hardware/software co-design, design space exploration, and power optimization for reconfigurable hardware. Consequently, this work proves the viability of complex system design and implementation using dataflow programming, not only for system-level simulation but also for real heterogeneous implementations
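
    The dataflow model underlying RVC-CAL can be sketched as actors whose actions fire only when their input-token and output-space requirements are satisfied. The minimal actor below is a generic illustration of such a firing rule; the actor, token rates, and scheduler loop are invented for this example, not output of the thesis's tools.

        // Minimal sketch of a dataflow actor with a token-rate firing
        // rule, in the spirit of RVC-CAL action selection. Generic
        // illustration only.
        #include <cstdio>
        #include <deque>

        using Fifo = std::deque<int>;

        // An actor that consumes two tokens and produces their sum: it
        // may fire only when enough input tokens and output space exist.
        struct AddActor {
            static constexpr std::size_t kConsume = 2, kProduce = 1,
                                         kOutCap = 4;

            bool can_fire(const Fifo& in, const Fifo& out) const {
                return in.size() >= kConsume &&
                       out.size() + kProduce <= kOutCap;
            }
            void fire(Fifo& in, Fifo& out) {
                int a = in.front(); in.pop_front();
                int b = in.front(); in.pop_front();
                out.push_back(a + b);
            }
        };

        int main() {
            Fifo in{1, 2, 3, 4, 5}, out;
            AddActor actor;
            while (actor.can_fire(in, out))  // scheduler: fire while enabled
                actor.fire(in, out);
            for (int t : out) std::printf("%d ", t);  // prints: 3 7
            std::printf("\n");
        }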