220 research outputs found

    A Holistic Approach to Functional Safety for Networked Cyber-Physical Systems

    Functional safety is a significant concern in today's networked cyber-physical systems, such as connected machines, autonomous vehicles, and intelligent environments. Simulation is a well-known methodology for the assessment of functional safety. Simulation models of networked cyber-physical systems are highly heterogeneous, spanning the digital hardware, analog hardware, and network domains. Current functional safety assessment focuses mainly on digital hardware failures, devoting minor attention to analog hardware and none at all to the interconnecting network. This work argues that, in networked cyber-physical systems, dependability must be verified not only for the nodes in isolation but also by taking into account their interaction through the communication channel. For this reason, this work proposes a holistic methodology for simulation-based safety assessment in which safety mechanisms are tested in a simulation environment reproducing the high-level behavior of digital hardware, analog hardware, and network communication. The methodology relies on three main automatic processes: 1) abstraction of analog models to transform them into system-level descriptions, 2) synthesis of network infrastructures to combine multiple cyber-physical systems, and 3) multi-domain fault injection in the digital, analog, and network domains. Ultimately, the flow produces a homogeneous, optimized description written in C++ for fast and reliable simulation. The focus of this thesis is performing extensive fault simulation and evaluating different functional safety metrics, e.g., fault and diagnostic coverage of all the safety mechanisms.
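
    The multi-domain injection step lends itself to a compact illustration. Below is a minimal C++ sketch, not the thesis' actual flow: a two-node system in which a saboteur can force a digital stuck-at-1, an analog offset, or a network packet drop. All names, values, and the fault set are invented for illustration.

```cpp
// Minimal sketch of multi-domain fault injection in a homogeneous C++ model;
// all names, values, and the fault set are invented for illustration.
#include <iostream>

enum class Fault { None, DigitalStuckAt1, AnalogOffset, NetworkDrop };

struct Node {
    bool   digital_out = false;  // digital domain (e.g., a status flag)
    double analog_out  = 0.0;    // analog domain (e.g., a sensor voltage)
};

// A "saboteur" perturbs each hardware domain according to the active fault.
Node step(Node n, Fault f) {
    if (f == Fault::DigitalStuckAt1) n.digital_out = true;  // stuck-at-1
    if (f == Fault::AnalogOffset)    n.analog_out += 0.5;   // parametric drift
    return n;
}

// The communication channel is faultable too: a drop fault loses the message.
bool transmit(const Node& sender, Fault f, Node& receiver) {
    if (f == Fault::NetworkDrop) return false;              // packet lost
    receiver = sender;
    return true;
}

int main() {
    for (Fault f : {Fault::None, Fault::DigitalStuckAt1,
                    Fault::AnalogOffset, Fault::NetworkDrop}) {
        Node sender{false, 1.0}, receiver;
        sender = step(sender, f);
        bool delivered = transmit(sender, f, receiver);
        // A safety mechanism would compare observed vs. expected behavior
        // here to classify each fault as detected or undetected.
        std::cout << "fault=" << int(f) << " delivered=" << delivered
                  << " received_value=" << receiver.analog_out << "\n";
    }
}
```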

    Moving Towards Analog Functional Safety

    Over the past century, the exponential growth of the semiconductor industry has led to the creation of tiny and complex integrated circuits, e.g., sensors, actuators, and smart power systems. Innovative techniques are needed to ensure the correct functionality of the analog devices that are ubiquitous in every smart system. The ISO 26262 standard on functional safety in the automotive context specifies that fault injection is necessary to validate all electronic devices. For decades, efforts to standardize fault modeling, injection, and simulation focused mainly on digital circuits and disregarded analog ones. An initial attempt is being made with the IEEE P2427 draft standard, which has started to give this field a structured and formal organization. In this context, this thesis proposes new fault models, injection, and abstraction methodologies for analog circuits. The faults proposed by the IEEE P2427 draft standard are first evaluated to understand the fault behaviors they produce during simulation. Moreover, a novel approach is presented for modeling realistic stuck-on/off defects based on oxide defects. These new defect models are required because digital stuck-at fault models, in which a transistor is frozen in the on-state or off-state, may not apply well to analog circuits, where even a slight variation can create deviations of several orders of magnitude. Then, to validate the proposed defect models, a novel predictive fault grouping based on faulty AC matrices is applied to group faults with equivalent behaviors. The proposed fault grouping method is computationally cheap because it avoids DC or transient simulations with injected faults and limits itself to faulty AC simulations. Building on these AC simulations, two methods for grouping faults with the same frequency response are presented: the first is an AC-based grouping that exploits the potential of S-parameter ports, while the second is a circle-based grouping built on the circle-fitting method applied to the extracted AC matrices. Finally, an open-source framework is presented for fault injection and manipulation. This framework relies on shared semantics for reading, writing, and manipulating transistor-level designs; its ultimate goal is to read an input design written in one syntax and write the same design out in another syntax. As a use case for the proposed framework, a process of analog fault injection is discussed; this activity requires adding, removing, or replacing nodes, components, or even entire sub-circuits. The framework is entirely written in C++, its APIs are also interfaced with Python, and it is open-source and available on GitHub. The last part of the thesis presents abstraction methodologies that can abstract transistor-level models into Verilog-AMS models, and Verilog-AMS piecewise and nonlinear models into C++. These abstracted models can be integrated into heterogeneous systems, where the simulation of heterogeneous components embedded in a Virtual Platform (VP) needs to be fast and accurate.
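
    To make the grouping idea concrete, here is a minimal C++ sketch of frequency-response-based fault grouping under invented data: faults whose AC responses match within a tolerance fall into the same equivalence group. The tolerance, the RC test circuit, and all function names are assumptions, not the thesis' actual algorithm or API.

```cpp
// Sketch: group faults whose AC frequency responses are indistinguishable.
#include <complex>
#include <iostream>
#include <vector>

using Response = std::vector<std::complex<double>>;  // H(jw) per frequency point

// Two faults are "equivalent" if their AC responses (same frequency grid)
// match within a tolerance at every point.
bool sameResponse(const Response& a, const Response& b, double tol = 1e-3) {
    for (size_t i = 0; i < a.size(); ++i)
        if (std::abs(a[i] - b[i]) > tol) return false;
    return true;
}

// Greedy grouping: each fault joins the first group whose representative
// it matches, otherwise it starts a new group.
std::vector<std::vector<int>> groupFaults(const std::vector<Response>& r) {
    std::vector<std::vector<int>> groups;
    for (int i = 0; i < (int)r.size(); ++i) {
        bool placed = false;
        for (auto& g : groups)
            if (sameResponse(r[g.front()], r[i])) { g.push_back(i); placed = true; break; }
        if (!placed) groups.push_back({i});
    }
    return groups;
}

int main() {
    auto lowpass = [](double rc) {            // H(jw) = 1 / (1 + jwRC)
        Response h;
        for (double w = 1; w <= 1e6; w *= 10)
            h.push_back(1.0 / std::complex<double>(1.0, w * rc));
        return h;
    };
    // Faults 0 and 1 barely change RC (equivalent); fault 2 shifts it visibly.
    std::vector<Response> faulty = {lowpass(1e-3), lowpass(1e-3 + 1e-9), lowpass(2e-3)};
    for (auto& g : groupFaults(faulty)) {
        for (int f : g) std::cout << f << ' ';
        std::cout << "| ";
    }
    std::cout << "\n";  // expected grouping: "0 1 | 2 |"
}
```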

    Fault-based Analysis of Industrial Cyber-Physical Systems

    The fourth industrial revolution, called Industry 4.0, tries to bridge the gap between traditional Electronic Design Automation (EDA) technologies and the need to innovate in many industrial fields, e.g., automotive, avionics, and manufacturing. This complex digitalization process involves every industrial facility and comprises the transformation of methodologies, techniques, and tools to improve the efficiency of every industrial process. Enhancing functional safety in Industry 4.0 applications requires exploiting studies related to model-based and data-driven analyses of the deployed Industrial Cyber-Physical System (ICPS). An ICPS can be modeled at different abstraction levels, depending on the physical details included in the model and necessary to describe specific system behaviors. However, modeling is extremely complicated because an ICPS is composed of heterogeneous components belonging to different physical domains, e.g., digital, electrical, and mechanical. In addition, it is necessary to consider not only nominal behaviors but also faulty behaviors in order to perform more specific analyses, e.g., predictive maintenance of specific assets. Nevertheless, such faulty data are usually not present or not directly available from the industrial machinery. To overcome these limitations, constructing a virtual model of an ICPS extended with different classes of faults enables the characterization of the faulty behaviors those faults induce in the system. In the literature, these topics are addressed with non-uniform approaches and without standardized, automatic methodologies for describing and simulating faults in the different domains composing an ICPS. This thesis attempts to close these state-of-the-art gaps by proposing novel methodologies, techniques, and tools to: model and simulate analog and multi-domain systems; abstract low-level models to higher-level behavioral models; and monitor industrial systems based on the Industrial Internet of Things (IIoT) paradigm. Specifically, the proposed contributions involve the extension of state-of-the-art fault injection practices to improve ICPS safety, the development of frameworks for automating safety operations, and the definition of a monitoring framework for ICPSs. Overall, fault injection in analog and digital models is the state of the practice to ensure functional safety, as mentioned in the ISO 26262 standard specific to the automotive field. Starting from state-of-the-art defects defined for analog descriptions, new defects are proposed to enhance the IEEE P2427 draft standard for analog defect modeling and coverage. Moreover, different techniques to abstract a transistor-level model into a behavioral model are proposed to speed up the simulation of faulty circuits. Unlike the electrical domain, however, the mechanical domain has seen no extensive use of fault injection techniques; extending fault injection to the mechanical and thermal fields therefore supports the definition and evaluation of more reliable safety mechanisms. Hence, a taxonomy of mechanical faults is derived from the electrical domain by exploiting physical analogies. Furthermore, specific tools are built for automatically instrumenting different descriptions with multi-domain faults. The entire work is proposed as a basis for supporting the creation of increasingly resilient and secure ICPSs that must preserve functional safety in any operating context.
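
    As a rough illustration of deriving a mechanical fault taxonomy by physical analogy, the sketch below pairs elements and faults across the two domains using the classic force-voltage analogy; the specific taxonomy entries are illustrative assumptions, not the thesis' tables.

```cpp
// Illustrative pairing of electrical and mechanical elements/faults under
// the classic force-voltage analogy (force<->voltage, velocity<->current).
#include <iostream>
#include <map>
#include <string>

int main() {
    // Element analogies: each electrical primitive maps to a mechanical one.
    std::map<std::string, std::string> elements = {
        {"resistor R",  "viscous damper b"},
        {"inductor L",  "mass m"},
        {"capacitor C", "spring compliance 1/k"},
    };
    // Fault analogies follow from the element mapping (assumed examples).
    std::map<std::string, std::string> faults = {
        {"open circuit",    "broken linkage (no force transmitted)"},
        {"short circuit",   "seized joint (rigid coupling)"},
        {"resistive drift", "wear-induced friction increase"},
    };
    for (const auto& [e, m] : elements) std::cout << e << " <-> " << m << "\n";
    for (const auto& [f, m] : faults)   std::cout << f << " <-> " << m << "\n";
}
```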

    Multi-Domain Fault Models Covering the Analog Side of a Smart or Cyber-Physical System

    Over the last decade, the industrial world has been involved in a massive revolution guided by the adoption of digital technologies. In this context, complex systems like cyber-physical systems play a fundamental role, since they are designed and realized by composing heterogeneous components. The combined simulation of the behavioral models of these components makes it possible to reproduce the nominal behavior of the real system. Similarly, a smart system is a device that integrates heterogeneous components, but in a miniaturized form factor. Developing smart or cyber-physical systems in combination with faulty behaviors modeled for the different physical domains composing the system supports advanced functional safety assessment at the system level. A methodology is proposed to create and inject multi-domain fault models into the analog side of these systems, exploiting the physical analogy between the electrical and mechanical domains to infer a new mechanical fault taxonomy. Thus, standard electrical fault models are injected into the electrical part, while the derived mechanical fault models are injected directly into the mechanical part. The entire flow has been applied to two case studies: a direct-current motor connected to a gear train, and a three-axis accelerometer.
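
    A minimal sketch of the DC-motor case-study idea follows, assuming a textbook behavioral motor model and an invented wear fault (increased viscous friction); all parameter values are illustrative, not the paper's.

```cpp
// Behavioral DC-motor model with an injectable mechanical fault
// (wear-induced friction increase), integrated with forward Euler.
#include <iostream>

struct Motor {
    double R = 1.0, L = 0.5;      // armature resistance [ohm], inductance [H]
    double Kt = 0.01, Ke = 0.01;  // torque and back-EMF constants
    double J = 0.01, b = 0.1;     // rotor inertia, viscous friction
    double i = 0.0, w = 0.0;      // state: current [A], speed [rad/s]
    void step(double V, double dt) {
        double di = (V - R * i - Ke * w) / L;   // electrical equation
        double dw = (Kt * i - b * w) / J;       // mechanical equation
        i += di * dt;
        w += dw * dt;
    }
};

int main() {
    Motor nominal, faulty;
    faulty.b *= 5.0;  // mechanical fault: wear raises viscous friction 5x
    for (int k = 0; k < 20000; ++k) {   // 2 s of simulated time at dt = 0.1 ms
        nominal.step(12.0, 1e-4);
        faulty.step(12.0, 1e-4);
    }
    // The divergence between the two traces is what a safety mechanism or a
    // predictive-maintenance monitor would be designed to detect.
    std::cout << "steady-state speed: nominal=" << nominal.w
              << " faulty=" << faulty.w << " rad/s\n";
}
```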

    A Cross-level Verification Methodology for Digital IPs Augmented with Embedded Timing Monitors

    Smart systems implement the leading technology advances in the context of embedded devices. Current design methodologies are not suited to dealing with tightly interacting subsystems of different technological domains, namely analog, digital, discrete and power devices, MEMS, and power sources. The interaction effects between the components, and between the environment and the system, must be modeled and simulated at system level to achieve high performance. Focusing on the digital subsystem, additional design constraints have to be considered as a result of the integration of multi-domain subsystems in a single device. The main digital design challenges, combined with those emerging from the heterogeneous nature of the whole system, directly impact the performance, and hence the propagation delay, of the digital component. In this paper we propose a design approach to enhance the RTL model of a given digital component for integration in smart systems, and a methodology to verify the added features at system level. The design approach consists of "augmenting" the RTL model through the automatic insertion of delay sensors, which are capable of detecting and correcting timing failures. The verification methodology consists of an automatic two-step flow. First, the augmented model is abstracted to system level (i.e., SystemC TLM); second, mutants, which are code mutations emulating timing failures, are automatically injected into the abstracted model. Experimental results demonstrate the applicability of the proposed design and verification methodology and the efficiency of the resulting simulations.
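
    The mutant-injection step can be sketched in plain C++ (standing in for the abstracted SystemC TLM model): a compile-time mutation point, activated by an ID, emulates a timing failure by delivering a stale register value. The macro and the mutation operator are invented for illustration, not the paper's tooling.

```cpp
// Sketch of mutation-based verification: mutants emulating timing failures
// are compiled into the abstracted model and activated one at a time.
#include <cstdint>
#include <iostream>

static int g_active_mutant = 0;  // 0 = nominal behavior
#define MUTANT(id, faulty, nominal) (g_active_mutant == (id) ? (faulty) : (nominal))

// Abstracted datapath stage: a registered adder. Mutant 1 emulates a timing
// failure by delivering the previous (stale) result, as if the register
// missed its setup window and kept the old value.
struct Stage {
    uint32_t reg = 0;
    uint32_t eval(uint32_t a, uint32_t b) {
        uint32_t next = a + b;
        uint32_t out  = MUTANT(1, reg /* stale value */, next);
        reg = next;
        return out;
    }
};

int main() {
    for (int m : {0, 1}) {
        g_active_mutant = m;
        Stage s;
        // A delay sensor in the augmented RTL should flag mutant 1 here.
        std::cout << "mutant " << m << ": " << s.eval(2, 3)
                  << ' ' << s.eval(10, 20) << "\n";
    }
    // Nominal prints "5 30"; mutant 1 prints "0 5" (one-cycle-late values).
}
```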

    Detection, diagnosis and modeling of ESD-induced soft failures - a gate-level and mixed-signal approach

    Electronic systems are an indispensable part of people's lives today. However, the reliability of electronic systems can be threatened by external stimuli such as Electrostatic Discharges (ESDs). ESDs can either physically damage an electronic system or cause it to malfunction without damaging it. Therefore, considerable design work and qualification testing are needed by manufacturers to improve robustness against the negative effects of ESDs. Trial-and-error solution implementation has incurred huge labor and time costs for companies. Despite the ever-increasing effort devoted to solving ESD-related problems, field returns still happen, and a significant portion can be attributed to soft failures induced by system-level ESD. Although ESD-induced permanent failures are well studied and protection mechanisms have proven to work, existing studies on ESD-induced soft failures are all at the physical and transistor level. In this thesis, we studied ESD-induced soft failures by first conducting case studies that inject ESDs into physical devices and observe the application-level symptoms of the failures, and then performing simulation-based ESD injections on a well-known instruction-set architecture. For the first time, we correlated physical-level ESD events with high-level system behavior. We implemented a mixed-signal-simulation-based fault injection environment and device models that allow ESDs to be injected into target systems. By injecting different types of ESDs into the target system, we, for the first time, identified gate-level bit-flip patterns from a SPICE-level high-voltage event. Our experimental results show that the extent of register value corruption can be single-bit or widespread, and the manifested bit flips can affect the system in multiple ways. We also demonstrated low-cost protection measures for some of the resulting failures.
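
    As a hedged illustration of how ESD-induced soft failures might be emulated in software, the sketch below flips register bits in two patterns, single-bit and widespread; the pattern shapes and register contents are invented, not the thesis' measured data.

```cpp
// Emulating ESD-induced soft failures as register bit flips.
#include <cstdint>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);
    uint32_t regs[8] = {0x11111111, 0x22222222, 0x33333333, 0x44444444,
                        0x55555555, 0x66666666, 0x77777777, 0x88888888};

    // Pattern A: single-bit corruption in one register.
    regs[3] ^= 1u << (rng() % 32);

    // Pattern B: widespread corruption, e.g. the same clocked disturbance
    // flipping one bit position across several registers at once.
    unsigned pos = rng() % 32;
    for (int r = 0; r < 8; r += 2) regs[r] ^= 1u << pos;

    for (int r = 0; r < 8; ++r)
        std::cout << "r" << r << " = 0x" << std::hex << regs[r] << std::dec << "\n";
    // Downstream, an ISA-level simulator would run from this corrupted state
    // to see whether the flips crash, corrupt output, or stay silent.
}
```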

    Low-cost error detection through high-level synthesis

    System-on-chip design is becoming increasingly complex as technology scaling enables more and more functionality on a chip. This scaling and complexity have resulted in a variety of reliability and validation challenges, including logic bugs, hot spots, wear-out, and soft errors. To make matters worse, as we reach the limits of Dennard scaling, efforts to improve system performance and energy efficiency have resulted in the integration of a wide variety of complex hardware accelerators in SoCs. Thus, the challenge is to design complex, custom hardware that is efficient, but also correct and reliable. High-level synthesis shows promise to address the problem of complex hardware design by providing a bridge from the high-productivity software domain to the hardware design process. Much research has been done on high-level synthesis efficiency optimizations. This thesis shows that high-level synthesis also has the power to address validation and reliability challenges, through two solutions. One solution for circuit reliability is modulo-3 shadow datapaths: performing lightweight shadow computations in modulo-3 space for each main computation. We leverage the binding and scheduling flexibility of high-level synthesis to detect control errors through diverse binding and to minimize area cost through intelligent checkpoint scheduling and modulo-3 reducer sharing. We introduce logic and dataflow optimizations to further reduce cost. We evaluated our technique with 12 high-level synthesis benchmarks from the arithmetic-oriented PolyBench benchmark suite using FPGA-emulated netlist-level error injection. We observe coverages of 99.1% for stuck-at faults, 99.5% for soft errors, and 99.6% for timing errors, with a 25.7% area cost and negligible performance impact. Leveraging a mean error detection latency of 12.75 cycles (4150x faster than an end-result check) for soft errors, we also explore a rollback recovery method with an additional area cost of 28.0%, observing a 175x increase in reliability against soft errors. Another solution, for rapid post-silicon validation of accelerator designs, is Hybrid Quick Error Detection (H-QED): inserting signature generation logic in a hardware design to create a heavily compressed signature stream that captures the internal behavior of the design at a fine temporal and spatial granularity, for comparison with a reference set of signatures generated by high-level simulation to detect bugs. Using H-QED, we demonstrate an improvement in error detection latency (time elapsed from when a bug is activated to when it manifests as an observable failure) of two orders of magnitude and a threefold improvement in bug coverage compared to traditional post-silicon validation techniques. H-QED also uncovered previously unknown bugs in the CHStone benchmark suite, which is widely used by the HLS community. H-QED incurs less than 10% area overhead for the accelerator it validates, with negligible performance impact, and we also introduce techniques to minimize any possible intrusiveness introduced by H-QED.
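
    The modulo-3 shadow datapath admits a short worked example. The sketch below mirrors an addition in mod-3 space and shows why a single-bit flip in the main result is always caught: 2^k mod 3 is 1 or 2, never 0, so any flip disturbs the residue. This is a software illustration of the idea, not the thesis' hardware implementation.

```cpp
// Worked example: each main computation is mirrored by a 2-bit computation
// in mod-3 space; a mismatch between the reduced main result and the shadow
// result signals an error. The bit flip below is a deliberate injected fault.
#include <cstdint>
#include <iostream>

uint32_t mod3(uint64_t x) { return x % 3; }  // a reducer circuit in hardware

int main() {
    uint64_t a = 123456789, b = 987654321;

    uint64_t main_sum   = a + b;                     // main datapath
    uint32_t shadow_sum = (mod3(a) + mod3(b)) % 3;   // shadow datapath

    // Inject a single-bit error into the main result (emulating a soft error).
    uint64_t faulty_sum = main_sum ^ (1ull << 17);

    // Checkpoint: compare the reduced main result against the shadow result.
    std::cout << "nominal check: " << (mod3(main_sum)   == shadow_sum) << "\n"; // 1
    std::cout << "faulty  check: " << (mod3(faulty_sum) == shadow_sum) << "\n"; // 0
    // A flipped bit changes the value by +/- 2^k; since 2^k mod 3 is never 0,
    // every single-bit flip shifts the residue and is detected.
}
```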

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since these devices are very susceptible to Single Event Upsets, which leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured, improving as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still have to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of the SBFI and FFI experimental effort: the high complexity of modern designs raises this effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools; existing fault injection tools only partially cover individual stages of the design flow, usually being specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing the robustness evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA.

    Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
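
    A fault-injection campaign of the kind these processes rely on can be sketched generically; the loop below is a hypothetical driver with invented outcome classes and a placeholder experiment function, not the DAVOS toolkit's actual API.

```cpp
// Generic sketch of an SBFI/FFI campaign loop: for each fault in the list,
// run one injection experiment and tally the outcome class.
#include <iostream>
#include <string>
#include <vector>

enum class Outcome { Masked, SilentCorruption, Detected, Hang };

struct Fault { std::string target; double inject_time; };

// Placeholder for "run the simulation with this fault and classify it".
// In a real flow: restore a checkpoint, flip the target bit/signal at
// inject_time, resume, and compare traces against a golden run.
Outcome runExperiment(const Fault& f) {
    return f.target.find("fsm") != std::string::npos ? Outcome::Hang
                                                     : Outcome::Masked;
}

int main() {
    std::vector<Fault> campaign = {
        {"cpu/regfile/r5[3]", 1.2e-6}, {"cpu/fsm_state[0]", 2.0e-6}};
    int counts[4] = {0};
    for (const auto& f : campaign) counts[(int)runExperiment(f)]++;
    std::cout << "masked=" << counts[0] << " sdc=" << counts[1]
              << " detected=" << counts[2] << " hang=" << counts[3] << "\n";
}
```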

    Reliability Improvement of Sigma-Delta Analog-to-Digital Converters (Fiabilisation de convertisseurs analogique-numérique à modulation Sigma-Delta)

    This thesis concentrates on reliability-aware methodology development, simulation-based reliability analysis, and failure prediction for 65 nm CMOS analog and mixed-signal (AMS) ICs. Sigma-Delta modulators are the object of the reliability study at system level. The dominant ageing mechanisms, HCI and NBTI, as well as process variation, are studied and evaluated quantitatively at both circuit and system level. A hierarchical statistical approach is proposed to analyze the performance of Sigma-Delta modulators under ageing effects and process variations, with statistical methods combined into the analysis flow, in order to determine the reliability of this widely used class of component.
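
    As a loose illustration of the statistical flavor of such an analysis, the sketch below runs a Monte Carlo over process variation and applies a power-law threshold-voltage ageing shift (a commonly used NBTI-style form, delta_Vth = A * t^n); all constants are invented, not 65 nm CMOS data.

```cpp
// Monte Carlo over process variation combined with a power-law ageing shift;
// a sample "fails" when its aged threshold voltage exceeds the spec limit.
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> vth0(0.45, 0.02);  // process variation [V]
    const double A = 0.0015, n = 0.2;                   // ageing-law constants
    const double t = 10.0 * 365 * 24 * 3600;            // 10 years of stress [s]
    const double vth_max = 0.55;                        // spec limit [V]

    int fails = 0;
    const int N = 100000;
    for (int i = 0; i < N; ++i) {
        double vth_aged = vth0(rng) + A * std::pow(t, n);  // sample + ageing
        if (vth_aged > vth_max) ++fails;                   // parametric failure
    }
    std::cout << "failure fraction after 10 years: "
              << double(fails) / N << "\n";  // roughly 0.1 for these constants
}
```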