2,025 research outputs found

    A Controller Area Network Layer for Reconfigurable Embedded Systems

    Dependable and fault-tolerant computing has been actively pursued as a research area since the 1980s in various fields involving the development of safety-critical applications. The ability of a system to provide reliable functional service as per its design is a key paradigm in dependable computing. To provide reliable service, fault-tolerant systems have to support dynamic reconfiguration to enable recovery from errors (induced by faults) or graceful degradation in case of service failures. Reconfigurable distributed architectures provide a platform for developing fault-tolerant systems, and they require an embedded network that is inherently fault-tolerant and capable of handling the movement of tasks between nodes/processors within the system during dynamic reconfiguration. The embedded network should provide mechanisms for deterministic message transfer under faulty conditions and support fault detection/isolation mechanisms within the network framework. This thesis describes the design, implementation and validation of an embedded networking layer using Controller Area Network (CAN) to support reconfigurable embedded systems.
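    As a rough illustration of the heartbeat-style fault detection such an embedded CAN layer might build on (not the actual layer developed in the thesis), the C sketch below broadcasts a periodic heartbeat frame over Linux SocketCAN so that peer nodes can flag a node as failed when its heartbeat deadline is missed. The interface name, CAN identifiers, node ID, and period are assumptions chosen for the example.

```c
/* Hypothetical heartbeat sender over Linux SocketCAN: peers that stop
 * receiving frames with this node's heartbeat ID within a deadline can
 * mark the node as failed. All identifiers and timings are illustrative. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

#define HEARTBEAT_BASE_ID 0x700u  /* assumed identifier range for heartbeats */
#define NODE_ID           0x05u   /* assumed identifier of this node */

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strcpy(ifr.ifr_name, "can0");                 /* assumed CAN interface */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr;
    memset(&addr, 0, sizeof addr);
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    struct can_frame hb;
    memset(&hb, 0, sizeof hb);
    hb.can_id  = HEARTBEAT_BASE_ID + NODE_ID;
    hb.can_dlc = 1;                               /* one-byte payload */

    for (;;) {
        hb.data[0]++;                             /* rolling alive counter */
        if (write(s, &hb, sizeof hb) < 0)
            perror("write");
        usleep(100 * 1000);                       /* 100 ms heartbeat period */
    }
}
```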

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures, which is why such devices must be tested to guarantee correct behavior at all times. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to improve test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system, unlike the normal functional mode, passing through otherwise unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have been mainly focused on the industrial perspective of SBST. The problems of providing an effective development flow and of integrating SBST into the available operating systems have been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, if so, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
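    To make the SBST idea concrete, the hedged C sketch below mimics a tiny test program: it exercises ALU operations with deterministic operand patterns, compacts the results into a signature, and compares it against a golden value. The patterns, signature scheme, and golden value are placeholders; real SBST programs of the kind described in the thesis target specific fault models and are usually written at assembly level.

```c
/* Illustrative SBST-style test routine (not a test program from the thesis):
 * exercises ALU add/xor/and paths with deterministic patterns, compacts the
 * results into a rotating signature, and compares it against a golden value.
 * In practice the golden value is recorded from a fault-free reference run. */
#include <stdint.h>
#include <stdio.h>

static const uint32_t patterns[] = {
    0x00000000u, 0xFFFFFFFFu, 0xAAAAAAAAu, 0x55555555u, 0x0F0F0F0Fu
};
#define N_PAT (sizeof patterns / sizeof patterns[0])

volatile uint32_t sink;   /* volatile keeps the ALU operations observable */

static uint32_t run_alu_test(void)
{
    uint32_t sig = 0xFFFFFFFFu;
    for (unsigned i = 0; i < N_PAT; i++) {
        for (unsigned j = 0; j < N_PAT; j++) {
            uint32_t a = patterns[i], b = patterns[j];
            sink = a + b;  sig ^= sink;  sig = (sig << 1) | (sig >> 31);
            sink = a ^ b;  sig ^= sink;  sig = (sig << 1) | (sig >> 31);
            sink = a & b;  sig ^= sink;  sig = (sig << 1) | (sig >> 31);
        }
    }
    return sig;
}

int main(void)
{
    const uint32_t golden = 0xDEADBEEFu;   /* placeholder golden signature */
    uint32_t sig = run_alu_test();
    printf("ALU signature 0x%08X -> %s\n", (unsigned)sig,
           sig == golden ? "PASS" : "FAIL");
    return sig != golden;
}
```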

    Models for Co-Design of Heterogeneous Dynamically Reconfigurable SoCs

    The design of Systems-on-Chip is becoming an increasingly difficult challenge due to the continuous exponential evolution of the targeted complex architectures and applications. Thus, seamless methodologies and tools are required to resolve SoC design issues. This chapter presents a high-level component-based approach for expressing system reconfigurability in SoC co-design. A generic model of reactive control is presented for Gaspard2, a SoC co-design framework. Control integration at different levels of the framework is explored, along with a comparison of their advantages and disadvantages. Afterwards, control integration at another high abstraction level is investigated, which proves to be more beneficial than the other alternatives. This integration makes it possible to introduce reconfigurability features into modern SoCs. Finally, a case study is presented for validation purposes. The presented work is based on Model-Driven Engineering (MDE) and the UML MARTE profile for modeling and analysis of real-time embedded systems.
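    The reactive control notion used for reconfigurability can be pictured as a mode automaton: the next configuration is a function of the current mode and an observed event. The C sketch below is only an illustrative stand-in for that idea; the chapter itself expresses such controllers with MDE and the UML MARTE profile, and the modes and events here are invented.

```c
/* Plain-C stand-in for a mode automaton driving reconfiguration: the next
 * configuration depends on the current mode and the observed event. */
#include <stdio.h>

typedef enum { MODE_HIGH_QUALITY, MODE_LOW_POWER } op_mode_t;
typedef enum { EV_BATTERY_LOW, EV_BATTERY_OK } sys_event_t;

static const char *mode_name(op_mode_t m)
{
    return m == MODE_HIGH_QUALITY ? "HighQuality" : "LowPower";
}

/* Transition function of the reactive controller. */
static op_mode_t next_mode(op_mode_t current, sys_event_t ev)
{
    switch (ev) {
    case EV_BATTERY_LOW: return MODE_LOW_POWER;
    case EV_BATTERY_OK:  return MODE_HIGH_QUALITY;
    default:             return current;
    }
}

int main(void)
{
    op_mode_t mode = MODE_HIGH_QUALITY;
    sys_event_t trace[] = { EV_BATTERY_LOW, EV_BATTERY_LOW, EV_BATTERY_OK };

    for (int i = 0; i < 3; i++) {
        mode = next_mode(mode, trace[i]);
        printf("event %d -> configuration %s\n", i, mode_name(mode));
    }
    return 0;
}
```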

    FPGA based remote code integrity verification of programs in distributed embedded systems

    The explosive growth of networked embedded systems has made ubiquitous and pervasive computing a reality. However, there are still a number of challenges to its widespread adoption, including scalability, availability, and, especially, software security. Among the different challenges in software security, the problem of remote code integrity verification is still waiting for efficient solutions. This paper proposes the use of reconfigurable computing to build a consistent architecture for generating attestations (proofs) of code integrity for an executing program and for delivering them to the designated verification entity. Remote dynamic update of reconfigurable devices is also exploited to increase the complexity of mounting attacks in a real-world environment. The proposed solution fits well with embedded devices, which are nowadays commonly equipped with reconfigurable hardware components that are exploited to solve different computational problems.
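    A minimal sketch of the challenge-response attestation pattern such a scheme builds on is shown below; it is not the paper's FPGA-based architecture. The verifier supplies a nonce, and the device returns a digest of the nonce concatenated with its code region. A non-cryptographic FNV-1a hash and a hard-coded code region are used purely to keep the sketch self-contained; a real design would compute a cryptographic hash or MAC inside the reconfigurable hardware.

```c
/* Minimal challenge-response attestation sketch. The code region below is a
 * placeholder; on a device it would be read from the program's text segment. */
#include <stdint.h>
#include <stdio.h>

static uint64_t fnv1a(uint64_t h, const uint8_t *p, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 0x100000001B3ULL;     /* FNV-1a 64-bit prime */
    }
    return h;
}

/* Placeholder bytes standing in for the executing program's code region. */
static const uint8_t code_region[] = { 0x13, 0x05, 0xA0, 0x02, 0x67, 0x80 };

static uint64_t attest(uint64_t nonce)
{
    uint64_t h = 0xCBF29CE484222325ULL;           /* FNV offset basis */
    h = fnv1a(h, (const uint8_t *)&nonce, sizeof nonce);
    return fnv1a(h, code_region, sizeof code_region);
}

int main(void)
{
    uint64_t nonce = 0x1234ABCDULL;               /* chosen by the verifier */
    printf("attestation(0x%llX) = 0x%016llX\n",
           (unsigned long long)nonce, (unsigned long long)attest(nonce));
    return 0;
}
```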

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
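    The health-metric-driven isolation idea can be sketched in software as follows; the dissertation performs the equivalent steps with dynamic partial reconfiguration on the FPGA fabric, whereas here both the reconfiguration and the health metric are simulated. Each candidate region is relocated to a spare in turn, and the region whose relocation restores the health metric is declared faulty, within a bounded number of trials. The threshold and fault model are illustrative.

```c
/* Software-only simulation of health-metric-driven fault isolation: trial
 * "reconfigurations" relocate one region at a time onto a spare and check
 * whether the observed health metric recovers. Values are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define N_REGIONS 4
#define HEALTH_THRESHOLD 30.0   /* e.g., minimum acceptable PSNR in dB */

static bool region_faulty[N_REGIONS] = { false, false, true, false };

/* Simulated health metric: degraded while any active region is faulty. */
static double health_metric(int relocated)
{
    for (int r = 0; r < N_REGIONS; r++)
        if (r != relocated && region_faulty[r])
            return 24.0;        /* degraded PSNR */
    return 36.0;                /* healthy PSNR */
}

int main(void)
{
    if (health_metric(-1) >= HEALTH_THRESHOLD) {
        puts("system healthy, no isolation needed");
        return 0;
    }
    /* Bounded search: at most N_REGIONS trial reconfigurations. */
    for (int r = 0; r < N_REGIONS; r++) {
        /* "Reconfigure": move region r's function onto the spare. */
        if (health_metric(r) >= HEALTH_THRESHOLD) {
            printf("region %d isolated as faulty after %d reconfigurations\n",
                   r, r + 1);
            return 0;
        }
    }
    puts("fault not isolated within the bound");
    return 1;
}
```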

    Model-based dependability analysis: state-of-the-art, challenges and future outlook

    Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques in order to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models compositionally from component failure models using the topology of the system; the other utilizes design models - typically state automata - to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths, and challenges, as well as recent developments within these fields. We also discuss emerging trends in integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
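    A toy example of the compositional paradigm: component failure probabilities are combined through the system topology (an AND gate for redundancy, an OR gate for series dependence) into a system-level figure. The probabilities and structure below are invented for illustration; the surveyed tools derive such models automatically from architectural annotations.

```c
/* Tiny fault-tree evaluation: redundant sensors (AND) feeding a single
 * controller (OR with the sensor pair). Probabilities are invented. */
#include <stdio.h>

int main(void)
{
    /* Assumed independent component failure probabilities per mission. */
    double sensor_a = 1e-3, sensor_b = 1e-3, controller = 1e-4;

    /* Redundant sensors: both must fail (AND gate). */
    double sensors_fail = sensor_a * sensor_b;

    /* System fails if the sensor pair fails OR the controller fails. */
    double system_fail = sensors_fail + controller - sensors_fail * controller;

    printf("P(sensor pair fails) = %.3e\n", sensors_fail);
    printf("P(system failure)    = %.3e\n", system_fail);
    return 0;
}
```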

    Modeling and formal verification of probabilistic reconfigurable systems

    In this thesis, we propose a new approach for the formal modeling and verification of adaptive probabilistic systems. Dynamically reconfigurable systems are the trend in future technological systems, such as flight control systems, vehicle electronic systems, and manufacturing systems. In order to meet user and environmental requirements, such a dynamically reconfigurable system has to actively adjust its configuration at run-time by modifying its components and connections when changes are detected in the internal/external execution environment. On the other hand, these changes may violate memory usage, required energy, and real-time constraints, since the behavior of the system is unpredictable. They might also make the system's functions unavailable for some time and cause potential harm to human life or large financial losses. Thus, updating a system with any new configuration requires that the reconfigured system fully satisfy the related constraints. We introduce the GR-TNCES formalism, an extension of Petri nets, for the optimal functional and temporal specification of probabilistic reconfigurable systems under resource constraints. It enables the optimal specification of the probabilistic, energy, and memory constraints of such a system. To formally verify the correctness and safety of such a probabilistic system specification, and the non-violation of its properties, an automatic transformation from GR-TNCES models into PRISM models is introduced, so that the resulting models can be checked with the established probabilistic model checker PRISM. Moreover, a new logic, XCTL, is also proposed to express the properties to be checked and to formally verify reconfigurable systems; it enables the formal certification of incomplete and reconfigurable systems. A new version of the software ZIZO is also proposed to model, simulate, and verify such GR-TNCES models. To prove its relevance, the latter was applied to case studies: it was used to model and simulate the behavior of an IPv4 protocol to prevent energy and memory resource violations, and to optimize the energy consumption of an automotive skid conveyor.
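    The kind of question PRISM answers for the generated models can be illustrated with a small hand-written discrete-time Markov chain over system configurations: what is the probability of reaching the safe reconfigured state within k steps? The chain and its probabilities below are invented; in the thesis such models are produced automatically from GR-TNCES specifications rather than written by hand.

```c
/* Transient analysis of an invented 3-state DTMC over system configurations:
 * repeatedly multiply the state distribution by the transition matrix and
 * report the probability mass in the absorbing "safe" state. */
#include <stdio.h>

#define N 3   /* 0 = running, 1 = reconfiguring, 2 = safe (absorbing) */

int main(void)
{
    double P[N][N] = {
        { 0.90, 0.10, 0.00 },   /* running: may trigger a reconfiguration  */
        { 0.05, 0.15, 0.80 },   /* reconfiguring: usually completes safely */
        { 0.00, 0.00, 1.00 },   /* safe: absorbing                         */
    };
    double dist[N] = { 1.0, 0.0, 0.0 };   /* start in "running" */
    double next[N];

    for (int step = 1; step <= 10; step++) {
        for (int j = 0; j < N; j++) {
            next[j] = 0.0;
            for (int i = 0; i < N; i++)
                next[j] += dist[i] * P[i][j];
        }
        for (int j = 0; j < N; j++)
            dist[j] = next[j];
        printf("P(safe within %2d steps) = %.4f\n", step, dist[2]);
    }
    return 0;
}
```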

    Characterization of real-time computers

    A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have traditionally been designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent with the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to the controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications: the number-power tradeoff, and fault-masking and synchronization.
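    A toy numerical illustration of tying a control performance functional to controller response time: a discretized first-order plant under proportional control is simulated with the control signal applied only after the computer's response time has elapsed, and a quadratic cost is accumulated. All plant, gain, and timing values below are invented; the report develops such measures formally.

```c
/* Toy link between controller response time and a control performance
 * functional: the plant x' = a*x + u is driven by a proportional controller
 * whose output takes effect only after the response time, and the quadratic
 * cost J = sum (x^2 + u^2) dt is accumulated over the run. */
#include <stdio.h>

#define DT    0.001               /* simulation step, seconds */
#define T_END 5.0
#define BUF   1024                /* must exceed max delay in steps */

static double run_cost(double response_time)
{
    const double a = 0.5, k = 2.0;    /* unstable plant, proportional gain */
    int delay_steps = (int)(response_time / DT + 0.5);
    double u_hist[BUF] = { 0.0 };     /* control values awaiting application */
    double x = 1.0, cost = 0.0;
    int steps = (int)(T_END / DT);

    for (int n = 0; n < steps; n++) {
        double u = -k * x;                       /* computed now...        */
        u_hist[(n + delay_steps) % BUF] = u;     /* ...applied after delay */
        double u_applied = u_hist[n % BUF];
        x += DT * (a * x + u_applied);
        cost += DT * (x * x + u_applied * u_applied);
    }
    return cost;
}

int main(void)
{
    for (double rt = 0.0; rt <= 0.2001; rt += 0.05)
        printf("response time %.2f s -> cost J = %.3f\n", rt, run_cost(rt));
    return 0;
}
```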

    Using embedded hardware monitor cores in critical computer systems

    The integration of FPGA devices in many different architectures and services makes monitoring and real-time detection of errors an important concern in FPGA system design. A monitor is a tool, or a set of tools, that facilitates analytic measurements in observing a given system. The goal of these observations is usually performance analysis and optimisation, or surveillance of the system. However, System-on-Chip (SoC) based designs leave few points at which to attach external tools such as logic analysers. Thus, an embedded error detection core that allows observation of critical system nodes (such as processor cores and buses) should reinforce the operation of the FPGA-based system, in order to prevent system failures. The core should not interfere with system performance and must ensure timely detection of errors. This thesis is an investigation into how a robust hardware-monitoring module can be efficiently integrated in a target PCI board (with FPGA-based application processing features) which is part of a critical computing system. [Continues.]
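    The kind of check such a monitor core might apply can be modelled behaviourally as below; the thesis implements the monitor in FPGA logic rather than software, and the address range and latency bound here are placeholder values. Each observed bus transaction is tested against an allowed address window and a worst-case acknowledge latency, and violations are flagged as errors.

```c
/* Behavioural C model of a bus-watching monitor: flag transactions that
 * touch addresses outside the expected window or exceed a latency bound. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t address;        /* target address of the transaction   */
    uint32_t latency_cycles; /* cycles until the slave acknowledged */
} bus_txn_t;

#define RAM_BASE    0x10000000u  /* assumed legal address window */
#define RAM_LIMIT   0x1FFFFFFFu
#define MAX_LATENCY 64u          /* assumed worst-case acknowledge time */

static bool txn_ok(const bus_txn_t *t)
{
    bool in_range = t->address >= RAM_BASE && t->address <= RAM_LIMIT;
    bool in_time  = t->latency_cycles <= MAX_LATENCY;
    return in_range && in_time;
}

int main(void)
{
    bus_txn_t trace[] = {
        { 0x10000040u,  8 },     /* normal access                */
        { 0x3000FFF0u,  5 },     /* out-of-range address -> flag */
        { 0x10002000u, 90 },     /* hung transfer -> flag        */
    };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        printf("txn %zu: %s\n", i, txn_ok(&trace[i]) ? "ok" : "ERROR detected");
    return 0;
}
```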