
    Improving reconfigurable systems reliability by combining periodical test and redundancy techniques: a case study

    This paper revisits some traditional techniques from the fields of fault tolerance and digital-circuit testing and introduces them to the field of reconfigurable computer systems. The target area is on-board spacecraft electronics, as this class of application is a good candidate for reconfigurable computing technology. Fault-tolerance strategies are used so that the system can adapt itself to the severe conditions found in space. In addition, the paper describes some problems, and possible solutions, for the use of reconfigurable components based on programmable logic in space applications.
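    As a rough illustration of how modular redundancy and periodic testing can be combined, the sketch below votes over three redundant copies of a computation and periodically applies a known test vector to each copy; a replica that fails the check would be scheduled for reconfiguration. All names and the placeholder computation are hypothetical and not taken from the paper.

        /* Illustrative only: triple modular redundancy (TMR) combined with a
           periodic self-test; a failing replica is flagged for reconfiguration. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_REPLICAS 3

        /* One redundant instance of the processing logic (in a real design,
           each replica would occupy a separate reconfigurable region). */
        static uint32_t replica_compute(int id, uint32_t input)
        {
            (void)id;
            return input * 2u + 1u;          /* placeholder computation */
        }

        /* Majority voter, assuming at most one faulty replica: returns the
           agreed value and reports which replica (if any) disagreed. */
        static uint32_t vote(const uint32_t out[NUM_REPLICAS], int *faulty)
        {
            *faulty = -1;
            if (out[0] == out[1] || out[0] == out[2]) {
                if (out[0] != out[1])      *faulty = 1;
                else if (out[0] != out[2]) *faulty = 2;
                return out[0];
            }
            *faulty = 0;
            return out[1];                   /* out[1] == out[2] here */
        }

        /* Periodic test: apply a known input and compare against the
           expected golden response. */
        static int periodic_self_test(int id)
        {
            const uint32_t test_vector = 0x5Au;
            const uint32_t golden = test_vector * 2u + 1u;
            return replica_compute(id, test_vector) == golden;
        }

        int main(void)
        {
            uint32_t outputs[NUM_REPLICAS];
            int faulty;

            for (int i = 0; i < NUM_REPLICAS; ++i)
                outputs[i] = replica_compute(i, 7u);
            printf("voted output: %u\n", vote(outputs, &faulty));

            for (int i = 0; i < NUM_REPLICAS; ++i)
                if (!periodic_self_test(i))
                    printf("replica %d failed self-test: reconfigure\n", i);
            return 0;
        }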

    On-Line Dependability Enhancement of Multiprocessor SoCs by Resource Management

    This paper describes a new approach towards the dependable design of homogeneous multiprocessor SoCs, illustrated on a satellite-navigation application. First, the NoC dependability is functionally verified via embedded software. Then the Xentium processor tiles are periodically verified via on-line self-testing techniques, using a new IIP Dependability Manager. Based on the Dependability Manager results, faulty tiles are electronically excluded and replaced by fault-free spare tiles via on-line resource management. This integrated approach enables fast electronic fault detection/diagnosis and repair, and hence high system availability. The dependability application runs in parallel with the actual application, resulting in a very dependable system. All parts have been verified by simulation.
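    To make the tile-replacement idea more concrete, here is a minimal sketch of an on-line resource-management loop that self-tests the active tiles and swaps a failing tile for a fault-free spare. The tile structure, test routine, and manager loop are purely illustrative; they are not the paper's actual Dependability Manager interface.

        /* Hypothetical sketch of on-line resource management with spare tiles:
           tiles are periodically self-tested; a tile that fails is excluded and
           its role is taken over by a fault-free spare (illustrative only). */
        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_TILES  8    /* active processing tiles */
        #define NUM_SPARES 2    /* spare tiles kept in reserve */

        struct tile {
            int  id;
            bool active;        /* currently executing part of the application */
            bool faulty;        /* failed its last on-line self-test */
        };

        /* Placeholder for an on-line structural/functional self-test. */
        static bool run_self_test(const struct tile *t)
        {
            return t->id != 3;  /* pretend tile 3 develops a fault */
        }

        /* Manager loop: test every active tile and replace a faulty one
           with the first available fault-free spare. */
        static void manage(struct tile tiles[], int total)
        {
            for (int i = 0; i < total; ++i) {
                if (!tiles[i].active || tiles[i].faulty)
                    continue;
                if (run_self_test(&tiles[i]))
                    continue;                    /* tile is healthy */

                tiles[i].faulty = true;
                tiles[i].active = false;         /* electronically exclude */
                for (int s = 0; s < total; ++s) {
                    if (!tiles[s].active && !tiles[s].faulty) {
                        tiles[s].active = true;  /* bring spare on-line */
                        printf("tile %d replaced by spare %d\n", i, s);
                        break;
                    }
                }
            }
        }

        int main(void)
        {
            struct tile tiles[NUM_TILES + NUM_SPARES];
            for (int i = 0; i < NUM_TILES + NUM_SPARES; ++i)
                tiles[i] = (struct tile){ .id = i, .active = i < NUM_TILES,
                                          .faulty = false };
            manage(tiles, NUM_TILES + NUM_SPARES);
            return 0;
        }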

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.

    A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software

    Context: Today's safety-critical systems are increasingly reliant on software, which is becoming responsible for most of their critical functions. Many different safety analysis techniques have been developed to identify system hazards; FTA and FMEA are the ones most commonly used by safety analysts. Recently, STPA has been proposed with the goal of better coping with complex systems, including software. Objective: This research aimed at quantitatively comparing these three safety analysis techniques with regard to their effectiveness, applicability, understandability, ease of use, and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master's and bachelor's students applying the three techniques to three safety-critical systems: train door control, anti-lock braking, and traffic collision avoidance. Results: The results showed no statistically significant difference between the techniques in terms of applicability, understandability, and ease of use, but a significant difference in terms of effectiveness and efficiency. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA identifies more software safety requirements than the traditional techniques FTA and FMEA, but needs more time to carry out for safety analysts with little or no prior experience. Comment: 10 pages, 1 figure. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15), ACM, 2015.

    Experimental evaluation of two software countermeasures against fault attacks

    Injection of transient faults can be used as a way to attack embedded systems. On embedded processors such as microcontrollers, several studies have shown that such transient fault injection with glitches or electromagnetic pulses can corrupt either the data loaded from memory or the assembly instructions executed by the circuit. Some countermeasure schemes relying on temporal redundancy have been proposed to handle this issue; several of them add this redundancy at the assembly-instruction level. In this paper, we perform a practical evaluation of two of those countermeasure schemes using a pulsed electromagnetic fault injection process on a 32-bit microcontroller. We provide some necessary conditions for an efficient implementation of those countermeasure schemes in practice. We also evaluate their efficiency and highlight their limitations. To the best of our knowledge, no experimental evaluation of the security of such instruction-level countermeasure schemes has been published yet. Comment: 6 pages. 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), Arlington, United States (2014).
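    The temporal-redundancy idea behind such schemes can be illustrated, very roughly, at C level: each sensitive load or computation is performed twice and the two results are compared before use. The countermeasures evaluated in the paper operate on assembly instructions precisely so that the duplication survives compilation; the sketch below, with hypothetical helper names, only illustrates the principle.

        /* Simplified illustration of temporal redundancy against transient
           faults: a value is read or computed twice and the results are
           compared; a mismatch indicates a likely fault injection.
           Note: a compiler may merge the duplicated addition, which is why
           real schemes duplicate at the assembly-instruction level. */
        #include <stdint.h>
        #include <stdlib.h>

        static void fault_detected(void)
        {
            /* Typical reactions: erase sensitive data, reset, or lock up. */
            abort();
        }

        /* Redundant load: read the value twice and check both reads agree.
           'volatile' forces two distinct memory accesses. */
        static uint32_t secure_load(const volatile uint32_t *addr)
        {
            uint32_t a = *addr;
            uint32_t b = *addr;
            if (a != b)
                fault_detected();
            return a;
        }

        /* Redundant computation: perform the operation twice on the same
           inputs and compare the results before using them. */
        static uint32_t secure_add(uint32_t x, uint32_t y)
        {
            volatile uint32_t r1 = x + y;
            volatile uint32_t r2 = x + y;
            if (r1 != r2)
                fault_detected();
            return r1;
        }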

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free; this fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    International White Book on DER Protection : Review and Testing Procedures

    This white book provides insight into the issues surrounding the impact of increasing levels of DER on generator and network protection, and the resulting necessary improvements in protection testing practices. Particular focus is placed on the ever-increasing number of inverter-interfaced DER installations and the challenges of utility network integration. The white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards, and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing, which can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure, and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the process of de-risking new DER protection technologies. To support the issues discussed in the white book, a comparative case study of UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with the standardisation and approval mechanisms adopted by different countries.

    Functional Testing Approaches for "BIFST-able" tlm_fifo

    The evolution of Electronic System Level design methodologies allows a wider use of Transaction-Level Modeling (TLM). TLM is a high-level approach to modeling digital systems that emphasizes separating communication among modules from the details of the functional units. This paper explores different functional testing approaches for the implementation of Built-In Functional Self-Test facilities in the TLM primitive channel tlm_fifo. In particular, it focuses on three different test approaches, based respectively on a finite state machine model of tlm_fifo, on functional fault models, and on march tests.
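    As a rough illustration of the march-test idea applied to a FIFO, the sketch below fills a simple ring buffer (standing in for tlm_fifo) with a data-background pattern, drains it while checking order and values, and repeats with the complemented pattern. The paper targets SystemC/TLM, so this plain-C version and all of its names are hypothetical.

        /* Hypothetical march-style functional test for a FIFO, illustrated on
           a plain C ring buffer standing in for tlm_fifo. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define FIFO_DEPTH 16

        struct fifo {
            uint32_t buf[FIFO_DEPTH];
            int head, tail;
        };

        static void fifo_put(struct fifo *f, uint32_t v)
        {
            f->buf[f->tail] = v;
            f->tail = (f->tail + 1) % FIFO_DEPTH;
        }

        static uint32_t fifo_get(struct fifo *f)
        {
            uint32_t v = f->buf[f->head];
            f->head = (f->head + 1) % FIFO_DEPTH;
            return v;
        }

        /* One march "element": fill the FIFO with a data-background pattern,
           then drain it and check that every word comes back unchanged and in
           order. Running it with a pattern and its complement exercises both
           0->1 and 1->0 transitions in the storage cells. */
        static bool march_element(struct fifo *f, uint32_t background)
        {
            for (int i = 0; i < FIFO_DEPTH; ++i)
                fifo_put(f, background ^ (uint32_t)i);
            for (int i = 0; i < FIFO_DEPTH; ++i)
                if (fifo_get(f) != (background ^ (uint32_t)i))
                    return false;
            return true;
        }

        int main(void)
        {
            struct fifo f = {0};
            bool ok = march_element(&f, 0x00000000u) &&
                      march_element(&f, 0xFFFFFFFFu);
            printf("fifo march test: %s\n", ok ? "PASS" : "FAIL");
            return !ok;
        }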
