
    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
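    The priority escalation idea behind PURE can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical model (the Region and Function classes, the health-metric values, and the assignment rule are illustrative assumptions, not the dissertation's implementation): the healthiest reconfigurable regions are escalated to the highest-priority signal processing functions, while suspect regions are demoted toward slack.

```python
# Minimal sketch of priority-aware resource escalation in the spirit of PURE.
# All names, the health-metric model, and the assignment rule are illustrative
# assumptions, not the dissertation's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    """A reconfigurable region (active resource or dynamic spare)."""
    name: str
    health: float = 1.0  # runtime health metric in [0, 1]; 1.0 = fully healthy

@dataclass
class Function:
    """A signal-processing function with a priority (higher = more critical)."""
    name: str
    priority: int
    region: Optional[Region] = None

def escalate(functions, regions):
    """Assign the healthiest regions to the highest-priority functions.

    Suspect regions (low health) end up hosting the lowest-priority functions
    or are left unassigned, i.e. they are demoted to slack.
    """
    ranked_regions = sorted(regions, key=lambda r: r.health, reverse=True)
    ranked_functions = sorted(functions, key=lambda f: f.priority, reverse=True)
    for func, region in zip(ranked_functions, ranked_regions):
        func.region = region  # on real hardware this would trigger partial reconfiguration

if __name__ == "__main__":
    regions = [Region("R0", 0.95), Region("R1", 0.20), Region("R2", 0.90)]
    funcs = [Function("DCT", priority=3), Function("ME", priority=2),
             Function("FIR", priority=1)]
    escalate(funcs, regions)
    for f in funcs:
        print(f"{f.name} (priority {f.priority}) -> {f.region.name} "
              f"(health {f.region.health:.2f})")
```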

    Automating the IEEE std. 1500 compliance verification for embedded cores

    The IEEE 1500 standard for embedded core testing proposes a very effective solution for testing modern systems-on-chip (SoCs). It proposes a flexible hardware test wrapper architecture, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several IP providers have already announced compliance in both existing and future design blocks. In this paper we address the challenge of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE std. 1500. This is a mandatory step to fully trust the wrapper functionalities in applying the test sequences to the core. The proposed solution aims at implementing a verification framework allowing core providers and/or integrators to automatically verify the compliance of their products (sold or purchased) to the standard.
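    As a rough illustration of what such automated checks can look like, the sketch below verifies an already-parsed wrapper description against a few mandatory IEEE std. 1500 elements. The dictionary layout and the exact rule set are assumptions for illustration, not the paper's framework or a complete compliance suite.

```python
# Illustrative sketch of an automated compliance rule check, not the paper's
# actual framework. The wrapper description is assumed to be already parsed
# (e.g., from its CTL description) into a plain dictionary; the rule set below
# is a small, hedged subset of what a full IEEE std. 1500 checker would verify.

MANDATORY_INSTRUCTIONS = {"WS_BYPASS", "WS_EXTEST"}  # assumed mandatory instruction subset
MANDATORY_REGISTERS = {"WIR", "WBY"}                 # wrapper instruction and bypass registers
MANDATORY_PORTS = {"WSI", "WSO", "SelectWIR"}        # wrapper serial port subset

def check_wrapper(wrapper: dict) -> list:
    """Return a list of human-readable compliance violations (empty = passed)."""
    violations = []
    missing_instr = MANDATORY_INSTRUCTIONS - set(wrapper.get("instructions", []))
    if missing_instr:
        violations.append(f"missing mandatory instructions: {sorted(missing_instr)}")
    missing_regs = MANDATORY_REGISTERS - set(wrapper.get("registers", []))
    if missing_regs:
        violations.append(f"missing mandatory registers: {sorted(missing_regs)}")
    missing_ports = MANDATORY_PORTS - set(wrapper.get("ports", []))
    if missing_ports:
        violations.append(f"missing mandatory wrapper ports: {sorted(missing_ports)}")
    return violations

if __name__ == "__main__":
    core_wrapper = {
        "instructions": ["WS_BYPASS", "WS_EXTEST", "WP_INTEST"],
        "registers": ["WIR", "WBY", "WBR"],
        "ports": ["WSI", "WSO", "SelectWIR", "WRCK", "WRSTN"],
    }
    problems = check_wrapper(core_wrapper)
    print("compliant" if not problems else "\n".join(problems))
```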

    Formal Verification and Validation of AADL Models

    Safety-critical systems are increasingly difficult to comprehend due to their rising complexity. Methodologies, tools and modeling formalisms have been developed to overcome this. Component-based design is an important paradigm that is shared by many of them.

    Selecting a suitable system architecture for testing and integration

    A system architecture is selected in the early design phases of a product. During this selection process, a trade-off is made between the most important architectural views. The required system functionality is realized in an architecture which is maintainable, extendible, manufacturable, testable and integratable. This work investigates how an architecture can be selected such that it is testable and integratable. The elements of an architecture which is suitable for testing and integration are introduced first. These elements are: components, interfaces between components, and a layering. The division of the system into components determines how the system can be integrated and how many integration steps are required. Moreover, not all components need to be selected for system-level integration and testing: some low-risk components are integrated and tested at a lower level, or not tested at all. The selection of components to be considered for integration and testing also influences which interfaces are considered. In addition to the interfaces that result from component selection, the choice of interface infrastructure also influences integration and testing. The interface infrastructure can reduce or increase the number of interfaces in the system, and it may require specific connectors to be developed, resulting in additional risk and more required testing. Finally, a layering defines how the system, consisting of components and interfaces, is clustered. This layering reduces the complexity of the system and therefore the complexity of the integration and test plan. The layering for integration and testing can be defined fairly late in the development process, just before integration and testing begin. It can also differ from the usual organizational or functional layerings of a system, and several layerings can be defined and used alongside each other. Guidelines and examples of suitable selections of components, interface infrastructure, and layerings will be given in the presentation.
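    As a simple illustration of how component selection, interfaces and layering drive the integration and test effort, the sketch below computes two rough metrics for a hypothetical architecture; the model and the metrics are illustrative assumptions, not the method presented in this work.

```python
# A minimal sketch, assuming a simplified architecture model: components,
# interfaces between them, and a layering that clusters components for
# integration. The metrics computed here (interfaces to test, integration
# steps) are illustrative, not the paper's formal approach.

components = {"sensor", "controller", "actuator", "ui", "logger"}
interfaces = {("sensor", "controller"), ("controller", "actuator"),
              ("controller", "logger"), ("ui", "controller")}
# Layering chosen for integration and testing: inner layers are integrated first.
layers = [{"sensor", "actuator"}, {"controller", "logger"}, {"ui"}]

# Low-risk components are tested at a lower level and excluded from
# system-level integration and testing.
low_risk = {"logger"}
under_test = components - low_risk

# Interfaces that must be exercised during system-level integration.
interfaces_to_test = [i for i in interfaces
                      if i[0] in under_test and i[1] in under_test]

# One integration step per layer that contributes components under test.
integration_steps = sum(1 for layer in layers if layer & under_test)

print(f"{len(interfaces_to_test)} interfaces to test over "
      f"{integration_steps} integration steps")
```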

    Federated Embedded Systems – a review of the literature in related fields

    This report is concerned with the vision of smart interconnected objects, a vision that has attracted much attention lately. In this paper, embedded, interconnected, open, and heterogeneous control systems are in focus, formally referred to as Federated Embedded Systems (FES). To place FES in context, a review of some related research directions is presented. This review includes such concepts as systems of systems, cyber-physical systems, ubiquitous computing, the internet of things, and multi-agent systems. Interestingly, the reviewed fields seem to overlap with each other in an increasing number of ways.

    Smart construction companies using internet of things technologies

    The digital world is enriched by the rapidly increasing number of things connecting to the Internet. The Internet of Things (IoT) facilitates and improves work efficiency and human life in various fields. IoT has been adopted extensively to make buildings more effective and smarter. For example, buildings consume a considerable amount of energy. There is a critical requirement for energy efficiency in buildings, and one of the aims of a smart building is monitoring, reducing and managing the energy consumption of the building without compromising operational efficiency and occupant comfort. Heating, Ventilation and Air Conditioning (HVAC) systems contribute a considerable share of the energy consumed in buildings; plug loads and lighting also consume a lot of energy. Thus, smart buildings can use many types of IoT sensors in HVAC and other mechanical systems, making them more adaptive and intelligent. The embedded sensors and their related controllers mounted in smart buildings generate a huge amount of data (big data); such data can be extracted, filtered, analyzed and utilized for smart-building analytics. For example, big data analytics can be used to analyze and improve energy efficiency as well as the residents' overall user experience in the building. There is an increased focus on smart buildings and on big data analytics and management, yet there remains a need to identify the problems in this field and solutions to overcome them. Using a design research method and model-driven architecture, this study aims to develop such a system. The major aim of this work is to introduce a technique with increased potential for moving Intelligent Buildings (IBs) towards a next-generation model. It relies on IoT adapted to IBs for integrating smart re-configurable subsystems and components of the IB into Enterprise Network Integrated Building Systems (ENIBSs) and, where possible, into global networks of ENIBSs. The study is organized as follows. Section 2 provides an overview of IoT and indicates that IoT is relatively new and that no associated contribution on applying IoT to IBs, or to ENIBSs, has been identified. Section 3 presents the methodological model used to design a generic model for IoT applicable to IBs, as well as generic architectures for re-configurable smart plug-and-play control systems enabling quick configuration and integration of smart IB components. Section 4 provides an experimental test of the theory. The study ends with conclusions and suggestions for future work.
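    To illustrate the kind of analytics such embedded sensor data enables, the sketch below applies a simple rule to hypothetical HVAC readings; the field names and the threshold are assumptions for illustration, not part of the study's model-driven architecture.

```python
# A minimal sketch, assuming a batch of HVAC sensor readings per building zone.
# Field names and the anomaly rule are illustrative assumptions, not the
# study's actual architecture or analytics.

from statistics import mean

readings = [
    {"zone": "A", "power_kw": 4.2, "occupied": False},
    {"zone": "A", "power_kw": 4.5, "occupied": False},
    {"zone": "B", "power_kw": 2.1, "occupied": True},
    {"zone": "B", "power_kw": 2.3, "occupied": True},
]

# Group readings by zone.
by_zone = {}
for r in readings:
    by_zone.setdefault(r["zone"], []).append(r)

# Flag zones drawing significant HVAC power while unoccupied: a simple
# analytics rule a smart-building controller could act on (e.g., set back
# the HVAC setpoint for that zone).
for zone, rows in by_zone.items():
    avg_power = mean(r["power_kw"] for r in rows)
    unoccupied = all(not r["occupied"] for r in rows)
    if unoccupied and avg_power > 1.0:
        print(f"zone {zone}: {avg_power:.1f} kW while unoccupied -> reduce HVAC")
```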

    Metamodeling Techniques Applied to the Design of Reconfigurable Control Applications

    In order to realize autonomous manufacturing systems in environments characterized by high dynamics and high task complexity, it is necessary to improve control system modelling and performance. This requires the use of better and reusable abstractions. In this paper, we explore metamodeling techniques as a foundation for solving this problem. The increasing popularity of model-driven approaches and a new generation of tools supporting metamodeling techniques are changing the software engineering landscape, boosting the adoption of new methodologies for control application development.

    Design Disjunction for Resilient Reconfigurable Hardware

    Contemporary reconfigurable hardware devices have the capability to achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With scaling challenges facing current complementary metal oxide semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware, with its ability to realize self-organization features, is expected to play a key role in designing future dependable hardware architectures. However, the exponential increase in density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are considered to surmount this limitation; however, they can incur substantial overheads in both area and power requirements. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine selection of redundancy level based on target reliability goals and autonomous adaptation to runtime demands. To achieve this goal, three studies were conducted. First, a graph and set theoretic approach, named Hypergraph-Cover Diversity (HCD), is introduced as a preemptive design technique to shift the dominant costs of resiliency to design-time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resource pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform has demonstrated the potential of the proposed fault-tolerant (FT) method to achieve 37.5% area saving and up to 66% reduction in power consumption compared to the frequently used triple modular redundancy (TMR) scheme while providing superior fault tolerance. Second, Design Disjunction based on non-adaptive group testing is developed to realize a low-overhead fault-tolerant system capable of handling self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource grouping procedures which employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of trade-offs. Disjunct designs are created using the developed mosaic convergence algorithm, such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks have demonstrated f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2) while incurring an average critical path delay impact of only 1.49% and area cost roughly comparable to conventional 2-MR approaches. Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for applications with a moderately high production capacity.
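    The group-testing decoding behind Design Disjunction can be sketched compactly: with a d-disjunct set of test configurations, any resource that appears in at least one passing configuration is cleared, so up to d faulty resources are isolated from pass/fail outcomes alone. The tiny 1-disjunct design and the resource names below are hypothetical and stand in for the paper's mosaic-convergence constructions.

```python
# A minimal sketch of non-adaptive group-testing fault isolation in the spirit
# of Design Disjunction. The test matrix below is a small illustrative
# 1-disjunct design (4 resources, 4 configurations), not the paper's
# construction; resource and configuration names are hypothetical.

# tests[c] = set of resources exercised by configuration c.
tests = {
    "C0": {"R0", "R3"},
    "C1": {"R0", "R1"},
    "C2": {"R1", "R2"},
    "C3": {"R2", "R3"},
}

def isolate_faults(failing_tests):
    """Declare faulty any resource that appears in no passing configuration.

    With a d-disjunct design and at most d actual faults, every healthy
    resource is covered by at least one passing configuration, so it is
    cleared and only the faulty resources remain.
    """
    passing = set(tests) - set(failing_tests)
    cleared = set().union(*(tests[c] for c in passing)) if passing else set()
    all_resources = set().union(*tests.values())
    return all_resources - cleared

if __name__ == "__main__":
    # Suppose resource R1 is faulty: every configuration using R1 fails.
    failing = {c for c, members in tests.items() if "R1" in members}
    print(isolate_faults(failing))  # -> {'R1'}
```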