
    Early Dependability Analysis of FPGA-Based Space Applications Using Formal Verification

    SRAM-based FPGAs are increasingly attractive in the aerospace industry for their field programmability and low cost. Unfortunately, they suffer from cosmic-radiation-induced Single Event Effects (SEEs). In safety-critical applications, the dependability of the design is a prime concern, since failures may have catastrophic consequences. An early dependability analysis of such safety-critical applications therefore enables designers to develop systems that meet high dependability requirements, such as those of the DO-254 standard. In this thesis, we propose a high-level dependability and performability analysis methodology based on probabilistic model checking. Compared with pen-and-paper and discrete-event simulation approaches, our methodology is more accurate because it uses an automated formal verification technique. Moreover, compared with fault injection or beam testing, analysis at early design stages can guide designers towards more reliable designs, reducing overall cost and effort. The proposed methodology supports three types of analysis: evaluation of available design options, optimization of scrub intervals while satisfying design assurance level requirements, and optimal partitioning of Triple-Modular Redundant (TMR) systems. Such analysis can guide designers in adopting appropriate mitigation techniques, such as rescheduling, TMR, or TMR with less frequent scrubbing, and can even help decide the number of TMR partitions for a given scrub interval. Starting from a high-level description of the system and depending on the chosen analysis, a Markov model or Markov reward model is constructed from the extracted Control Data Flow Graph (CDFG) and the failure/mitigation parameters of the targeted FPGA. This modelling, combined with the exhaustive analysis performed by probabilistic model checking, captures all failures and repairs possible (according to a general model) in the system within the radiation environment. To illustrate the applicability of the proposed approach, we present quantitative analyses obtained from DSP benchmark circuits.
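
    The thesis's own models are not reproduced in this abstract, but the flavour of the analysis can be illustrated with a minimal sketch: a four-state continuous-time Markov chain of a single TMR-protected module under SEUs with scrubbing, solved for steady-state availability. The state space, the upset and scrub rates, and the assumption that one scrub restores all replicas are illustrative choices, not values or structure taken from the thesis.

```python
# Minimal sketch (not the thesis code): steady-state availability of a TMR
# module under SEUs with scrubbing, modelled as a small CTMC.
# States = number of corrupted replicas (0..3); lambda_seu and mu_scrub are
# illustrative, assumed rates, not values from the thesis.
import numpy as np

lambda_seu = 1e-4   # assumed per-replica upset rate (1/hour)
mu_scrub   = 1.0    # assumed scrub/repair rate (1/hour)

# Generator matrix Q for states 0, 1, 2, 3 corrupted replicas.
Q = np.zeros((4, 4))
for k in range(3):
    Q[k, k + 1] = (3 - k) * lambda_seu   # one more replica gets corrupted
for k in range(1, 4):
    Q[k, 0] = mu_scrub                   # a scrub restores all replicas
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 with sum(pi) = 1 for the steady-state distribution.
A = np.vstack([Q.T, np.ones(4)])
b = np.append(np.zeros(4), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# TMR masks a single corrupted replica, so states 0 and 1 count as "up".
print("steady-state availability ~", pi[0] + pi[1])
```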

    Reconfiguration for Fault Tolerance and Performance Analysis

    Architecture reconfiguration, the ability of a system to alter the active interconnection among modules, has a history of different purposes and strategies. Its purposes range from the relatively simple desire to formalize procedures that all processes have in common, to reconfiguration for improved fault tolerance, to reconfiguration for performance enhancement, whether by simply maximizing system use or through sophisticated notions of wedding topology to the specific needs of a given process. Strategies range from straightforward redundancy by means of an identical backup system to intricate structures employing multistage interconnection networks. The present discussion surveys the more important contributions to developments in reconfigurable architecture. The strategy here is, in a sense, to approach the field from an historical perspective, with the goal of developing a more coherent theory of reconfiguration. First, the Turing and von Neumann machines are discussed from the perspective of system reconfiguration, and it is seen that this early, important theoretical work contains little that anticipates reconfiguration. Then some early developments in reconfiguration are analyzed, including the work of Estrin and associates on the fixed-plus-variable restructurable computer system, the attempt by Miller and Cocke to theorize about configurable computers, and the work of Reddi and Feustel on their restructurable computer system. The discussion then focuses on the most sustained systems for fault tolerance and performance enhancement that have been proposed. An attempt is made to define fault tolerance and to investigate some of the strategies used to achieve it. By investigating four different systems, the Tandem computer, the C.vmp system, the Extra Stage Cube, and the Gamma network, the move from dynamic redundancy to reconfiguration is observed. Then reconfiguration for performance enhancement is discussed. A survey of some proposals is attempted, and the discussion then focuses on the most sustained systems that have been proposed: PASM, the DC architecture, the Star local network, and the NYU Ultracomputer. The discussion is organized around a comparison of control, scheduling, communication, and network topology. Finally, comparisons are drawn between fault tolerance and performance enhancement, in order to clarify the notion of reconfiguration and to reveal the common ground of fault tolerance and performance enhancement as well as the areas in which they diverge. The conclusion attempts to derive from this survey and analysis some observations on the nature of reconfiguration, as well as some remarks on necessary areas of further research.

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Design and control of components-based integrated servo pneumatic drives

    Traditional on-off pneumatic drives are widely used in industry, offering low cost, simple but flexible mechanical operation, and a relatively high power-to-weight ratio. For about a decade, from the mid-1980s to the 1990s, initiatives were made to develop servo pneumatic drives for more sophisticated applications, employing purpose-designed control valves for pneumatic drives and low-friction cylinders. However, the high cost and complex installation discouraged manufacturers from applying servo pneumatic drives industrially, making them less favourable in comparison with their electric counterparts. This research aims to develop low-cost servo pneumatic drives capable of point-to-point positioning tasks, suitable for applications requiring intermediate performance characteristics. In achieving this objective, a strategy involving traditional on-off valves, a simple control algorithm and distributed field-bus control networks has been adopted, namely the design and control of Components-based Integrated Pneumatic Drives (CIPDs). Firstly, a new pneumatic actuator servo motion control strategy has been developed. With the new motion control strategy, positioning of a payload can be achieved by opening the control valve only once. Hence, low-speed on-off pneumatic control valves can be employed, keeping the cost low, a key attraction of pneumatic drives. The new servo motion control strategy also provides a way of controlling the load motion speed mechanically. Meanwhile, a new PD-based three-state closed-loop control algorithm has also been developed for the new control scheme. This control algorithm provides a way of adapting traditional PID (Proportional Integral Derivative) control theory to regulating pneumatic drives. Moreover, a deceleration control strategy has been developed so that both high-speed and accurate positioning control can be realised with low-cost pneumatic drives. Secondly, the effects of system parameters on the transient response are studied. To assist the analysis, a second-order model is developed to encapsulate the velocity response characteristics of pneumatic drives to a step input signal. Stability analyses for both open-loop and closed-loop control have also been carried out for CIPDs with the newly developed motion control strategy. Thirdly, a distributed control strategy employing LonWorks has been devised and implemented, offering desirable attributes such as high re-configurability, low cost, and ease of installation and maintenance, in keeping with the traditional strengths of pneumatic drives. By applying this technology, CIPDs become standard components in "real" and "virtual" design environments. A remote service strategy for CIPDs using the TCP/IP communication protocol has also been developed. Subsequently, a range of experimental verifications has been carried out. The experimental study of high-speed motion control indicates that the deceleration control strategy developed in the research can be an effective method of improving the behaviour of high-speed CIPDs. The verification of the open-loop system behaviour of CIPDs shows that the derived model is largely indicative of the likely behaviour of the system considered, and that the steady-state velocity can be estimated using the Velocity Gain Kv. The evaluation made on a pneumatically driven pick-and-place machine has also confirmed that system setup, including wiring, tuning and system reconfiguration, can be achieved with relative ease. This pilot study reveals the potential for employing CIPDs in building highly flexible, cost-effective manufacturing machines. It can thus be concluded that this research has successfully developed new knowledge, in both theoretical and practical terms, for building low-cost servo pneumatic drives capable of point-to-point positioning, through the use of traditional on-off pneumatic valves and actuators and their integration with distributed control technology (LonWorks) under a component-based design paradigm.
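
    The abstract does not spell out the PD-based three-state control law, so the following is only a hedged sketch of what such a controller might look like: a PD term computed from position error and velocity, mapped onto three discrete valve commands (extend, retract, hold). The gains, dead-band and the pd_three_state name are illustrative assumptions, not the thesis's algorithm.

```python
# Minimal sketch (assumed, not the thesis algorithm): a PD-based three-state
# controller for point-to-point positioning with on-off pneumatic valves.
# The continuous PD signal is quantised into one of three valve states.

def pd_three_state(setpoint, position, velocity, kp=4.0, kd=0.8, deadband=0.5):
    """Return 'extend', 'retract' or 'hold' for the on-off valve pair.

    kp, kd and deadband are illustrative tuning values, not thesis data.
    """
    error = setpoint - position
    u = kp * error - kd * velocity      # PD control signal
    if abs(error) < deadband:           # close enough: trap the load in place
        return "hold"
    return "extend" if u > 0 else "retract"

# Toy usage: load at 80 mm moving at 120 mm/s towards a 100 mm setpoint.
# The derivative term dominates, so the controller commands braking
# ("retract") before the target is reached.
print(pd_three_state(setpoint=100.0, position=80.0, velocity=120.0))
```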

    Modelling and performability evaluation of Wireless Sensor Networks

    This thesis presents generic analytical models of homogeneous clustered Wireless Sensor Networks (WSNs) with a centrally located Cluster Head (CH) coordinating cluster communication with the sink, either directly or through other intermediate nodes. The focus is to integrate performance and availability studies of WSNs in the presence of sensor node and channel failures and repair/replacement. The main purpose is to improve WSN Quality of Service (QoS). Other topics considered in this thesis include modelling of the packet arrival distribution at the CH and intermediate nodes, and modelling of energy consumption at the sensor nodes. An investigation and critical analysis of wireless sensor network architectures, energy conservation techniques and QoS requirements are performed in order to improve the performance and availability of the network. Existing techniques used for performance evaluation of single- and multi-server systems with several operative states are investigated and analysed in detail. To begin with, existing approaches for independent (pure) performance modelling are critically analysed, with their merits and drawbacks highlighted. Similarly, pure availability modelling approaches are also analysed. Considering that pure performance models tend to be too optimistic and pure availability models too conservative, performability, the integration of performance and availability studies, is used for the evaluation of the WSN models developed in this study. Two-dimensional Markov state space representations of the systems are used for performability modelling. Following critical analysis of the existing solution techniques, the spectral expansion method and a system of simultaneous linear equations are developed and used to solve the proposed models. To validate the results obtained with the two techniques, a discrete event simulation tool is employed. In this research, open queuing networks are used to model the behaviour of the CH when subjected to streams of traffic from cluster nodes in addition to the dynamics of operating in the various states. The research begins with a model of a CH with infinite queue capacity subject to failures and repair/replacement. The model is developed progressively to consider bounded queue capacity, channel failures and sleep scheduling mechanisms for performability evaluation of WSNs. Using the developed models, various performance measures of the considered system, including mean queue length, throughput, response time and blocking probability, are evaluated. Finally, energy models considering the mean power consumption in each of the possible operative states are developed. The resulting models are in turn employed for the evaluation of energy savings for the proposed case study model. Numerical solutions and discussions are presented for all the queuing models developed. Simulation is also performed in order to validate the accuracy of the results obtained. To address issues of performance and availability of WSNs, current research presents independent performance and availability studies; the concerns arising from such studies have therefore remained unresolved over the years, hence the persistence of poor system performance. The novelty of this research is a proposed integrated performance and availability modelling approach for WSNs, meant to address the shortcomings of independent studies. In addition, a novel methodology for modelling and evaluation of power consumption is also offered. Results from the proposed models show remarkable improvements in system performance and availability, in addition to providing tools for further optimisation studies. Significant power savings are also observed from the proposed model results. In order to improve QoS for WSNs, the proposed models could be extended by incorporating priority queuing in a mixed-traffic environment. A multi-server system model would also be appropriate for addressing traffic routing. It is also possible to extend the proposed energy model to consider sleep scheduling mechanisms other than the on-demand scheme proposed herein. Analysis and classification of possible arrival distributions of WSN packets for various application environments would also enable further robust research.
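
    As a hedged illustration of the kind of two-dimensional Markov (performability) model described, the sketch below solves a bounded queue at the cluster head whose server alternates between operative and failed states, and reports mean queue length and blocking probability. It uses a direct linear solve on a truncated state space rather than the spectral expansion method, and all rates and the queue bound are assumed values, not results from the thesis.

```python
# Minimal sketch (illustrative, not the thesis models): a bounded M/M/1 queue
# whose server (the cluster head) alternates between "up" and "failed" states,
# solved as a two-dimensional CTMC for mean queue length and blocking
# probability. All rates and the queue bound are assumed values.
import numpy as np

lam, mu = 4.0, 6.0        # assumed packet arrival / service rates
xi, eta = 0.05, 1.0       # assumed failure / repair rates of the cluster head
L = 50                    # assumed queue capacity

def idx(op, n):           # op: 0 = failed, 1 = up; n: packets queued
    return op * (L + 1) + n

N = 2 * (L + 1)
Q = np.zeros((N, N))
for op in (0, 1):
    for n in range(L + 1):
        i = idx(op, n)
        if n < L:
            Q[i, idx(op, n + 1)] += lam                 # packet arrival
        if op == 1 and n > 0:
            Q[i, idx(1, n - 1)] += mu                   # service only when up
        Q[i, idx(1 - op, n)] += eta if op == 0 else xi  # repair / failure
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(N)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(N), 1.0), rcond=None)

mean_q = sum(n * (pi[idx(0, n)] + pi[idx(1, n)]) for n in range(L + 1))
blocking = pi[idx(0, L)] + pi[idx(1, L)]
print(f"mean queue length ~ {mean_q:.2f}, blocking probability ~ {blocking:.4f}")
```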

    Test analysis & fault simulation of microfluidic systems

    This work presents a design, simulation and test methodology for microfluidic systems, with particular focus on simulation for test. A Microfluidic Fault Simulator (MFS) has been created, based around COMSOL, which allows a fault-free system model to undergo fault injection and provide test measurements. A post-MFS test analysis procedure is also described. A range of fault-free system simulations have been cross-validated against experimental work to gauge the accuracy of the fundamental simulation approach prior to further investigation and development of the simulation and test procedure. A generic mechanism, termed a fault block, has been developed to provide fault injection and a method of describing a low-abstraction behavioural fault model within the system. This technique has allowed the creation of a fault library containing a range of different microfluidic fault conditions. Each of the fault models has been cross-validated against experimental conditions or published results to determine its accuracy. Two test methods, namely impedance spectroscopy and Levich electro-chemical sensors, have been investigated as general methods of microfluidic test, each of which has been shown to be sensitive to a multitude of faults. Each method has been successfully implemented within the simulation environment and cross-validated by first-hand experimentation or published work. A test analysis procedure based around the Neyman-Pearson criterion has been developed to provide a probabilistic metric for each test applied to a given fault condition, giving a quantitative assessment of each test. These metrics are used to analyse the sensitivity of each test method, which is useful when determining which tests to employ in the final system. Furthermore, these probabilistic metrics may be combined to provide a fault coverage metric for the complete system. The complete MFS method has been applied to two system case studies: a hydrodynamic "Y" channel and a flow cytometry system for the prognosis of head and neck cancer. Decision trees are trained on the test measurement data and fault conditions as a means of classifying the system's fault condition. The classification rules created by the decision trees may be displayed graphically or as a set of rules which can be loaded into test instrumentation. During the course of this research, a high-voltage power supply instrument has been developed to aid electro-osmotic experimentation, together with an impedance spectrometer to provide embedded test.
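
    The Neyman-Pearson style of test scoring described above can be sketched as follows, under the simplifying (and assumed) model that fault-free and faulty measurements are Gaussian: fix an acceptable false-alarm probability on the fault-free distribution, and report the probability that a faulty measurement crosses the resulting threshold as the per-test metric. The function name, the Gaussian assumption and the toy impedance numbers are illustrative, not taken from the MFS work.

```python
# Minimal sketch (assumed, not the MFS code): Neyman-Pearson style scoring of
# one test against one fault condition. Fault-free and faulty measurements are
# assumed Gaussian; the threshold is set for a chosen false-alarm probability
# and the resulting detection probability serves as the per-test metric.
from scipy.stats import norm

def np_detection_prob(mu_ok, sigma_ok, mu_fault, sigma_fault, p_false_alarm=0.01):
    # Threshold that a fault-free measurement exceeds with probability
    # p_false_alarm (assuming the fault shifts the measurement upwards).
    threshold = norm.ppf(1.0 - p_false_alarm, loc=mu_ok, scale=sigma_ok)
    # Probability that a faulty measurement exceeds that threshold.
    return norm.sf(threshold, loc=mu_fault, scale=sigma_fault)

# Toy numbers: a normalised impedance magnitude shifting from ~1.00 to ~1.15
# under a hypothetical channel blockage.
print(f"P(detect) ~ {np_detection_prob(1.00, 0.03, 1.15, 0.05):.3f}")
```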

    Design for manufacturability : a feature-based agent-driven approach


    A computing structure for data acquisition in high energy physics

    A review of the development of parallel computing is presented, followed by a summary of currently recognised types of parallel computer and a brief summary of some applications of parallel computing in the field of high energy physics. The computing requirement at the data acquisition stage of a particular set of high energy physics experiments is detailed, with reference to the computing system currently in use. The requirement for a parallel processor to process the data from these experiments is established and a possible computing structure put forward. The topology proposed consists of a set of rings of processors stacked to give a cylindrical arrangement; an analytical approach is used to verify the suitability and extensibility of the suggested scheme. Using simulation results, the behaviour of rings and cylinders of processors using different algorithms for the movement of data within the system, and under different patterns of data input, is presented and discussed. Practical hardware and software details are given for processing equipment capable of supporting such a structure; various algorithms for use with this equipment, e.g. program distribution, are developed, and the software for the implementation of the cylindrical structure is presented. Appendices of constructional information and all program listings are included.
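
    As a rough, assumed illustration of the ring-and-cylinder topology (not the thesis's actual data-movement algorithms), the sketch below models a cylinder as stacked rings of processors, routes each incoming event around a ring until an idle processor accepts it, drops saturated events to the next ring, and reports the mean number of hops as a crude cost of data movement. The ring and cylinder sizes, service time and input pattern are all assumed.

```python
# Minimal sketch (assumed, not the thesis design): a cylinder of processors
# modelled as R rings of P nodes. An event enters a ring and hops clockwise
# until an idle processor accepts it; if a whole ring is busy, the event
# drops to the next ring. Hop counts give a crude cost of data movement.
import random

R, P, SERVICE = 4, 8, 6                # assumed rings, ring size, busy ticks
busy = [[0] * P for _ in range(R)]     # remaining busy time per processor

def route(entry=0):
    """Return (ring, proc, hops) for one injected event, or None if full."""
    hops = 0
    for r in range(R):
        for step in range(P):
            p = (entry + step) % P
            if busy[r][p] == 0:
                busy[r][p] = SERVICE
                return r, p, hops
            hops += 1
        hops += 1                      # one extra hop to drop to the next ring
    return None

hop_counts = []
for t in range(200):                   # crude discrete-time input pattern
    for ring in busy:                  # every processor works off one tick
        for p in range(P):
            ring[p] = max(0, ring[p] - 1)
    if random.random() < 0.9:          # assumed event arrival probability
        result = route(entry=random.randrange(P))
        if result:
            hop_counts.append(result[2])

print("mean hops per accepted event ~",
      sum(hop_counts) / max(len(hop_counts), 1))
```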