
    Intelligent fault management for the Space Station active thermal control system

    The Thermal Advanced Automation Project (TAAP) approach and architecture are described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computational resources are made available.
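
    As an illustration of the combined rule-based and model-based FDIR idea described above (not TAAP code), the following minimal sketch checks raw telemetry against simple rules and compares a measured outlet temperature with a steady-state loop model; the sensor names, thresholds, and coolant-loop model are assumptions made for the example.

# Illustrative sketch only: rules plus a model-based residual check for a
# pumped coolant loop. Sensor names, thresholds, and the loop model are
# hypothetical, not taken from TAAP or TEXSYS.
def predicted_outlet_temp(inlet_temp_c, heat_load_kw, flow_kg_s, cp_kj_per_kg_k=4.18):
    """Steady-state energy balance: T_out = T_in + Q / (m_dot * cp)."""
    return inlet_temp_c + heat_load_kw / (flow_kg_s * cp_kj_per_kg_k)

def diagnose(sensors):
    """Return (fault, evidence) pairs from rule checks and a model residual."""
    faults = []
    # Rule-based checks on raw telemetry.
    if sensors["pump_speed_rpm"] > 100 and sensors["flow_kg_s"] < 0.05:
        faults.append(("loss_of_flow", "pump running but no measurable flow"))
    if sensors["accumulator_level_pct"] < 20:
        faults.append(("coolant_leak_suspected", "accumulator level low"))
    # Model-based check: compare measured outlet temperature with prediction.
    expected = predicted_outlet_temp(sensors["inlet_temp_c"], sensors["heat_load_kw"],
                                     max(sensors["flow_kg_s"], 1e-6))
    residual = sensors["outlet_temp_c"] - expected
    if abs(residual) > 5.0:  # illustrative 5 degC residual threshold
        faults.append(("model_mismatch", f"outlet temp residual {residual:+.1f} C"))
    return faults

if __name__ == "__main__":
    telemetry = {"pump_speed_rpm": 1800, "flow_kg_s": 0.02, "accumulator_level_pct": 55,
                 "inlet_temp_c": 4.0, "outlet_temp_c": 21.0, "heat_load_kw": 10.0}
    for fault, why in diagnose(telemetry):
        print(f"{fault}: {why}")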

    Simulation-based Fault Injection with QEMU for Speeding-up Dependability Analysis of Embedded Software

    Simulation-based fault injection (SFI) represents a valuable solution for early analysis of software dependability and fault tolerance properties before the physical prototype of the target platform is available. Some SFI approaches base the fault injection strategy on cycle-accurate models implemented by means of Hardware Description Languages (HDLs). However, cycle-accurate simulation has proven to be too time-consuming when the objective is to emulate the effect of soft errors on complex microprocessors. To overcome this issue, SFI solutions based on virtual prototypes of the target platform have started to be proposed. However, current approaches still present some drawbacks: for example, they work only for specific CPU architectures, they require code instrumentation, or they have a different target (i.e., design errors instead of dependability analysis). To address these disadvantages, this paper presents an efficient fault injection approach based on QEMU, one of the most efficient and popular instruction-accurate emulators for several microprocessor architectures. The main goal of the proposed approach is to provide a non-intrusive technique for simulating hardware faults affecting CPU behaviours. Permanent and transient/intermittent hardware fault models have been abstracted without losing quality for software dependability analysis. The approach minimizes the impact of the fault injection procedure on emulator performance by preserving the original dynamic binary translation mechanism of QEMU. Experimental results for both x86 and ARM processors are presented, proving the efficiency and effectiveness of the proposed approach.
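
    The permanent and transient fault models mentioned above can be pictured independently of QEMU's internals. The sketch below (which does not use any actual QEMU API) contrasts a one-shot transient bit flip with a permanent stuck-at-1 fault on an emulated register file; the register names and widths are assumptions made for the example.

# Illustrative register-level fault models (transient bit flip and permanent
# stuck-at-1), independent of QEMU's real internals and APIs.
class RegisterFile:
    def __init__(self, names):
        self.regs = {n: 0 for n in names}
        self.stuck_at_one = {}  # reg -> bitmask permanently forced to 1

    def write(self, name, value):
        self.regs[name] = (value | self.stuck_at_one.get(name, 0)) & 0xFFFFFFFF

    def read(self, name):
        return self.regs[name] | self.stuck_at_one.get(name, 0)

def inject_transient(rf, name, bit):
    """Single-event upset: flip one bit of the current register value once."""
    rf.regs[name] ^= (1 << bit)

def inject_permanent(rf, name, bit):
    """Stuck-at-1 fault: the bit reads as 1 from now on, across writes."""
    rf.stuck_at_one[name] = rf.stuck_at_one.get(name, 0) | (1 << bit)

if __name__ == "__main__":
    rf = RegisterFile(["r0", "r1"])
    rf.write("r0", 0x0000_00FF)
    inject_transient(rf, "r0", bit=3)   # one-shot corruption of r0
    inject_permanent(rf, "r1", bit=31)  # persists across later writes to r1
    rf.write("r1", 0)
    print(hex(rf.read("r0")), hex(rf.read("r1")))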

    A simulation and diagnosis system incorporating various time delay models and functional elements

    The application of digital simulation to all phases of digital network design is considered here, as opposed to development of simulation for one or two restricted parts of the digital process. For this reason a simulator is presented whose level of expression can be varied, from the simulation of architectural structures down to such detailed simulation requirements as race analysis of asynchronous sequential circuits. In order to make system simulation more than just an idea, it must be capable of handling large circuits in reasonable times. It is demonstrated that functional simulation has the potential to increase simulation speed while reducing the required storage. This potential is realized with the following features of this simulator structure: 1) a modular structure for specification and execution, 2) the capability of being easily interfaced with gate-level simulation, 3) the capability of utilizing the highest level of expression for simulation, 4) a variable level of expression, 5) a relatively unrestricted type of logic that can be simulated, 6) the capability of using standard functional modules, 7) a fairly universal means of expressing functional modules, and 8) the use of data and control signals to further force selective trace capabilities on a module level. Greater gate-level simulation capabilities are obtained by extending the basic simulator to perform the simulation of undefined signal values and the simulation of ambiguities in signal propagation speeds. The simulator presented here is part of a Test Generation and Simulation System. This system includes preprocessing, combinational test generation, and automatic fault insertion as well as simulation --Abstract, page ii
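
    As a small illustration of simulating undefined signal values (one of the gate-level extensions mentioned above), the sketch below evaluates NAND gates over a three-valued (0, 1, X) logic; the netlist and gate type are generic examples, not the thesis simulator itself.

# Generic three-valued (0, 1, X) gate evaluation, as commonly used to simulate
# undefined signal values; not the thesis simulator described above.
X = "X"  # unknown / undefined signal value

def nand(a, b):
    if a == 0 or b == 0:   # a controlling 0 on either input forces the output to 1
        return 1
    if a == 1 and b == 1:
        return 0
    return X               # otherwise the output cannot be determined

def simulate(netlist, values):
    """Evaluate a feed-forward netlist of NAND gates given primary input values."""
    for out, (a, b) in netlist:
        values[out] = nand(values[a], values[b])
    return values

if __name__ == "__main__":
    netlist = [("n1", ("a", "b")), ("n2", ("n1", "c"))]
    # An undefined input propagates: n1 = X, n2 = X.
    print(simulate(netlist, {"a": 1, "b": X, "c": 1}))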

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and capabilities that will be required for both individual tools and an integrated toolset were identified.
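
    The kind of reliability trade-off discussed above can be illustrated with a standard redundancy calculation; the exponential failure model, failure rate, mission time, and triplex-voting configuration below are generic textbook assumptions, not the FTPP or Hypercube models built in the study.

# Illustrative reliability comparison: simplex vs. triple modular redundancy
# (TMR) with a perfect voter, under an exponential failure model. The failure
# rate and mission time are assumed values, not taken from the report.
import math

def r_module(failure_rate_per_hr, mission_hr):
    """Reliability of one module, R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hr * mission_hr)

def r_tmr(r):
    """TMR with a perfect voter survives if at least 2 of 3 modules survive."""
    return 3 * r**2 - 2 * r**3

if __name__ == "__main__":
    lam, t = 1e-4, 1000.0  # 1e-4 failures/hour, 1000-hour mission (assumed)
    r1 = r_module(lam, t)
    print(f"simplex reliability: {r1:.5f}")
    print(f"TMR reliability:     {r_tmr(r1):.5f}")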

    Evolution of Test Programs Exploiting a FSM Processor Model

    Microprocessor testing is becoming a challenging task, due to the increasing complexity of modern architectures. Nowadays, most architectures are tackled with a combination of scan chains and Software-Based Self-Test (SBST) methodologies. Among SBST techniques, evolutionary feedback-based ones prove effective in microprocessor testing; their main disadvantage, however, is the considerable time required to generate suitable test programs. A novel evolutionary-based approach, able to appreciably reduce the generation time, is presented. The proposed method exploits a high-level representation of the architecture under test and a dynamically built Finite State Machine (FSM) model to assess fault coverage without resorting to time-expensive simulations on low-level models. Experimental results on an OpenRISC processor show that the resulting test program achieves nearly complete fault coverage of the targeted fault model.
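
    A minimal sketch of the feedback-based evolutionary idea follows: candidate test programs are scored by how many transitions of a small FSM model they exercise, standing in for fault-coverage feedback; the toy instruction set, FSM, and evolutionary parameters are assumptions made for the example, not the OpenRISC setup of the paper.

# Illustrative evolutionary generation of test sequences scored by coverage of
# a small FSM model (a stand-in for fault-simulation feedback).
import random

OPS = ["add", "sub", "load", "store", "branch"]

# Toy FSM: (state, instruction class) -> next state.
FSM = {("S0", "load"): "S1", ("S1", "add"): "S2", ("S2", "store"): "S3",
       ("S0", "branch"): "S4", ("S4", "sub"): "S2"}

def coverage(program):
    """Count distinct FSM transitions exercised by a program (the fitness)."""
    state, visited = "S0", set()
    for op in program:
        nxt = FSM.get((state, op))
        if nxt is not None:
            visited.add((state, op, nxt))
            state = nxt
    return len(visited)

def evolve(pop_size=20, length=8, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(OPS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=coverage, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(length)] = rng.choice(OPS)  # point mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=coverage)
    return best, coverage(best)

if __name__ == "__main__":
    prog, cov = evolve()
    print("best program:", prog, "covered transitions:", cov)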

    Design of an integrated airframe/propulsion control system architecture

    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that addresses both reliability and performance. A detailed account is given of the testing associated with a subset of the architecture, and the report concludes with general observations on applying the methodology to the architecture.

    Power system fault analysis based on intelligent techniques and intelligent electronic device data

    This dissertation has focused on automated power system fault analysis. New contributions to fault section estimation, protection system performance evaluation, and power system/protection system interactive simulation have been achieved. Intelligent techniques including expert systems, fuzzy logic, and Petri-nets, as well as data from remote terminal units (RTUs) of supervisory control and data acquisition (SCADA) systems and digital protective relays, have been explored and utilized to fulfill the objectives. The task of fault section estimation is difficult when multiple faults, failures of protection devices, and false data are involved. A Fuzzy Reasoning Petri-nets approach has been proposed to tackle these complexities. In this approach, the fuzzy reasoning starting from protection system status data and ending with estimation of the faulted power system section is formulated by Petri-nets. The reasoning process is implemented by matrix operations. Data from RTUs of SCADA systems and digital protective relays are used as inputs. Experimental tests have shown that the proposed approach is able to perform accurate fault section estimation under complex scenarios. The evaluation of protection system performance involves issues of data acquisition, prediction of expected operations, identification of unexpected operations, and diagnosis of the reasons for unexpected operations. An automated protection system performance evaluation application has been developed to accomplish all these tasks. The application automatically retrieves relay files, processes relay file data, and performs rule-based analysis. Forward chaining reasoning is used for prediction of expected protection operations, while backward chaining reasoning is used for diagnosis of unexpected protection operations. Lab tests have shown that the developed application has successfully performed relay performance analysis. The challenge of power system/protection system interactive simulation lies in modeling sophisticated protection systems and interfacing the protection system model and the power system network model seamlessly. An approach which utilizes the "compiled foreign model" mechanism of the ATP MODELS language is proposed to model multifunctional digital protective relays in C++ and seamlessly interface them to the power system network model. The developed simulation environment has been successfully used for studies of fault section estimation and protection system performance evaluation.
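
    The matrix formulation of the fuzzy reasoning mentioned above can be sketched as follows: places carry fuzzy truth degrees, each transition (rule) fires with a degree equal to the minimum of its input places scaled by a certainty factor, and the marking is updated with a maximum; the protection-system places, the single rule, and the certainty factor below are illustrative assumptions, not the dissertation's actual model.

# Illustrative fuzzy Petri-net reasoning by matrix operations.
import numpy as np

# Places (fuzzy propositions) and an initial marking of truth degrees,
# e.g. derived from SCADA and relay data (values assumed for the example).
places = ["relay_A_tripped", "breaker_1_open", "line_L1_faulted"]
theta = np.array([0.9, 0.8, 0.0])

# Incidence matrices: I[t, p] = 1 if place p is an input of transition t,
# O[t, p] = 1 if it is an output; mu[t] is the rule's certainty factor.
# Rule: relay_A_tripped AND breaker_1_open -> line_L1_faulted (CF = 0.95).
I = np.array([[1, 1, 0]])
O = np.array([[0, 0, 1]])
mu = np.array([0.95])

def step(theta):
    # Firing degree of each transition: min over its input places, scaled by CF.
    fire = np.array([min(theta[I[t] == 1]) for t in range(I.shape[0])]) * mu
    # New marking: keep the larger of the current degree and the propagated one.
    propagated = np.max(O * fire[:, None], axis=0)
    return np.maximum(theta, propagated)

if __name__ == "__main__":
    theta = step(theta)
    for p, v in zip(places, theta):
        print(f"{p}: {v:.2f}")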

    Flight deck engine advisor

    The focus of this project is on alerting pilots to impending events in such a way as to provide the additional time required for the crew to make critical decisions concerning non-normal operations. The project addresses pilots' need for support in diagnosis and trend monitoring of faults as they affect decisions that must be made within the context of the current flight. Monitoring and diagnostic modules developed under the NASA Faultfinder program were restructured and enhanced using input data from an engine model and real engine fault data. Fault scenarios were prepared to support knowledge base development activities on the MONITAUR and DRAPhyS modules of Faultfinder. An analysis of the information requirements for fault management was included in each scenario. A conceptual framework was developed for systematic evaluation of the impact of context variables on pilot action alternatives as a function of event/fault combinations.

    Upper limit on damage zone thickness controlled by seismogenic depth

    The thickness of fault damage zones, a characteristic length of the cross‐fault distribution of secondary fractures, significantly affects fault stress, earthquake rupture, ground motions, and crustal fluid transport. Field observations indicate that damage zone thickness scales with accumulated fault displacement at short displacements but saturates at a few hundred meters for displacements larger than a few kilometers. To explain this transition of scaling behavior, we conduct 3D numerical simulations of dynamic rupture with off‐fault inelastic deformation on long strike‐slip faults. We find that the distribution of coseismic inelastic strain is controlled by the transition from crack‐like to pulse‐like rupture propagation associated with saturation of the seismogenic depth. The yielding zone reaches its maximum thickness when the rupture becomes a stable pulse‐like rupture. Considering fracture mechanics theory, we show that seismogenic depth controls the upper bound of damage zone thickness on mature faults by limiting the efficiency of stress concentration near earthquake rupture fronts. We obtain a quantitative relation between limiting damage zone thickness, background stress, dynamic fault strength, off‐fault yield strength, and seismogenic depth, which agrees with first‐order field observations. Our results help link dynamic rupture processes with field observations and contribute to a fundamental understanding of damage zone properties.