
    Finding the direction of disturbance propagation in a chemical process using transfer entropy


    Space engine safety system

    A rocket engine safety system was designed to initiate control procedures that minimize damage to the engine, vehicle, or test stand in the event of an engine failure. The features and implementation issues associated with rocket engine safety systems are discussed, as well as the specific concerns of safety systems applied to a space-based engine and to long-duration space missions. Examples of safety system features and architectures are given, based on recent safety monitoring investigations conducted for the Space Shuttle Main Engine and for future liquid rocket engines. The general design and implementation process for rocket engine safety systems is also presented.
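    One safety-system feature of the kind described above, a persistence-filtered redline monitor, can be sketched briefly. The parameter name, limit, and sample values below are hypothetical, chosen only to illustrate how a monitor might vote for shutdown when a measurement persistently exceeds its limit rather than on a single noisy sample.

        # Hypothetical redline monitor: request shutdown only after a limit is
        # exceeded for several consecutive samples, to avoid tripping on noise.
        from dataclasses import dataclass

        @dataclass
        class Redline:
            name: str
            limit: float          # maximum allowed value (hypothetical units)
            persistence: int      # consecutive violations required to trip
            count: int = 0

            def update(self, value: float) -> bool:
                """Return True if this redline votes for engine shutdown."""
                self.count = self.count + 1 if value > self.limit else 0
                return self.count >= self.persistence

        # Example: hypothetical turbine discharge temperature channel.
        monitor = Redline(name="turbine_discharge_temp", limit=1850.0, persistence=3)
        for sample in [1790.0, 1862.0, 1875.0, 1881.0]:
            if monitor.update(sample):
                print("shutdown requested by", monitor.name)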

    Advanced flight control system study

    The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 per hour of flight and scheduled maintenance only at six-month intervals. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.
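    One element mentioned above, the skewed sensor array, can be illustrated with a short sketch: redundant rate sensors mounted along non-orthogonal axes are combined by least squares, so body rates can still be estimated after individual sensor losses. The sensor geometry and rate values below are invented for illustration and are not taken from the study.

        # Minimal sketch of skewed-sensor-array processing (illustrative geometry).
        import numpy as np

        # Rows are hypothetical sensor measurement axes (unit vectors) in the body
        # frame; more sensors than axes gives redundancy against sensor loss.
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.577, 0.577, 0.577]])     # one skewed axis

        true_rates = np.array([0.10, -0.05, 0.02])   # rad/s, made-up values
        measurements = H @ true_rates                 # ideal sensor outputs

        # Least-squares estimate of body rates from all available sensors; the
        # same solve still works if any single row (sensor) is removed.
        est, *_ = np.linalg.lstsq(H, measurements, rcond=None)
        print(est)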

    Airborne Advanced Reconfigurable Computer System (ARCS)

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed for use as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operational level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating the availability and survivability of redundant, fault-tolerant systems, and a stringent digital system software design methodology was used to achieve design/implementation visibility.
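    A minimal sketch of the triplex-to-duplex-to-simplex reconfiguration idea is given below. The voting and disagreement logic is greatly simplified, the channel values are hypothetical, and transient-fault redundancy recovery is omitted; this is an illustration of the concept, not the ARCS design.

        # Simplified sketch of triplex -> duplex -> simplex reconfiguration:
        # a channel that disagrees with the others beyond a tolerance is excluded.
        def vote(channels, tolerance=0.01):
            """Return (selected_value, surviving_channels)."""
            if len(channels) >= 3:
                # Triplex: take the median channel; drop channels that deviate.
                values = sorted(channels.items(), key=lambda kv: kv[1])
                _, median_val = values[len(values) // 2]
                bad = [cid for cid, v in channels.items() if abs(v - median_val) > tolerance]
                for cid in bad:
                    del channels[cid]
                return median_val, channels
            if len(channels) == 2:
                # Duplex: on disagreement a self-test would arbitrate; here the
                # second channel is simply excluded for illustration.
                (a_id, a), (b_id, b) = channels.items()
                if abs(a - b) > tolerance:
                    del channels[b_id]
                return a, channels
            # Simplex: the single remaining channel is used as-is.
            return next(iter(channels.values())), channels

        chans = {"A": 1.000, "B": 1.001, "C": 1.250}   # channel C is faulty
        out, chans = vote(chans)
        print(out, list(chans))   # -> 1.001 ['A', 'B'] after excluding C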

    Automatic programming methodologies for electronic hardware fault monitoring

    This paper presents three variants of Genetic Programming (GP) approaches for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits can be best performed with the stressor-susceptibility interaction model, in which a circuit or system is considered to have failed once the stressor exceeds the susceptibility limits. For online prediction, validated stressor vectors may be obtained by direct measurements or from sensors, which, after pre-processing and standardization, are fed into the GP models. Empirical results are compared with artificial neural networks trained using the backpropagation algorithm and with classification and regression trees. The performance of the proposed method is evaluated by comparing the experimental results with the actual failure model values. The developed model reveals that GP could play an important role in future fault monitoring systems. This research was supported by the International Joint Research Grant of the IITA (Institute of Information Technology Assessment) foreign professor invitation program of the MIC (Ministry of Information and Communication), Korea.
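    The modelling step described above can be sketched with a small symbolic-regression-by-GP example. This assumes the gplearn library as a stand-in for the paper's GP variants, and the stressor features and target "failure model" values are synthetic; it only shows the shape of the workflow (standardize stressor vectors, evolve a model, predict).

        # Sketch: GP symbolic regression on standardized stressor vectors
        # (synthetic data; gplearn is assumed to be installed).
        import numpy as np
        from gplearn.genetic import SymbolicRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))                  # hypothetical stressor vectors
        y = X[:, 0] ** 2 + 0.5 * X[:, 1] - X[:, 2]     # stand-in failure-model value

        X_std = StandardScaler().fit_transform(X)       # pre-processing/standardization

        gp = SymbolicRegressor(population_size=500, generations=10,
                               function_set=("add", "sub", "mul"),
                               parsimony_coefficient=0.01, random_state=0)
        gp.fit(X_std, y)
        print(gp._program)            # evolved symbolic expression
        print(gp.predict(X_std[:5]))  # online predictions for new stressor vectors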

    Framework for a space shuttle main engine health monitoring system

    A framework developed for a health management system (HMS) directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near-term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision-making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real-time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable, low-cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.
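    A minimal sketch of a time-series style detector, one of the three algorithm families named above, is shown below. The sensor trace, thresholds, and persistence count are synthetic illustrations and are not the algorithms or data used in the SSME study.

        # Sketch: declare a fault when a sensor departs from its recent behaviour
        # by more than k standard deviations for several consecutive samples.
        import numpy as np

        def detect_fault(signal, window=20, k=4.0, persistence=3):
            """Return the first index where a fault is declared, or None."""
            run, start = 0, None
            for i in range(window, len(signal)):
                j = i if run == 0 else start        # freeze baseline during a run
                ref = signal[j - window:j]
                z = abs(signal[i] - ref.mean()) / (ref.std() + 1e-9)
                if z > k:
                    if run == 0:
                        start = i
                    run += 1
                    if run >= persistence:
                        return start
                else:
                    run = 0
            return None

        rng = np.random.default_rng(1)
        trace = rng.normal(1000.0, 2.0, size=300)   # nominal, made-up sensor trace
        trace[250:] += 40.0                          # injected step fault
        print(detect_fault(trace))                   # -> 250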

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage of a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of the data and control planes. The COD algorithm for control cells leverages the relatively large number of UEs in a control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly detection algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. For data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in a data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power statistics between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for the detected outages in a reliable manner.
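    The control-plane detection step described above can be sketched with a local-outlier-factor detector. The feature construction from UE reports is greatly simplified here (two made-up signal-power features), the data are synthetic, and scikit-learn's LocalOutlierFactor is used as a stand-in for the paper's detector.

        # Sketch: LOF-based anomaly detection on simplified UE report features
        # (e.g., serving-cell and strongest-neighbour signal power, in dBm).
        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor

        rng = np.random.default_rng(2)
        normal = rng.normal([-80.0, -90.0], 3.0, size=(500, 2))   # healthy reports
        lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal)

        # After an outage, UEs near the failed cell report much weaker serving power.
        outage = rng.normal([-115.0, -95.0], 3.0, size=(20, 2))
        labels = lof.predict(outage)        # -1 marks anomalous reports
        print((labels == -1).mean())        # fraction of reports flagged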

    Periodic Application of Concurrent Error Detection in Processor Array Architectures

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection, and high parallelism, such arrays are well suited to VLSI/WSI implementations and to applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs; however, most time-redundant CED techniques degrade a system's performance.
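    A minimal sketch of the time-redundancy idea is shown below: the same operation is executed a second time on transformed operands (in the spirit of recomputing with shifted operands) and the two results are compared, so a fault affecting one pass is exposed as a mismatch. The operation and values are arbitrary, and a software sketch cannot of course reproduce the hardware fault behaviour it illustrates.

        # Sketch of time-redundant concurrent error detection for integer addition:
        # recompute with both operands shifted and compare against the first result.
        def add_with_ced(a: int, b: int, shift: int = 2) -> int:
            first = a + b
            recomputed = ((a << shift) + (b << shift)) >> shift   # second, shifted pass
            if first != recomputed:
                raise RuntimeError("concurrent error detection: result mismatch")
            return first

        print(add_with_ced(1234, 5678))   # -> 6912 when both passes agree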

    Quantitative evaluation of structural compartmentalization in the Heidrun field using time-lapse seismic data

    In reservoir settings with structural compartmentalization, fault properties can constrain fluid flow and pressure development, thus affecting decisions associated with the selection of the drainage strategy within reservoir management activities. Historically, we have relied on geological analysis to evaluate the fault seal; however, this can be restricted by the available well coverage, which can introduce considerable uncertainty. More recently, time-lapse seismic has become useful in the assessment of dynamic connectivity. Indeed, seismic changes are in general a combination of pressure and saturation changes which, for compartmentalized reservoirs, appear to be associated with the sealing behaviour of faults. Based on this observation, this thesis presents a new effort in which the spatial coverage of time-lapse seismic data is used as an advantage to more fully resolve fault seal properties, particularly in areas with poor data control. To achieve this, statistics of amplitude contrast and of the spatial variability of the 4D seismic signatures are considered. Tests performed on modelled data have revealed that the proposed 4D seismic measurements can be calibrated at the wells in a sector with known geological characteristics via a quadratic polynomial expression that allows fault permeability to be derived. Uncertainties in the 4D seismic estimation have also been considered in a Bayesian framework, leading to the identification of error bounds for the estimates.
    Results on synthetic data are encouraging enough to investigate the method's applicability to the Heidrun field. In this real example, the Jurassic reservoirs are compartmentalized by a set of faults whose flow capacity strongly affects field depletion. Previous studies have attempted to characterize the fault seals, yet the sparse nature of the well data has limited their evaluation, leaving uncertainties when adjusting fault properties in the reservoir simulation model. In this case, application of our approach has proven useful, as it has allowed the detailed characterization of major faults in this field. Predictions obtained with the 4D seismic appear consistent when compared with previous core observations made in fault-rock studies. The results have also been used to update the flow simulation model by adjusting transmissibility factors between compartments, leading to a decrease in the mismatch between the simulated forecast and historical production data. Furthermore, uncertainty in the 4D seismic prediction has been considered when implementing an automatic history-matching workflow, allowing further improvements.
    New insights into the implications of dynamic fault behaviour for the time-lapse seismic response are also provided in this thesis. We make use of synthetic models in which faults represent the main constraint on fluid flow to show that an adjustment of the relation between the reservoir capillary pressure and the capillary threshold pressure of the fault rock can alter the variance of the time-lapse seismic signature. However, a similar behaviour can be obtained when strong variations in the transmissibility of the fault are present. As a consequence, we propose that this statistic might help to identify fault-seal-dependent controls on individual fluid phases when the transmissibilities are fairly similar along the fault segment. This is particularly useful in the Heidrun field, where we have found it difficult to explain the water encroachment using only the single-phase approximation offered by fault transmissibility multipliers. Here, the variance of the 4D seismic signature is employed together with the fault permeability values to suggest that, in some compartments, waterflooding might be affected by the presence of a specific fault whose sealing capacity is strongly dependent on the individual fluid phases. This helps to explain the observed fluid uncertainties. It is also recognized that more data might be required to gain greater insight into this issue; hence, alternative hypotheses are not discarded.
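    The well-calibration step described above, fitting a quadratic relation between a 4D seismic attribute and fault permeability where geology is known and then applying it away from well control, can be sketched as follows. The attribute values and permeabilities below are synthetic and the polynomial is illustrative only.

        # Sketch: calibrate a quadratic polynomial mapping a 4D seismic statistic
        # (e.g., an amplitude-contrast measure) to log10 fault permeability, using
        # synthetic well-calibration points.
        import numpy as np

        seismic_attr = np.array([0.05, 0.12, 0.20, 0.31, 0.40])     # made-up values
        log10_perm_md = np.array([-4.8, -3.9, -3.1, -2.0, -1.2])     # made-up values

        coeffs = np.polyfit(seismic_attr, log10_perm_md, deg=2)      # quadratic fit
        predict = np.poly1d(coeffs)

        # Apply the calibrated relation away from well control.
        print(predict(0.25))     # predicted log10 permeability at a new location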