
    Reliability Analysis of Correlated Competitive and Dependent Components Considering Random Isolation Times

    Funding Information: This work was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 62172058) and the Hunan Provincial Natural Science Foundation of China (Grant Nos. 2022JJ10052, 2022JJ30624). Publisher Copyright: © 2023 Tech Science Press. All rights reserved. Peer reviewed. Publisher PDF.

    Reliability Analysis of Complex Systems with Failure Propagation

    Failure propagation is a critical factor for the reliability and safety of complex systems. To identify failure propagation in such systems, a deep fusion model based on a deep belief network (DBN) and a Bayesian structural equation model (BSEM) is proposed. The deep belief network extracts features relating status monitoring data to the performance degradation of different failed components. To calculate the path weights of failure propagation, the Bayesian structural equation model captures the relationships among different fault modes. After obtaining the performance degradation of each fault through the DBN and calculating the path weights of fault propagation with the BSEM, the overall reliability of the system can be derived. An aircraft landing gear system with 19 fault patterns is selected to evaluate the feasibility of the proposed deep fusion model. The results demonstrate that the overall reliability of the system can be obtained by analysing the fault propagation of multiple fault patterns, and that the proposed model has a lower deviation than a traditional back-propagation neural network.
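    As an illustration of the final combination step described above, the following minimal Python sketch (not the authors' formulation; the fault names, weights, and the series-system assumption are invented here) discounts each fault site's reliability by the weighted unreliability propagated from upstream faults, then multiplies the adjusted values to obtain a system-level figure.

```python
# Minimal sketch: combining per-fault reliability estimates (e.g. from a DBN)
# with propagation path weights (e.g. from a BSEM) into an overall reliability.
# All names, values, and the series-system assumption are illustrative.

def system_reliability(fault_reliability, path_weights):
    """fault_reliability: dict fault -> reliability estimate in [0, 1]
       path_weights: dict (source_fault, target_fault) -> propagation weight in [0, 1]"""
    adjusted = dict(fault_reliability)
    # Discount a fault site by the weighted unreliability propagated from upstream faults.
    for (src, dst), w in path_weights.items():
        adjusted[dst] *= 1.0 - w * (1.0 - fault_reliability[src])
    # Series-system assumption: every fault site must survive.
    overall = 1.0
    for r in adjusted.values():
        overall *= r
    return overall

if __name__ == "__main__":
    faults = {"F1": 0.98, "F2": 0.95, "F3": 0.99}
    paths = {("F1", "F2"): 0.6, ("F2", "F3"): 0.3}
    print(round(system_reliability(faults, paths), 4))
```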

    A Condition-Based Maintenance Model for Assets with Accelerated Deterioration Due to Fault Propagation

    Complex industrial assets such as power transformers are subject to accelerated deterioration when one of their constituent components malfunctions and affects the condition of other components, a phenomenon called fault propagation. In this paper, we present a novel approach to optimizing condition-based maintenance policies for such assets by modelling their deterioration as a process with multiple dependent deterioration paths. The aim of the policy is to replace the malfunctioning component and mitigate accelerated deterioration with minimal impact on the business. The maintenance model provides guidance for determining inspection and maintenance strategies that optimize asset availability and operational cost. This is the author accepted manuscript; the final version is available from IEEE via http://dx.doi.org/10.1109/TR.2015.243913
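    The following short Python sketch illustrates the general idea of a deterioration path that accelerates after a neighbouring component malfunctions, with replacement triggered at a periodic inspection. It is not the paper's model; the rates, threshold, inspection interval, and exponential increments are assumed purely for illustration.

```python
import random

# Minimal sketch: condition-based replacement under fault-propagation-accelerated
# deterioration. All parameters below are illustrative assumptions.

def simulate(inspection_interval=5, threshold=100.0, fault_time=20,
             base_rate=2.0, accelerated_rate=5.0, horizon=200, seed=1):
    rng = random.Random(seed)
    level, t = 0.0, 0
    while t < horizon:
        # Deterioration accelerates once the neighbouring component has malfunctioned.
        rate = accelerated_rate if t >= fault_time else base_rate
        level += rng.expovariate(1.0 / rate)  # random increment with mean `rate`
        t += 1
        # Condition is only observed at inspection epochs.
        if t % inspection_interval == 0 and level >= threshold:
            return t  # replacement time
    return horizon

if __name__ == "__main__":
    print("replacement at t =", simulate())
```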

    Sustainable Fault-handling Of Reconfigurable Logic Using Throughput-driven Assessment

    A sustainable Evolvable Hardware (EH) system is developed for SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) using outlier detection and group-testing-based assessment principles. The fault diagnosis methods presented herein leverage throughput-driven, relative fitness assessment to maintain resource viability autonomously. Group-testing-based techniques are developed for adaptive, input-driven fault isolation in FPGAs without the need for exhaustive testing or coding-based evaluation. The techniques keep the device operational and, when possible, generate validated outputs throughout the repair process. Adaptive fault isolation methods based on discrepancy-enabled pairwise comparisons are developed. By observing the discrepancy characteristics of multiple Concurrent Error Detection (CED) configurations, a method for robust detection of faults is developed based on pairwise parallel evaluation using Discrepancy Mirror logic. The results from the analytical FPGA model are demonstrated via a self-healing, self-organizing evolvable hardware system. Reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults, which are successively excluded by group testing using alternate device configurations. This simplifies the system architect's role to defining functionality in a high-level Hardware Description Language (HDL) and choosing a system-level performance-versus-availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at fault, the system identifies a completely validated replacement configuration within three to five positive tests. The approach demonstrates a readily implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
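    To illustrate the group-testing principle behind isolating a single faulty resource, the sketch below repeatedly bisects the candidate set and keeps the half whose exercised configuration shows a discrepancy. It is a generic adaptive group-testing sketch, not the system described above; `run_configuration` is a hypothetical stand-in for evaluating an alternate device configuration.

```python
# Minimal sketch: adaptive group testing to isolate one faulty resource by bisection.
# `run_configuration(group)` is assumed to return True when exercising `group`
# produces a discrepancy (i.e. the group contains the fault).

def isolate_fault(resources, run_configuration):
    candidates = list(resources)
    tests = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests += 1
        if run_configuration(half):          # discrepancy implicates this half
            candidates = half
        else:                                # otherwise the fault is in the other half
            candidates = candidates[len(candidates) // 2:]
    return candidates[0], tests

if __name__ == "__main__":
    faulty = 13
    detect = lambda group: faulty in group   # simulated discrepancy oracle
    print(isolate_fault(range(32), detect))  # -> (13, 5): isolated in five tests
```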

    Optimal Sensor Selection for Health Monitoring Systems

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enabling high-fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA's John H. Glenn Research Center to enhance design-phase planning and preparation for in-space propulsion health management systems (HMS). The information and capabilities required to utilize the S4 approach in support of design-phase development of robust health diagnostics are outlined. A merit metric that quantifies the diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented, and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost-stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
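    As a simple illustration of scoring candidate sensor suites with a merit metric, the sketch below exhaustively evaluates small suites against an assumed fault-coverage table. The coverage data and the coverage-count metric are invented for illustration and are not the S4 merit metric or NGLT engine data.

```python
from itertools import combinations

# Minimal sketch: pick the sensor suite of a given size that maximises an assumed
# fault-coverage score. COVERAGE maps each candidate sensor to the fault modes it
# helps diagnose; all entries are illustrative.

COVERAGE = {
    "P_chamber": {"f1", "f2"},
    "T_turbine": {"f2", "f3", "f4"},
    "Flow_fuel": {"f1", "f4"},
    "Vib_pump":  {"f3", "f5"},
}

def merit(suite):
    covered = set().union(*(COVERAGE[s] for s in suite))
    return len(covered)

def best_suite(size):
    return max(combinations(COVERAGE, size), key=merit)

if __name__ == "__main__":
    suite = best_suite(2)
    print(suite, "covers", merit(suite), "fault modes")
```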

    Optimal sensor placement for sewer capacity risk management

    Complex linear assets, such as those found in transportation and utilities, are vital to economies and, in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escape combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they are detected in time to apply intervention strategies such as pipe maintenance, repair, or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess the likelihood of failure through a novel approach to the placement of sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique and to test its sensitivity to varying environments. The results demonstrated that a multi-phase search strategy, with a varying number of sensors deployed in each phase, could efficiently discover locations of unacceptable risk that could then be managed via a perpetual monitoring, analysis, and remediation process. A number of promising, well-defined future research opportunities also emerged from this research.
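    The sketch below gives a flavour of iteratively relocating a small number of sensors to home in on a high-risk segment. It is a generic local-search illustration, not the dissertation's metaheuristic or agent-based simulation; the risk values, neighbourhood size, and phase count are assumptions.

```python
import random

# Minimal sketch: relocate a few sensors over several phases toward the segment
# with the highest observed risk. `true_risk` is synthetic, illustrative data.

def search(true_risk, n_sensors=3, phases=10, seed=7):
    rng = random.Random(seed)
    n = len(true_risk)
    positions = rng.sample(range(n), n_sensors)
    best = max(positions, key=lambda s: true_risk[s])
    for _ in range(phases):
        # Redeploy sensors around the current best segment and keep any improvement.
        positions = [max(0, min(n - 1, best + rng.randint(-3, 3)))
                     for _ in range(n_sensors)]
        candidate = max(positions, key=lambda s: true_risk[s])
        if true_risk[candidate] > true_risk[best]:
            best = candidate
    return best

if __name__ == "__main__":
    gen = random.Random(0)
    risk = [gen.random() for _ in range(50)]
    print("highest-risk segment found:", search(risk))
```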

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium addressed issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

    Maximum risk reduction with a fixed budget in the railway industry

    Decision-makers in safety-critical industries such as the railways are frequently faced with the complexity of selecting technological, procedural and operational solutions to minimise safety risks to staff, passengers and third parties. In reality, the options for maximising risk reduction are limited by time and budget constraints as well as performance objectives. Maximising risk reduction is particularly necessary in times of economic recession, when critical services such as those on the UK rail network are not immune to budget cuts. This dilemma is further complicated by statutory frameworks stipulating 'suitable and sufficient' risk assessments and constraints such as 'as low as reasonably practicable'. These significantly influence the selection of risk reduction options and their effective implementation. This thesis provides extensive research in this area and highlights the limitations of widely applied practices. These practices have limited grounding in fundamental engineering principles and become impracticable when a constraint such as a fixed budget is applied; this is the current reality of UK rail network operations and risk management. This thesis identifies three main areas of weakness in achieving the desired objectives with current risk reduction methods: inaccurate and unclear problem definition; option evaluation and selection removed from implementation, subsequently resulting in misrepresentation of risks and costs; and use of concepts and methods that are not based on fundamental engineering principles, are not verifiable, and lead to sub-optimal solutions. Although not solely intended for a single industrial sector, this thesis focuses on guiding the railway risk decision-maker by providing a clear categorisation of measures used on railways for risk reduction. The thesis establishes a novel understanding of the application limitations and respective strengths of risk reduction measures. This is achieved by applying 'key generic engineering principles' to measures employed for risk reduction. A comprehensive study of their preventive and protective capability in different configurations is presented. Subsequently, building on this fundamental understanding of risk reduction measures and their railway applications, the 'cost-of-failure' (CoF), 'risk reduction readiness' (RRR) and 'design-operational-procedural-technical' (DOPT) concepts are developed for rational and cost-effective risk reduction. These concepts are shown to be particularly relevant to cases where blind application of economic and mathematical theories is misleading and detrimental to engineering risk management. The case for successfully implementing this framework for maximum risk reduction within a fixed budget is further strengthened by applying, for the first time in railway risk reduction applications, the dynamic programming technique to practical railway examples.
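    Selecting risk reduction measures under a fixed budget maps naturally onto a 0/1 knapsack problem, which dynamic programming solves exactly. The sketch below shows that mapping in Python; the measures, costs, and risk-reduction figures are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch: maximum risk reduction within a fixed budget as a 0/1 knapsack
# solved by dynamic programming. All measures and numbers are illustrative.

def max_risk_reduction(measures, budget):
    """measures: list of (name, cost, risk_reduction); budget: integer cost units."""
    best = [0.0] * (budget + 1)               # best[b] = max reduction with spend <= b
    choice = [set() for _ in range(budget + 1)]
    for name, cost, reduction in measures:
        for b in range(budget, cost - 1, -1):  # iterate downwards so each measure is used once
            if best[b - cost] + reduction > best[b]:
                best[b] = best[b - cost] + reduction
                choice[b] = choice[b - cost] | {name}
    return best[budget], choice[budget]

if __name__ == "__main__":
    options = [("TPWS upgrade", 4, 3.1), ("Fencing", 2, 1.2),
               ("CCTV at crossings", 3, 2.0), ("Staff training", 1, 0.9)]
    print(max_risk_reduction(options, budget=6))
```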