557 research outputs found

    Observer-based robust fault estimation for fault-tolerant control

    A control system is fault-tolerant if it possesses the capability of optimizing the system stability and admissible performance subject to bounded faults, complexity, and modeling uncertainty. Based on this definition, this thesis is concerned with the theoretical development of combined robust fault estimation (FE) and robust active fault-tolerant control (AFTC) for systems with both faults and uncertainties. The thesis develops robust strategies for AFTC involving the joint problem of on-line robust FE and robust adaptive control. Disturbances and modeling uncertainty affect the FE and FTC performance; hence, the proposed robust observer-based fault estimator schemes are combined with several control methods to achieve the desired system performance and robust active fault tolerance. The controller approaches involve concepts of output feedback control, adaptive control, and robust observer-based state feedback control. A new robust FE method is first developed to take into account the joint effect of fault and disturbance signals, thereby rejecting the disturbances and enhancing the accuracy of the fault estimation. This is then extended to encompass robustness with respect to modeling uncertainty. As a further extension, the robust FE and FTC scheme is developed for direct application to smooth non-linear systems via linear parameter-varying (LPV) modeling. The main contributions of the research are thus:
    - The development of a robust observer-based FE method and an integrated design for FE and AFTC systems with bounded fault time derivatives, with the solution based on linear matrix inequality (LMI) methodology, together with a stability proof for the integrated design of the robust FE within the FTC system.
    - An improvement to the proposed robust observer-based FE method and integrated FE/AFTC design in the presence of different disturbance structures.
    - New guidance for the choice of learning rate of the robust FE algorithm.
    - An improvement over the recent literature, obtained by considering the FTC problem in a more general way, for example by using LPV modeling.
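
    As a rough illustration of the LMI machinery behind the first contribution, the sketch below synthesizes an observer gain for an assumed toy plant: it searches for P > 0 and Y = PL such that the error dynamics e' = (A - LC)e are stable, then recovers L = P^{-1}Y. This is a minimal sketch under invented matrices, not the thesis's actual robust FE design; cvxpy with the SCS solver is assumed available.

```python
import numpy as np
import cvxpy as cp

# Toy second-order plant (assumed for illustration; not the thesis's model).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# Find P > 0 and Y = P @ L so that the observer error dynamics
# e' = (A - L C) e satisfy (A - L C)^T P + P (A - L C) < 0.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))
eps = 1e-3
M = A.T @ P + P @ A - C.T @ Y.T - Y @ C
constraints = [P >> eps * np.eye(n),
               0.5 * (M + M.T) << -eps * np.eye(n)]  # symmetrised LMI
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

L = np.linalg.solve(P.value, Y.value)  # recover the gain: L = P^{-1} Y
print("observer gain L =", L.ravel())
```

    In the full FE design the LMI is augmented with disturbance-attenuation terms; the feasibility problem above only shows the basic pattern of turning the bilinear term PL into the linear variable Y.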

    A distributed networked approach for fault detection of large-scale systems

    Networked systems present key new challenges in the development of fault diagnosis architectures. This paper proposes a novel distributed networked fault detection methodology for large-scale interconnected systems. The proposed formulation incorporates a synchronization methodology with a filtering approach in order to reduce the effect of measurement noise and time delays on the fault detection performance. The proposed approach allows the monitoring of multi-rate systems, where asynchronous and delayed measurements are available. This is achieved through the development of a virtual sensor scheme with a model-based re-synchronization algorithm and a delay compensation strategy for distributed fault diagnostic units. The monitoring architecture exploits an adaptive approximator with learning capabilities for handling uncertainties in the interconnection dynamics. A consensus-based estimator with time-varying weights is introduced to improve fault detectability in the case of variables shared among more than one subsystem. Furthermore, time-varying threshold functions are designed to prevent false-positive alarms. Analytical sufficient conditions for fault detectability are derived, and extensive simulation results are presented to illustrate the effectiveness of the distributed fault detection technique.
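
    The time-varying threshold idea can be illustrated with a short sketch. The paper derives its threshold functions analytically; the exponential shape, the constants, and the simulated residual below are assumptions made only to show the mechanism: a bound that starts large to tolerate estimator transients and decays to a steady-state value, against which the residual is tested.

```python
import numpy as np

def threshold(t, eps_bar=0.05, eps0=0.5, lam=2.0):
    # Starts large to tolerate estimator transients, decays to a steady bound.
    return eps_bar + (eps0 - eps_bar) * np.exp(-lam * t)

t = np.linspace(0.0, 5.0, 501)
residual = 0.02 * np.abs(np.sin(5 * t))   # healthy behaviour (assumed)
residual[t > 3.0] += 0.3                  # injected fault at t = 3 s

alarm = residual > threshold(t)
t_detect = t[alarm][0] if alarm.any() else None
print("fault detected at t =", t_detect)
```

    Because the threshold over-bounds the healthy residual at every instant, no false-positive alarm is raised during the transient, while the injected fault crosses the decayed bound almost immediately.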

    Optimised configuration of sensing elements for control and fault tolerance applied to an electro-magnetic suspension system

    New technological advances, and the requirement to abide by increasingly strict safety laws in engineering design projects, strongly affect industrial products in areas such as the automotive, aerospace, and railway industries. The necessity arises to design reduced-cost high-tech products with minimal complexity, optimal performance, effective parameter-robustness properties, and high reliability with fault tolerance. In this context the control system design plays an important role, and its impact on the cost efficiency of a product is crucial. Measuring the information required for the operation of the designed control system is a vital issue, and a number of sensors may be available to select from in order to achieve the desired system properties. However, for a complex engineering system, manually selecting the best sensor set subject to the desired system properties can be very complicated, time consuming, or even impossible, particularly when the number of sensors is large and optimum performance is required. The thesis describes a comprehensive study of sensor selection for control and fault tolerance, with the particular application of an electromagnetic levitation (MagLev) system: an unstable, nonlinear, safety-critical system with non-trivial control performance requirements. The aim of the presented work is to identify effective sensor selection frameworks, subject to given system properties, for controlling the MagLev suspension system with a level of fault tolerance. A particular objective is to identify the minimum possible sensor set that can cover multiple sensor faults while maintaining optimum performance with the remaining sensors. The tools employed combine modern control strategies with multiobjective constrained optimisation methods (for tuning purposes). An important part of the work is the design and construction of a 25 kg MagLev suspension used for experimental verification of the proposed sensor selection frameworks.
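
    The combinatorial nature of the selection problem can be shown with a brute-force sketch. Everything below (sensor names, penalty numbers, feasibility rule) is invented for illustration; the thesis instead tunes a controller on the MagLev model for each candidate set via multiobjective constrained optimisation and evaluates the resulting closed-loop performance.

```python
from itertools import combinations

SENSORS = ["airgap", "flux", "accel", "velocity", "current"]
PERF_PENALTY = {"airgap": 1.0, "flux": 1.2, "accel": 1.5,
                "velocity": 1.4, "current": 1.6}   # invented numbers

def is_feasible(subset):
    # Assumed requirement: the airgap must be measured directly, or be
    # estimable from flux and acceleration together.
    s = set(subset)
    return "airgap" in s or {"flux", "accel"} <= s

def score(subset):
    # Placeholder objective: fewest sensors first, then the lowest
    # performance penalty for the measurements given up.
    perf = sum(PERF_PENALTY[s] for s in SENSORS if s not in subset)
    return (len(subset), perf)

feasible = [c for k in range(1, len(SENSORS) + 1)
            for c in combinations(SENSORS, k) if is_feasible(c)]
best = min(feasible, key=score)
print("smallest workable sensor set:", best)
```

    Even this toy version makes the scaling problem visible: the number of candidate subsets grows as 2^n, which is why a manual procedure becomes impractical for larger sensor counts.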

    Fault Detection and Isolation (FDI) via Neural Networks

    Recent approaches to fault detection and isolation (FDI) for dynamic systems integrate quantitative and qualitative model information using soft computing (SC) methods. In this study, the use of SC methods is considered an important extension to the quantitative model-based approach for residual generation in FDI. When quantitative models are not readily available, a correctly trained neural network (NN) can be used as a non-linear dynamic model of the system. However, the neural network does not easily provide insight into the model. This main difficulty can be overcome using qualitative modeling or rule-based inference methods. The paper presents the properties of several methods of combining quantitative and qualitative system information and their practical value for neural-network-based fault diagnosis.
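
    A minimal sketch of the quantitative part, a neural network acting as a one-step-ahead plant model whose prediction residual is thresholded, is given below. The toy plant, network size, fault magnitude, and threshold are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def plant(y, u):                       # toy nonlinear dynamics (assumed)
    return 0.8 * y + 0.2 * np.tanh(u)

# Collect healthy training data from the plant.
u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = plant(y[k], u[k])
X = np.column_stack([y[:-1], u])       # regressors: y(k), u(k)
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                  random_state=0).fit(X, y[1:])

# On-line: the residual between measurement and NN prediction flags a fault.
y_k, u_k = 0.3, 0.5
y_next_healthy = plant(y_k, u_k)
y_next_faulty = y_next_healthy + 0.4   # additive sensor fault (assumed)
for label, meas in [("healthy", y_next_healthy), ("faulty", y_next_faulty)]:
    r = abs(meas - nn.predict([[y_k, u_k]])[0])
    print(label, "residual =", round(r, 3), "alarm:", r > 0.1)
```

    The qualitative layer the paper discusses would then sit on top of such residuals, using rules to isolate which fault is the likely cause rather than merely detecting that one occurred.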

    An Online Adaptive Machine Learning Framework for Autonomous Fault Detection

    The increasing complexity and autonomy of modern systems, particularly in the aerospace industry, demand robust and adaptive fault detection and health management solutions. The development of a data-driven fault detection system that can adapt to varying conditions and system changes is critical to the performance, safety, and reliability of these systems. This dissertation presents a novel fault detection approach based on the integration of the artificial immune system (AIS) paradigm and Online Support Vector Machines (OSVM). Together, these algorithms create the Artificial Immune System augmented Online Support Vector Machine (AISOSVM). The AISOSVM framework combines the strengths of the AIS and OSVM to create a fault detection system that can effectively identify faults in complex systems while maintaining adaptability. The framework is designed using Model-Based Systems Engineering (MBSE) principles, employing the Capella tool and the Arcadia methodology to develop a structured, integrated approach for the design and deployment of the data-driven fault detection system. A key contribution of this research is the development of a Clonal Selection Algorithm that optimizes the OSVM hyperparameters and the V-Detector algorithm parameters, resulting in a more effective fault detection solution. The integration of the AIS in the training process enables the generation of synthetic abnormal data, mitigating the need for engineers to gather large amounts of failure data, which can be impractical. The AISOSVM also incorporates incremental learning and decremental unlearning for the Online Support Vector Machine, allowing the system to adapt online using lightweight computational processes. This capability significantly improves the efficiency of fault detection systems, eliminating the need for offline retraining and redeployment. Reinforcement Learning (RL) is proposed as a promising future direction for the AISOSVM, as it can help autonomously adapt the system performance in near real-time, further mitigating the need for acquiring large amounts of system data for training, and improving the efficiency of the adaptation process by intelligently selecting the best samples to learn from. The AISOSVM framework was applied to real-world scenarios and platform models, demonstrating its effectiveness and adaptability in various use cases. The combination of the AIS and OSVM, along with the online learning and RL integration, provides a robust and adaptive solution for fault detection and health management in complex autonomous systems. This dissertation presents a significant contribution to the field of fault detection and health management by integrating the artificial immune system paradigm with Online Support Vector Machines, developing a structured, integrated approach for designing and deploying data-driven fault detection systems, and implementing reinforcement learning for online, autonomous adaptation of fault management systems. The AISOSVM framework offers a promising solution to address the challenges of fault detection in complex, autonomous systems, with potential applications in a wide range of industries beyond aerospace.
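
    The two core mechanisms, AIS-style synthetic abnormal data generation and incremental (online) SVM updates, can be sketched as follows. scikit-learn offers no true incremental SVM with decremental unlearning, so SGDClassifier with hinge loss (a linear SVM trained via partial_fit) stands in, and all data are synthetic; this is an illustration of the idea, not the AISOSVM implementation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def features(X):
    # Append the squared radius so a *linear* online SVM can represent the
    # roughly circular self/non-self boundary assumed in this toy example.
    return np.column_stack([X, (X ** 2).sum(axis=1)])

# Healthy ("self") operating data: a cluster near the origin (assumed).
X_healthy = rng.normal(0.0, 0.3, size=(200, 2))

# AIS-style synthetic "non-self" samples: random candidates kept only if
# they fall outside the healthy region (a crude negative-selection rule).
cand = rng.uniform(-2.0, 2.0, size=(800, 2))
X_abnormal = cand[np.linalg.norm(cand, axis=1) > 1.0][:200]

X = np.vstack([X_healthy, X_abnormal])
y = np.hstack([np.zeros(len(X_healthy)), np.ones(len(X_abnormal))])

clf = SGDClassifier(loss="hinge", random_state=0)
clf.partial_fit(features(X), y, classes=[0, 1])   # initial online fit

# New healthy measurements arrive in a batch; the model adapts in place,
# with no offline retraining or redeployment.
X_new = rng.normal(0.0, 0.3, size=(20, 2))
clf.partial_fit(features(X_new), np.zeros(20))

print("near origin -> fault?",
      bool(clf.predict(features(np.array([[0.1, -0.2]])))[0]))
print("far outlier -> fault?",
      bool(clf.predict(features(np.array([[1.8, 1.5]])))[0]))
```

    The synthetic non-self samples play the role the dissertation assigns to the AIS: they let the classifier be trained without collecting real failure data, while partial_fit provides the lightweight online adaptation.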

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study: 1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail; 2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and 3. An MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system. The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed.
While there was substantial variation in the specifics, there was a common conceptual core that emerged—adaptation in the presence of changing circumstances. These changes were couched in various ways—anomalies, disruptions, discoveries—but they all ultimately had to do with changes in underlying assumptions. Invalid assumptions, whether due to unexpected changes in the environment, or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including two other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were: 1. Reference missions: address/refine the resilience needs by exploring a set of reference missions; 2. Capability survey: collect, document, and assess current efforts to develop capabilities and technology that could be used to address the documented needs, both inside and outside NASA; and 3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems. The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions: 1. There is a clear and definitive need for more resilient space systems.
During our study period, the key scientists/engineers we engaged to understand potential future missions confirmed the scientific and risk reduction value of greater resilience in the systems used to perform these missions. 2. Resilience can be quantified in measurable terms—project cost, mission risk, and quality of science return. In order to consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns. 3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research that address the various facets of resilience, some within NASA, and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research as a result of the study. 4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity, with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F. 5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date, there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real funded plan that considers the required fundamental work and evolution of needed capabilities. Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals.
Additional work and future steps are needed to realize the potential of resilient systems—this study provided the necessary catalyst to begin this process.

    Activity Report 1996-97


    An integrated diagnostic architecture for autonomous robots

    Abstract unavailable; please refer to the PDF.