    Robust model-based fault estimation and fault-tolerant control: towards an integration

    To maintain robustly acceptable system performance, fault estimation (FE) is adopted to reconstruct fault signals and a fault-tolerant control (FTC) controller is employed to compensate for the fault effects. The inevitable system and estimation uncertainties give rise to what this work defines as bi-directional robustness interactions between the FE and FTC functions, which poses an important, challenging, and still open integrated FE/FTC design problem addressed in this thesis. An example of fault-tolerant wind turbine pitch control is provided as practical motivation for integrated FE/FTC design.

    To achieve integrated FE/FTC design for linear systems, two strategies are proposed. An H∞ optimization based approach is proposed first for linear systems with differentiable matched faults, using augmented-state unknown input observer FE and adaptive sliding mode FTC. The integrated design is converted into an observer-based robust control problem solved via a single-step linear matrix inequality (LMI) formulation.

    To obtain an integrated design with more freedom that is also applicable to a range of general fault scenarios, a decoupling approach is further proposed. This approach can estimate and compensate for unmatched, non-differentiable faults and perturbations by combining an adaptive sliding mode augmented-state unknown input observer with a backstepping FTC controller. The observer structure recovers the Separation Principle and allows great freedom in the FE/FTC designs.

    Integrated FE/FTC design strategies are also developed for Takagi-Sugeno fuzzy-model-based nonlinear systems, Lipschitz nonlinear systems, and large-scale interconnected systems, based on extensions of the H∞ optimization approach for linear systems.

    Tutorial examples are used to illustrate the design strategies for each approach. Physical systems, a 3-DOF (degree-of-freedom) helicopter and a 3-machine power system, are used to provide further evaluation of the proposed integrated FE/FTC strategies. Future research on this subject is also outlined.
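    The abstract does not reproduce the LMI details, but the flavour of LMI-based observer/controller synthesis can be sketched in a few lines. The following is a minimal, hypothetical illustration only (not the thesis's integrated FE/FTC formulation): it solves two standard Lyapunov LMIs with cvxpy to obtain a stabilizing state-feedback gain and an observer gain for a toy plant; the matrices A, B, C and the margin eps are invented for illustration.

        import numpy as np
        import cvxpy as cp

        # Toy plant (illustrative only, not from the thesis)
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        n, m = A.shape[0], B.shape[1]
        eps = 1e-3  # margin, since SDP solvers handle non-strict inequalities

        # State-feedback synthesis: find P > 0, Y with A P + P A' + B Y + Y' B' < 0, K = Y P^-1
        P = cp.Variable((n, n), symmetric=True)
        Y = cp.Variable((m, n))
        cons = [P >> eps * np.eye(n),
                A @ P + P @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]

        # Observer synthesis: find Q > 0, W with Q A + A' Q - W C - C' W' < 0, L = Q^-1 W
        Q = cp.Variable((n, n), symmetric=True)
        W = cp.Variable((n, 1))
        cons += [Q >> eps * np.eye(n),
                 Q @ A + A.T @ Q - W @ C - C.T @ W.T << -eps * np.eye(n)]

        # Solve both feasibility conditions in a single semidefinite program
        cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
        K = Y.value @ np.linalg.inv(P.value)   # state-feedback gain, A + B K Hurwitz
        L = np.linalg.inv(Q.value) @ W.value   # observer gain, A - L C Hurwitz
        print("K =", K, "\nL =", L)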

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented, and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.

    Comment: 15 pages, 4 figures, 1 table
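    One energy limit commonly treated in such reviews is Landauer's bound on the energy dissipated when erasing one bit, kT ln 2. A quick back-of-the-envelope computation using standard physical constants (the specific numbers below are illustrative and not taken from the paper):

        import math

        k_B = 1.380649e-23            # Boltzmann constant, J/K (exact SI value)
        T = 300.0                     # room temperature, K
        E_bit = k_B * T * math.log(2)   # Landauer bound per erased bit, joules
        print(f"Landauer bound at {T:.0f} K: {E_bit:.3e} J per bit")

        # Hypothetical aggregate: erasing 1e18 bits per second at this bound
        print(f"Power at 1e18 bit erasures/s: {E_bit * 1e18 * 1e3:.2f} mW")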

    Data Systems Fault Coping for Real-time Big Data Analytics Required Architectural Crucibles

    This paper analyzes the properties and characteristics of unknown and unexpected faults introduced into information systems while processing Big Data in real time. The authors hypothesize that such systems face new faults and new requirements for fault handling, and propose an analytic model and architectural framework to assess and manage the faults, to mitigate the risks of correlating or integrating otherwise uncorrelated Big Data, and to ensure the source pedigree, quality, set integrity, freshness, and validity of the data being consumed. We argue that new architectures, methods, and tools for handling and analyzing real-time Big Data systems must address and mitigate faults arising from real-time streaming processes while ensuring that concerns such as synchronization, redundancy, and latency are addressed. The paper concludes that, with improved designs, real-time Big Data systems may continuously deliver the value and benefits of streaming Big Data.
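    The paper argues at the architectural level rather than prescribing mechanisms; purely as a hypothetical illustration of the kind of per-record pedigree, freshness, and validity gating it calls for, a streaming consumer might filter records as follows (field names and thresholds are invented, not from the paper):

        import time
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Record:
            source: str        # pedigree: which producer emitted the record
            payload: dict      # the data itself
            emitted_at: float  # producer timestamp, seconds since epoch

        MAX_STALENESS_S = 2.0                         # freshness budget (illustrative)
        TRUSTED_SOURCES = {"sensor-a", "sensor-b"}    # pedigree whitelist (illustrative)

        def accept(rec: Record, now: Optional[float] = None) -> bool:
            """Gate a streaming record on pedigree, freshness, and basic validity."""
            now = time.time() if now is None else now
            fresh = (now - rec.emitted_at) <= MAX_STALENESS_S
            trusted = rec.source in TRUSTED_SOURCES
            valid = isinstance(rec.payload, dict) and "value" in rec.payload
            return fresh and trusted and valid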

    Screening interacting factors in a wireless network testbed using locating arrays

    Wireless systems expose a wide range of configurable parameters (factors), each taking a number of values (levels), that may influence performance. Exhaustively analyzing all factor interactions is typically not feasible in experimental systems due to the large design space. We propose a method for determining which factors play a significant role in wireless network performance with multiple performance metrics (response variables). Such screening can be used to reduce the set of factors in subsequent experimental testing, whether for modelling or optimization. Our method accounts for pairwise interactions between the factors when deciding significance, because interactions play a significant role in real-world systems. We utilize locating arrays to design the experiment because they guarantee that each pairwise interaction impacts a distinct set of tests. We formulate the analysis as a problem in compressive sensing that we solve using a variation of orthogonal matching pursuit, together with statistical methods to determine which factors are significant. We evaluate the method using data collected from the w-iLab.t Zwijnaarde wireless network testbed and construct a new experiment based on the first analysis to validate the results. We find that the analysis exhibits robustness to noise and to missing data.
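    As a rough sketch of the analysis step only (using plain orthogonal matching pursuit from scikit-learn rather than the authors' variation, and a random synthetic design matrix instead of a locating array), significant main effects and pairwise interactions could be selected like this:

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n_tests, n_factors = 60, 10

        # Synthetic stand-in for the coded test design (a real study would use a locating array)
        X_main = rng.integers(0, 2, size=(n_tests, n_factors)).astype(float)

        # Append one column per pairwise interaction (product of the two factor columns)
        pairs = list(combinations(range(n_factors), 2))
        X = np.hstack([X_main] + [(X_main[:, i] * X_main[:, j])[:, None] for i, j in pairs])

        # Synthetic response: two main effects and one interaction truly matter, plus noise
        y = (3.0 * X_main[:, 1] - 2.0 * X_main[:, 4]
             + 4.0 * X_main[:, 2] * X_main[:, 7]
             + rng.normal(scale=0.3, size=n_tests))

        # Sparse recovery: keep the few terms that best explain the response
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(X, y)
        names = [f"F{i}" for i in range(n_factors)] + [f"F{i}xF{j}" for i, j in pairs]
        for name, coef in zip(names, omp.coef_):
            if abs(coef) > 1e-6:
                print(f"{name}: {coef:+.2f}")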

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview

    Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles may be characterized by high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprising conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development. Given here is an introduction to the AFTA program, its objectives, and key elements of its technical approach. A format is designed for representing mission requirements in a manner suitable for first-order AFTA sizing and analysis, followed by a discussion of the current state of mission requirements acquisition for the targeted Army missions. An overview is given of AFTA's architectural theory of operation.

    Langley aerospace test highlights, 1985

    The role of the Langley Research Center is to perform basic and applied research necessary for the advancement of aeronautics and space flight, to generate new and advanced concepts for the accomplishment of related national goals, and to provide research advice, technological support, and assistance to other NASA installations, other government agencies, and industry. Significant tests performed during calendar year 1985 in Langley test facilities are highlighted. These illustrate both the broad range of research and technology activities at the Langley Research Center and the contributions of this work toward maintaining United States leadership in aeronautics and space research. Other highlights of Langley research and technology for 1985 are described in Research and Technology-1985 Annual Report of the Langley Research Center.

    Robust de-centralized control and estimation for inter-connected systems

    The thesis is concerned with the theoretical development of control for inter-connected systems to achieve overall stability and specified performance. A special feature is the Fault-Tolerant Control (FTC) problem for inter-connected systems in terms of local subsystem actuator fault estimation. Hence, the thesis describes the main FTC challenges of distributed control of uncertain non-linear inter-connected systems. The basic principle adopted throughout the work is that the controller has two components: the first provides the nominal control and handles unmatched components, including uncertainties and disturbances; the second deals with matched components, including uncertainties and actuator faults.

    The main contributions of the thesis are summarised as follows:

    - The non-linear inter-connected systems are controlled by two controllers: the linear part via a linear matrix inequality (LMI) technique and the discontinuous part via Integral Sliding Mode Control (ISMC) based on state feedback.
    - The development of a new observer-based state-estimate control strategy for non-linear inter-connected systems. The technique is applied either to each individual subsystem or to the whole system in one shot.
    - A new Adaptive Output Integral Sliding Mode Control (AOISMC) scheme, based only on output information together with static output feedback control, designed via an LMI formulation to control non-linear inter-connected systems. The new method is verified by application to a mathematical example representing an electrical power generator.
    - The development of a new method to design a dynamic controller within an LMI framework combined with Output Integral Sliding Mode Control (OISMC) to improve stability and performance.
    - Using the above framework, together with LMI tools and ISMC, a method of on-line actuator fault estimation based on the Proportional Multiple Integral Observer (PMIO), applicable to non-linear inter-connected systems.
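    As a minimal numerical sketch of the integral sliding mode idea underlying the above (a single toy subsystem with a matched disturbance, not the thesis's decentralized AOISMC/LMI design; the plant matrices, gains, and disturbance are invented for illustration):

        import numpy as np

        # Toy plant x' = A x + B (u + f), with matched disturbance f (illustrative only)
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        K = np.array([[2.0, 1.0]])      # nominal state feedback; A - B K has poles at -2, -2
        rho, phi = 1.0, 0.01            # switching gain (> |f|) and boundary-layer width
        dt, T = 1e-3, 10.0

        x = np.array([[1.0], [-0.5]])
        x0 = x.copy()
        z = np.zeros((2, 1))            # integral of the nominal closed-loop dynamics

        for k in range(int(T / dt)):
            t = k * dt
            f = 0.5 * np.sin(2.0 * t)   # matched disturbance, |f| <= 0.5 < rho

            u_nom = -K @ x                        # nominal control for the undisturbed plant
            s = B.T @ (x - x0 - z)                # integral sliding variable, s(0) = 0
            u = u_nom - rho * np.tanh(s / phi)    # smooth switching term rejects the disturbance

            dx = A @ x + B @ (u + f)              # plant derivative
            dz = A @ x + B @ u_nom                # nominal-dynamics derivative for the integral term
            x = x + dt * dx                       # explicit Euler integration
            z = z + dt * dz

        print("final state:", x.ravel(), "sliding variable:", s.ravel())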