8,489 research outputs found

    New advances in H∞ control and filtering for nonlinear systems

    The main objective of this special issue is to summarise recent advances in H∞ control and filtering for nonlinear systems, including time-delay, hybrid and stochastic systems. The published papers provide new ideas and approaches, clearly indicating the advances made in problem statements, methodologies or applications with respect to the existing results. The special issue also includes papers focusing on advanced and non-traditional methods and presenting considerable novelties in theoretical background or experimental setup. Some papers present applications to newly emerging fields, such as network-based control and estimation.

    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is within a pointwise-bounded distance of the cost of the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, can be applied to a more general class of nonlinear systems not obeying the constraint on the stochastic forcing. Simulated examples illustrate the methodology. Comment: Published in the SIAM Journal on Control and Optimization.
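The linearization step can be illustrated with a toy discrete-state analogue (a linearly solvable MDP in the sense of Todorov) rather than the paper's continuous sum-of-squares machinery: under the exponential transform z = exp(-V), the nonlinear Bellman equation becomes a linear fixed-point equation for the desirability z, solved below by power iteration. The state costs and passive dynamics are hypothetical.

```python
import numpy as np

# Toy linearly solvable MDP: under the transform z = exp(-V), the
# (nonlinear) Bellman equation becomes the LINEAR eigenproblem
#     z  proportional to  diag(exp(-q)) @ P @ z,
# the discrete analogue of the linear PDE obtained from the HJB equation.
np.random.seed(0)
n = 5
q = np.array([0.0, 1.0, 2.0, 1.0, 0.5])   # hypothetical state costs
P = np.random.rand(n, n)
P /= P.sum(axis=1, keepdims=True)          # passive (uncontrolled) dynamics

M = np.diag(np.exp(-q)) @ P                # linear operator on desirability
z = np.ones(n)
for _ in range(500):                       # power iteration converges to the
    z = M @ z                              # principal eigenvector, since M
    z /= np.linalg.norm(z)                 # is entrywise positive (Perron)

V = -np.log(z)                             # value, up to an additive constant
lam = z @ (M @ z)                          # Rayleigh quotient ~ eigenvalue
assert np.allclose(M @ z, lam * z, atol=1e-8)
print(np.round(V - V.min(), 3))
```

The paper exploits the same linearity in continuous state spaces, where relaxed polynomial solutions of the linear equation yield pointwise bounds on the value function.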

    Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation, including the effect of parameter uncertainties, are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

    Stabilization of Networked Control Systems with Sparse Observer-Controller Networks

    In this paper we provide a set of stability conditions for linear time-invariant networked control systems with arbitrary topology, using a Lyapunov direct approach. We then use these stability conditions to provide a novel low-complexity algorithm for the design of a sparse observer-based control network. We employ distributed observers, using the outputs of other nodes to improve the stability of each observer's dynamics. To avoid unbounded growth of the controller and observer gains, we impose bounds on their norms. The effects of relaxing these bounds are discussed in the search for complete decentralization conditions.
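A minimal single-node sketch of the observer-based machinery (not the paper's sparse network design) shows why the gain bounds matter: the estimation error obeys e+ = (A - LC)e, so observer stability only requires A - LC to be Schur, which the hand-picked, hypothetical gain L below achieves.

```python
import numpy as np

# Discrete-time Luenberger observer for x+ = A x + B u, y = C x:
#     xhat+ = A xhat + B u + L (y - C xhat).
# The error e = x - xhat evolves as e+ = (A - L C) e, independently of u,
# so it suffices that A - L C has all eigenvalues inside the unit circle.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.2], [2.0]])           # hypothetical gain, chosen by hand

assert max(abs(np.linalg.eigvals(A - L @ C))) < 1.0  # Schur check

x = np.array([1.0, -1.0])
xhat = np.zeros(2)
for k in range(200):
    u = -0.5 * xhat[1]                 # hypothetical control from estimate
    y = C @ x
    xhat = A @ xhat + (B * u).ravel() + (L @ (y - C @ xhat)).ravel()
    x = A @ x + (B * u).ravel()
print(np.linalg.norm(x - xhat))        # estimation error has decayed
```

In the networked setting of the paper, each node's innovation term additionally uses the outputs of neighboring nodes, and the norms of the gains are explicitly bounded.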

    Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber-Physical Systems

    In many Cyber-Physical Systems, we encounter the problem of remote state estimation of geographically distributed and remote physical processes. This paper studies the scheduling of sensor transmissions to estimate the states of multiple remote, dynamic processes. Information from the different sensors has to be transmitted to a central gateway over a wireless network for monitoring purposes, where typically fewer wireless channels are available than there are processes to be monitored. For effective estimation at the gateway, the sensors need to be scheduled appropriately, i.e., at each time instant one needs to decide which sensors have network access and which ones do not. To address this scheduling problem, we formulate an associated Markov decision process (MDP). This MDP is then solved using a Deep Q-Network, a recent deep reinforcement learning algorithm that is at once scalable and model-free. We compare our scheduling algorithm to popular scheduling algorithms such as round-robin and reduced-waiting-time, among others. Our algorithm is shown to significantly outperform these algorithms for many example scenarios.
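The scheduling MDP can be reproduced at toy scale. The sketch below swaps the paper's Deep Q-Network for plain tabular Q-learning (the same Bellman target, without function approximation) on a hypothetical two-process, one-channel model whose state is the per-process age of the last received sample; the learned policy recovers the intuitive rule of scheduling the stalest process.

```python
import numpy as np

# Toy scheduling MDP: two processes, one wireless channel, state =
# (age of last received sample per process), capped at K.  Scheduling
# sensor a resets its age to 0 while the other age grows; the stage cost
# is the sum of ages.  Tabular Q-learning stands in for the paper's DQN.
K, gamma, alpha = 5, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((K + 1, K + 1, 2))

def step(s, a):
    a1, a2 = s
    ns = (0, min(a2 + 1, K)) if a == 0 else (min(a1 + 1, K), 0)
    return ns, ns[0] + ns[1]               # next state, stage cost

s = (0, 0)
for _ in range(50_000):
    a = int(rng.integers(2))               # uniform exploratory behavior
    ns, c = step(s, a)                     # off-policy Q-learning update
    Q[s][a] += alpha * (c + gamma * Q[ns].min() - Q[s][a])
    s = ns

policy = Q.argmin(axis=2)                  # greedy (cost-minimizing) policy
print(policy[5, 0], policy[0, 5])          # schedules the staler process
```

The DQN in the paper replaces the table Q with a neural network so that the same update scales to many processes, where the tabular state space would be intractable.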

    Value of Information in Feedback Control

    In this article, we investigate the impact of information on networked control systems, and illustrate how to quantify a fundamental property of stochastic processes that can enrich our understanding about such systems. To that end, we develop a theoretical framework for the joint design of an event trigger and a controller in optimal event-triggered control. We cover two distinct information patterns: perfect information and imperfect information. In both cases, observations are available at the event trigger instantly, but are transmitted to the controller sporadically with one-step delay. For each information pattern, we characterize the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium. Accordingly, we quantify the value of information $\operatorname{VoI}_k$ as the variation in the cost-to-go of the system given an observation at time $k$. Finally, we provide an algorithm for approximation of the value of information, and synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented.
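A heavily simplified illustration of the triggering idea (a one-step quadratic proxy, not the paper's characterization of the value of information): for a hypothetical scalar process, the event trigger transmits whenever the squared mismatch between its state and the controller's prediction exceeds a communication price.

```python
import numpy as np

# Hedged toy sketch: for a scalar process x+ = a x + w, the event trigger
# observes x_k while the controller only holds its own prediction xbar_k.
# A crude one-step proxy for the value of information is the reduction in
# squared error obtained by sending x_k, i.e. (x_k - xbar_k)^2; transmit
# when this exceeds a communication price lam.
rng = np.random.default_rng(1)
a, sigma, lam = 0.9, 1.0, 2.0
x, xbar = 0.0, 0.0
sent, err_sq, T = 0, 0.0, 10_000
for k in range(T):
    voi = (x - xbar) ** 2              # proxy value of information
    if voi > lam:                      # event-triggered transmission
        xbar = x
        sent += 1
    err_sq += (x - xbar) ** 2
    x = a * x + rng.normal(0.0, sigma)
    xbar = a * xbar                    # controller propagates its estimate
print(sent / T, err_sq / T)            # transmission rate, mean sq. error
```

By construction, every recorded squared mismatch is at most the price lam, so the trigger trades transmission rate against estimation error; the paper derives the exact trade-off at a Nash equilibrium of the trigger-controller pair.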

    Distributed Robust Set-Invariance for Interconnected Linear Systems

    We introduce a class of distributed control policies for networks of discrete-time linear systems with polytopic additive disturbances. The objective is to restrict the network-level state and controls to user-specified polyhedral sets for all times. This problem arises in many safety-critical applications. We consider two problems. First, given a communication graph characterizing the structure of the information flow in the network, we find the optimal distributed control policy by solving a single linear program. Second, we find the sparsest communication graph required for the existence of a distributed invariance-inducing control policy. Illustrative examples, including one on platooning, are presented. Comment: 8 pages. Submitted to the American Control Conference (ACC), 201
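A minimal sketch of the invariance condition for the special case of box sets and a fixed, hypothetical feedback gain (the paper instead synthesizes the policy itself via a linear program over general polytopes): because the closed-loop map is linear and both sets are boxes, robust invariance can be certified by checking vertices alone.

```python
import numpy as np
from itertools import product

# Check that a candidate box X = {|x_i| <= xmax_i} is robustly invariant
# for x+ = (A + B K) x + w with box disturbance |w_i| <= wmax_i.  Since
# the map is linear and the sets are boxes, it suffices to verify that
# every vertex of X, under every vertex disturbance, is mapped into X.
A = np.array([[0.5, 0.2],
              [0.3, 1.1]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.3, -0.7]])           # hypothetical stabilizing gain
Acl = A + B @ K                        # closed-loop dynamics
xmax = np.array([1.0, 1.0])            # candidate invariant box
wmax = np.array([0.2, 0.2])            # disturbance box

def robustly_invariant(Acl, xmax, wmax):
    for sx in product([-1, 1], repeat=len(xmax)):
        for sw in product([-1, 1], repeat=len(wmax)):
            xn = Acl @ (np.array(sx) * xmax) + np.array(sw) * wmax
            if np.any(np.abs(xn) > xmax):
                return False
    return True

print(robustly_invariant(Acl, xmax, wmax))  # -> True
```

Enlarging the disturbance box (e.g. wmax = [0.5, 0.5]) breaks invariance for this gain; the paper's linear program searches over disturbance-feedback policies so that such a certificate exists whenever one is possible for the given communication graph.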