56,313 research outputs found
ORACLS: A system for linear-quadratic-Gaussian control law design
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least-squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady-state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise-free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
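The regulator/filter duality that ORACLS exploits can be sketched in modern terms with SciPy standing in for the original Fortran routines (the plant matrices and weights below are illustrative, not from ORACLS itself): the LQR gain comes from one continuous algebraic Riccati equation, and the Kalman-Bucy gain from the dual equation with (A, B) replaced by (Aᵀ, Cᵀ).

```python
# Sketch of the continuous-time LQG design steps that ORACLS automates,
# redone with SciPy; all numerical values are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant: x' = A x + B u, measurement y = C x + noise.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# LQR: minimize the integral of x'Qx + u'Ru.
Q = np.eye(2)
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # state-feedback gain

# Kalman-Bucy filter via duality: solve the dual ARE with (A', C').
W = np.eye(2)                              # process-noise covariance
V = np.array([[0.1]])                      # measurement-noise covariance
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)             # observer gain

# Both design steps must yield stable matrices.
print(np.max(np.linalg.eigvals(A - B @ K).real))   # negative
print(np.max(np.linalg.eigvals(A - L @ C).real))   # negative
```

By the separation property, the two gains can then be combined into a single output-feedback LQG compensator.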
The Optimal Steady-State Control Problem
Many engineering systems -- including electrical power networks, chemical processing plants, and communication networks -- have a well-defined notion of an "optimal" steady-state operating point. This optimal operating point is often defined mathematically as the solution of a constrained optimization problem that seeks to minimize the monetary cost of distributing electricity, maximize the profit of chemical production, or minimize the communication latency between agents in a network. Optimal steady-state regulation is obviously of crucial importance in such systems.
This thesis is concerned with the optimal steady-state control problem, the problem of designing a controller to continuously and automatically regulate a dynamical system to an optimal operating point that minimizes cost while satisfying equipment constraints and other engineering requirements, even as this optimal operating point changes with time. An optimal steady-state controller must simultaneously solve the optimization problem and force the plant to track its solution.
This thesis makes two primary contributions. The first is a general problem definition and controller architecture for optimal steady-state control for nonlinear systems subject to time-varying exogenous inputs. We leverage output regulation theory to define the problem and provide necessary and sufficient conditions on any optimal steady-state controller. Regarding our controller architecture, the typical controller in the output regulation literature consists of two components: an internal model and a stabilizer. Inspired by this division, we propose that a typical optimal steady-state controller should consist of three pieces: an optimality model, an internal model, and a stabilizer. We show that our design framework encompasses many existing controllers from the literature.
The second contribution of this thesis is a complete constructive solution to an important special case of optimal steady-state control: the linear-convex case, when the plant is an uncertain linear time-invariant system subject to constant exogenous inputs and the optimization problem is convex. We explore the requirements on the plant and optimization problem that allow for optimal regulation even in the presence of parametric uncertainty, and we explore methods for stabilizer design using tools from robust control theory.
We illustrate the linear-convex theory on several examples. We first demonstrate the use of the small-gain theorem for stability analysis when a PI stabilizer is employed; we then show that we can use the solution to the H-infinity control problem to synthesize a stabilizer when the PI controller fails. Furthermore, we apply our theory to the design of controllers for the optimal frequency regulation problem in power systems and show that our methods recover standard designs from the literature.
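The flavor of the linear-convex case can be conveyed with a deliberately tiny instance (my own toy example, not the thesis's general construction): an integral stabilizer driven by the gradient of a convex cost on the plant output acts as the "optimality model + internal model + stabilizer" stack, and steers a stable LTI plant to the cost minimizer despite a constant unknown disturbance.

```python
# Toy instance of optimal steady-state control in the linear-convex
# setting: gradient-of-cost feedback through an integrator.
# Illustrative sketch only; the thesis treats a far more general problem.
import numpy as np

# Stable plant: x' = -x + u + w, output y = x, so y_ss = u + w.
w = 2.0                          # constant unknown exogenous input
cost_grad = lambda y: y - 5.0    # gradient of (1/2)(y - 5)^2, minimized at y = 5

dt, x, u = 0.01, 0.0, 0.0
for _ in range(20000):
    y = x
    u -= dt * cost_grad(y)       # optimality model + integral stabilizer
    x += dt * (-x + u + w)       # plant dynamics (forward Euler)

print(round(x, 3))               # -> 5.0, the optimizer, despite unknown w
```

The integrator supplies the internal model for the constant disturbance, which is why the output settles exactly at the minimizer y* = 5 even though w never enters the controller.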
Robust H2/H∞-state estimation for systems with error variance constraints: the continuous-time case
Copyright [1999] IEEE. This material is posted here with permission of the IEEE.

The paper is concerned with the state estimator design problem for perturbed linear continuous-time systems with H∞-norm and variance constraints. The perturbation is assumed to be time-invariant and norm-bounded and enters into both the state and measurement matrices. The problem we address is to design a linear state estimator such that, for all admissible measurable perturbations, the variance of the estimation error of each state is not more than the individual prespecified value, and the transfer function from disturbances to error state outputs satisfies the prespecified H∞-norm upper bound constraint, simultaneously. Existence conditions of the desired estimators are derived in terms of Riccati-type matrix inequalities, and the analytical expression of these estimators is also presented. A numerical example is provided to show the directness and effectiveness of the proposed design approach.
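The per-state variance constraint the paper imposes is easy to check in the nominal (perturbation-free) case, which makes a useful sanity test: for stable error dynamics e' = Aₑe + Bₑw driven by unit-intensity white noise w, the steady-state error covariance solves a Lyapunov equation, and its diagonal entries are the individual state-error variances. A sketch (matrices and bounds are hypothetical, not from the paper):

```python
# Nominal check of per-state error-variance constraints via the
# continuous Lyapunov equation; values are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Error dynamics e' = Ae e + Be w, with w unit-intensity white noise.
Ae = np.array([[-1.0, 0.5], [0.0, -2.0]])
Be = np.array([[1.0], [0.5]])

# Steady-state covariance: Ae P + P Ae' + Be Be' = 0.
P = solve_continuous_lyapunov(Ae, -Be @ Be.T)

sigma_max = np.array([0.7, 0.1])   # hypothetical per-state variance bounds
print(np.round(np.diag(P), 4))     # each entry must not exceed its bound
print(np.diag(P) <= sigma_max)
```

The robust design in the paper replaces this equality with a Riccati-type inequality so the bound holds for every admissible perturbation, not just the nominal plant.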
An internal model approach to (optimal) frequency regulation in power grids with time-varying voltages
This paper studies the problem of frequency regulation in power grids under unknown and possibly time-varying load changes, while minimizing the generation costs. We formulate this problem as an output agreement problem for distribution networks and address it using incremental passivity and distributed internal-model-based controllers. Incremental passivity enables a systematic approach to study convergence to the steady state with zero frequency deviation and to design the controller in the presence of time-varying voltages, whereas the internal-model principle is applied to tackle the uncertain nature of the loads.

Comment: 16 pages. Abridged version appeared in the Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems (MTNS 2014), Groningen, the Netherlands. Submitted in December 201
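For constant loads, the internal-model principle specializes to integral action on the frequency deviation, which the following two-generator toy simulation illustrates (linearized coupling, unit parameters of my choosing; the paper's setting with time-varying voltages and nonlinear power flows is much richer):

```python
# Minimal two-generator sketch of internal-model (integral) frequency
# regulation under constant unknown loads; illustrative only.
import numpy as np

M = np.array([1.0, 1.5])        # generator inertias
D = np.array([0.8, 1.0])        # damping coefficients
b = 2.0                         # linearized line susceptance
load = np.array([0.6, 0.9])     # constant loads, unknown to the controller

theta = np.zeros(2)             # rotor angles
omega = np.zeros(2)             # frequency deviations
xi = np.zeros(2)                # internal-model (integrator) states
dt = 0.005
for _ in range(40000):
    flow = b * (theta[0] - theta[1])
    p_net = np.array([-flow, flow]) - load + xi
    theta += dt * omega
    omega += dt * (p_net - D * omega) / M
    xi -= dt * omega            # integrate the frequency deviation

print(np.round(np.abs(omega), 4))   # -> [0. 0.]: zero frequency deviation
```

A storage function of the form ½Σᵢ Mᵢωᵢ² + ½b δ² + ½Σᵢ ξ̃ᵢ² decays along these dynamics, which is the (incremental-)passivity argument in miniature: the integrators absorb the unknown loads and the frequency deviation vanishes.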
MPC for tracking of piece-wise constant references for constrained linear systems
16th IFAC World Congress, Prague (Czech Republic), 03/07/2005.

Model predictive control (MPC) is one of the few techniques able to handle constraints on both the state and the input of the plant. Admissible evolution and asymptotic convergence of the closed-loop system are ensured by a suitable choice of the terminal cost and terminal constraint. However, most existing results on MPC are designed for a regulation problem: if the desired steady state changes, the MPC controller must be redesigned to guarantee feasibility of the optimization problem, admissible evolution, and asymptotic stability. In this paper a novel formulation of MPC is proposed to track varying references. This controller ensures feasibility of the optimization problem, constraint satisfaction, and asymptotic convergence of the system to any admissible steady state. Hence, the proposed MPC controller ensures offset-free tracking of any sequence of piece-wise constant admissible set points. Moreover, the controller requires the solution of a single QP at each sample time, is not a switching controller, and improves the performance of the closed-loop system.
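The central device of this formulation is an artificial steady state that is optimized jointly with the input sequence, with an offset cost pulling it toward the true reference. The sketch below applies that idea to a double integrator; I use SciPy's generic SLSQP solver in place of a dedicated QP solver, and all weights, horizon, and bounds are my own choices, not the paper's.

```python
# Sketch of MPC for tracking with an artificial steady state, on a
# double integrator with input bounds |u| <= 1; illustrative only.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 8                                    # prediction horizon
T_offset = 100.0                         # weight on the offset cost

def rollout(x0, u):
    xs = [x0]
    for uk in u:
        xs.append(A @ xs[-1] + B.flatten() * uk)
    return xs

def cost(z, x0, p_ref):
    u, p = z[:N], z[N]                   # inputs + artificial target position
    xbar = np.array([p, 0.0])            # steady states of the double integrator
    xs = rollout(x0, u)
    J = sum(np.sum((x - xbar) ** 2) + uk ** 2 for x, uk in zip(xs[:-1], u))
    return J + T_offset * (p - p_ref) ** 2

def solve_mpc(x0, p_ref):
    # Terminal equality: end the prediction at the artificial steady state.
    cons = {"type": "eq",
            "fun": lambda z: rollout(x0, z[:N])[-1] - np.array([z[N], 0.0])}
    res = minimize(cost, np.zeros(N + 1), args=(x0, p_ref), method="SLSQP",
                   bounds=[(-1, 1)] * N + [(None, None)], constraints=cons)
    return res.x[0]                      # apply only the first input

# Closed loop: track set point 3, then switch to -1 -- no redesign needed.
x = np.array([0.0, 0.0])
for k in range(40):
    ref = 3.0 if k < 20 else -1.0
    x = A @ x + B.flatten() * solve_mpc(x, ref)
print(np.round(x, 2))
```

Because the artificial steady state is a decision variable, the optimization stays feasible when the reference jumps; the offset cost then drags the closed loop to the new set point without switching controllers.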
A Separation Principle on Lie Groups
For linear time-invariant systems, a separation principle holds: a stable observer and stable state feedback can be designed for the time-invariant system, and the combined observer and feedback will be stable. For nonlinear systems, a local separation principle holds around steady states, as the linearized system is time-invariant. This paper addresses the issue of a nonlinear separation principle on Lie groups. For invariant systems on Lie groups, we prove there exists a large set of (time-varying) trajectories around which the linearized observer-controller system is time-invariant, as soon as a symmetry-preserving observer is used. Thus a separation principle holds around those trajectories. The theory is illustrated by a mobile robot example, and the developed ideas are then extended to a class of Lagrangian mechanical systems on Lie groups described by Euler-Poincaré equations.

Comment: Submitted to IFAC 201
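The classical LTI statement that this paper generalizes is easy to verify numerically: the eigenvalues of the combined observer/state-feedback loop are exactly the controller poles together with the observer poles, so each can be placed independently. A sketch with arbitrary example matrices:

```python
# The LTI separation principle: closed-loop eigenvalues of the combined
# observer-based feedback system = eig(A - BK) union eig(A - LC).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-2.0, -4.0]).gain_matrix       # controller poles
L = place_poles(A.T, C.T, [-6.0, -8.0]).gain_matrix.T  # observer poles

# Combined dynamics in (x, xhat) coordinates with u = -K xhat.
Acl = np.block([[A,       -B @ K],
                [L @ C,   A - B @ K - L @ C]])
print(np.round(np.sort(np.linalg.eigvals(Acl).real), 3))  # -> [-8. -6. -4. -2.]
```

A change of coordinates to (x, x − x̂) makes Acl block-triangular, which is why the spectrum splits; around generic nonlinear trajectories this structure is lost, and the paper's contribution is recovering it on Lie groups via symmetry-preserving observers.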
A Unified Filter for Simultaneous Input and State Estimation of Linear Discrete-time Stochastic Systems
In this paper, we present a unified optimal and exponentially stable filter for linear discrete-time stochastic systems that simultaneously estimates the states and unknown inputs in an unbiased minimum-variance sense, without making any assumptions on the direct feedthrough matrix. We also derive input and state observability/detectability conditions, and analyze their connection to the convergence and stability of the estimator. We discuss two variations of the filter and their optimality and stability properties, and show that filters in the literature, including the Kalman filter, are special cases of the filter derived in this paper. Finally, illustrative examples are given to demonstrate the performance of the unified unbiased minimum-variance filter.

Comment: Preprint for Automatica
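The special case the abstract mentions is worth keeping in mind: with no unknown input, the unified filter collapses to the ordinary Kalman filter. A minimal time-update/measurement-update recursion on a constant-velocity model (parameters are illustrative):

```python
# Standard Kalman filter recursion -- the no-unknown-input special case
# of the unified filter; model and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity model
C = np.array([[1.0, 0.0]])                # position measurement
Q = 0.01 * np.eye(2)                      # process-noise covariance
R = np.array([[0.25]])                    # measurement-noise covariance

x = np.zeros(2)                           # true state
xh = np.zeros(2)                          # estimate
P = np.eye(2)                             # error covariance
for _ in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # Time update
    xh = A @ xh
    P = A @ P @ A.T + Q
    # Measurement update
    S = C @ P @ C.T + R
    Kk = P @ C.T @ np.linalg.inv(S)
    xh = xh + Kk @ (y - C @ xh)
    P = (np.eye(2) - Kk @ C) @ P

print(np.round(P, 3))   # converged steady-state error covariance
```

The unified filter adds an input-estimation step between the time and measurement updates; deleting that step and fixing the unknown input to zero recovers exactly this recursion.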