The Optimal Steady-State Control Problem

Abstract

Many engineering systems -- including electrical power networks, chemical processing plants, and communication networks -- have a well-defined notion of an "optimal" steady-state operating point. This optimal operating point is often defined mathematically as the solution of a constrained optimization problem that seeks, for example, to minimize the monetary cost of distributing electricity, maximize the profit of chemical production, or minimize the communication latency between agents in a network. Optimal steady-state regulation is of crucial importance in such systems. This thesis is concerned with the optimal steady-state control problem: the problem of designing a controller that continuously and automatically regulates a dynamical system to an optimal operating point -- one that minimizes cost while satisfying equipment constraints and other engineering requirements -- even as this optimal operating point changes with time. An optimal steady-state controller must simultaneously solve the optimization problem and force the plant to track its solution. This thesis makes two primary contributions. The first is a general problem definition and controller architecture for optimal steady-state control of nonlinear systems subject to time-varying exogenous inputs. We leverage output regulation theory to define the problem and to provide necessary and sufficient conditions that any optimal steady-state controller must satisfy. Regarding the controller architecture: a typical controller in the output regulation literature consists of two components, an internal model and a stabilizer. Inspired by this division, we propose that a typical optimal steady-state controller should consist of three pieces: an optimality model, an internal model, and a stabilizer. We show that our design framework encompasses many existing controllers from the literature.
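The constrained optimization problem described above can be sketched in a generic form as follows. This is an illustrative sketch only; the symbols (cost f, steady-state values x-bar, u-bar, y-bar, exogenous input w, plant maps F and h, constraint function g) are placeholder notation, not notation taken from the thesis:

```latex
% Illustrative generic form of an optimal steady-state problem:
% choose a steady-state input minimizing cost, subject to the plant
% being in equilibrium and to operating constraints.
\begin{align*}
\min_{\bar{x},\,\bar{u}} \quad & f(\bar{u}, \bar{y}, w) \\
\text{s.t.} \quad & 0 = F(\bar{x}, \bar{u}, w) && \text{(plant in equilibrium)} \\
& \bar{y} = h(\bar{x}, \bar{u}, w) && \text{(steady-state output)} \\
& g(\bar{u}, \bar{y}) \le 0 && \text{(equipment constraints)}
\end{align*}
```

An optimal steady-state controller must drive the plant to a solution of such a problem even as the exogenous input w, and hence the optimal point, varies with time.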
The second contribution of this thesis is a complete constructive solution to an important special case of optimal steady-state control: the linear-convex case, in which the plant is an uncertain linear time-invariant system subject to constant exogenous inputs and the optimization problem is convex. We explore the requirements on the plant and the optimization problem that permit optimal regulation even in the presence of parametric uncertainty, and we explore methods for stabilizer design using tools from robust control theory. We illustrate the linear-convex theory on several examples. We first demonstrate the use of the small-gain theorem for stability analysis when a PI stabilizer is employed; we then show that the solution to the H-infinity control problem can be used to synthesize a stabilizer when the PI controller fails. Furthermore, we apply our theory to the design of controllers for the optimal frequency regulation problem in power systems and show that our methods recover standard designs from the literature.
