
    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained Linear Parameter Varying (LPV) discrete-time systems is proposed in this paper. The time-varying parameters are assumed to be stochastic in nature, and the system matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. Using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, in which only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex, and its solution is a priori guaranteed to be probabilistically robust up to a user-defined probability level p. The level p is linked to M by an analytic relationship, which establishes a trade-off between computational complexity and robustness of the solution. A receding-horizon strategy is then presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result shows that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
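
    A minimal sketch of the scenario-based finite-horizon problem described above, written with cvxpy under stated assumptions: sample_AB is a hypothetical user-supplied sampler returning one predicted (A_k, B_k) trajectory, the quadratic cost is illustrative, and the analytic relationship linking the number of scenarios M to the probability level p is not reproduced here.

        # Scenario-based finite-horizon LPV control: one open-loop input sequence
        # must satisfy cost and constraints for all M sampled parameter trajectories.
        import numpy as np
        import cvxpy as cp

        def scenario_mpc_step(x0, sample_AB, nu=1, N=10, M=100, u_max=1.0, x_max=5.0):
            """sample_AB(): hypothetical sampler returning a list of N (A_k, B_k) pairs,
            i.e. one sampled predicted parameter trajectory ('scenario')."""
            U = cp.Variable((nu, N))                      # inputs shared by all scenarios
            cost, constraints = 0, [cp.abs(U) <= u_max]
            for _ in range(M):
                AB = sample_AB()                          # draw one scenario
                x = x0
                for k in range(N):
                    A_k, B_k = AB[k]
                    x = A_k @ x + B_k @ U[:, k]           # propagate this scenario
                    cost += cp.sum_squares(x) + cp.sum_squares(U[:, k])
                    constraints += [cp.abs(x) <= x_max]   # state constraint per scenario
            cp.Problem(cp.Minimize(cost / M), constraints).solve()
            return U.value[:, 0]                          # receding horizon: apply first input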

    Robust Nonlinear Optimal Control via System Level Synthesis

    This paper addresses the problem of finite-horizon constrained robust optimal control for nonlinear systems subject to norm-bounded disturbances. To this end, the underlying uncertain nonlinear system is decomposed, based on a first-order Taylor series expansion, into a nominal system and an error (deviation) described as an uncertain linear time-varying system. This decomposition allows us to leverage System Level Synthesis to jointly optimize an affine error feedback, a nominal nonlinear trajectory, and, most importantly, a dynamic linearization-error over-bound used to ensure robust constraint satisfaction for the nonlinear system. The proposed approach thereby results in less conservative planning compared with state-of-the-art techniques. We demonstrate the benefits of the proposed approach in controlling the rotational motion of a rigid body subject to state and input constraints. Comment: submitted to IEEE Transactions on Automatic Control (TAC).
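
    As an illustration of the decomposition step only (the System Level Synthesis feedback design and the error over-bounding are not reproduced), the sketch below builds a linear time-varying error model around a nominal trajectory by numerical differentiation; the discrete-time dynamics function f and the nominal state/input trajectories are assumed to be supplied by the user.

        # Form the LTV error model e_{k+1} ~ A_k e_k + B_k v_k around a nominal
        # trajectory of x_{k+1} = f(x_k, u_k); the Taylor remainder and disturbance
        # would be collected into a term to be over-bounded in the full method.
        import numpy as np

        def numerical_jacobians(f, x, u, eps=1e-6):
            nx, nu = x.size, u.size
            fx = f(x, u)
            A, B = np.zeros((nx, nx)), np.zeros((nx, nu))
            for i in range(nx):
                dx = np.zeros(nx); dx[i] = eps
                A[:, i] = (f(x + dx, u) - fx) / eps   # df/dx along the nominal point
            for j in range(nu):
                du = np.zeros(nu); du[j] = eps
                B[:, j] = (f(x, u + du) - fx) / eps   # df/du along the nominal point
            return A, B

        def ltv_error_model(f, x_nom, u_nom):
            """x_nom: (T+1, nx) nominal states, u_nom: (T, nu) nominal inputs."""
            return [numerical_jacobians(f, x_nom[k], u_nom[k]) for k in range(len(u_nom))]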

    Nonlinear robust H∞ control.

    A new theory is proposed for full-information finite- and infinite-horizon robust H∞ control that is equally effective for regulation and/or tracking problems of a general class of time-varying nonlinear systems in the presence of exogenous disturbance inputs. The theory employs the sequence of linear-quadratic, time-varying approximations recently introduced in the optimal control framework to transform the nonlinear H∞ control problem into a sequence of linear-quadratic robust H∞ control problems, using well-known results from the existing Riccati-based theory of mature classical linear robust control. The proposed method, as in the optimal control case, requires solving an approximating sequence of Riccati equations (ASRE) to find linear time-varying feedback controllers for such disturbed nonlinear systems while employing classical methods. Under very mild conditions of local Lipschitz continuity, these iterative sequences of solutions are known to converge to the unique viscosity solution of the Hamilton-Jacobi-Bellman partial differential equation of the original nonlinear optimal control problem in the weak form (Cimen, 2003); the same should hold for the robust control problems considered herein. The theory is illustrated analytically by applying it directly to some sophisticated nonlinear dynamical models of practical real-world applications. In an iteration sense, such a theory gives the control engineer and designer more transparent control requirements to be incorporated a priori when fine-tuning between robustness and optimality needs. It is believed that the automatic state-regulation robust ASRE feedback control systems and techniques provided in this thesis yield very effective control actions in theory, in view of their computational simplicity and their validation by means of classical numerical techniques, and can be implemented straightforwardly in practice, as the feedback controller is constrained to be linear with respect to its inputs.
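
    The sketch below conveys only the flavour of the approximating-sequence idea, for a discrete-time system written in state-dependent coefficient form x_{k+1} = A(x_k) x_k + B(x_k) u_k: coefficients are frozen along the previous state trajectory, a finite-horizon LQ Riccati recursion is solved, and the loop repeats. The H∞ (game-type) Riccati terms and the continuous-time setting of the thesis are omitted; A_fun and B_fun are assumed user-supplied coefficient functions.

        # ASRE-style iteration (LQ variant, discrete time, illustrative only).
        import numpy as np

        def asre_lqr(A_fun, B_fun, x0, Q, R, T, iters=10):
            x_traj = np.tile(x0, (T + 1, 1))          # initial guess: frozen state
            for _ in range(iters):
                # 1) Freeze coefficients along the previous trajectory -> LTV problem.
                A_seq = [A_fun(x_traj[k]) for k in range(T)]
                B_seq = [B_fun(x_traj[k]) for k in range(T)]
                # 2) Backward Riccati recursion for the finite-horizon LQ problem.
                P, K_seq = Q.copy(), [None] * T
                for k in reversed(range(T)):
                    A, B = A_seq[k], B_seq[k]
                    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                    P = Q + A.T @ P @ (A - B @ K)
                    K_seq[k] = K
                # 3) Forward simulation of the nonlinear model with the LTV feedback.
                x = x0.copy()
                for k in range(T):
                    u = -K_seq[k] @ x
                    x = A_fun(x) @ x + B_fun(x) @ u
                    x_traj[k + 1] = x
            return K_seq, x_traj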

    The Optimal Steady-State Control Problem

    Many engineering systems -- including electrical power networks, chemical processing plants, and communication networks -- have a well-defined notion of an "optimal" steady-state operating point. This optimal operating point is often defined mathematically as the solution of a constrained optimization problem that seeks to minimize the monetary cost of distributing electricity, maximize the profit of chemical production, or minimize the communication latency between agents in a network. Optimal steady-state regulation is obviously of crucial importance in such systems. This thesis is concerned with the optimal steady-state control problem, the problem of designing a controller to continuously and automatically regulate a dynamical system to an optimal operating point that minimizes cost while satisfying equipment constraints and other engineering requirements, even as this optimal operating point changes with time. An optimal steady-state controller must simultaneously solve the optimization problem and force the plant to track its solution. This thesis makes two primary contributions. The first is a general problem definition and controller architecture for optimal steady-state control for nonlinear systems subject to time-varying exogenous inputs. We leverage output regulation theory to define the problem and provide necessary and sufficient conditions on any optimal steady-state controller. Regarding our controller architecture, the typical controller in the output regulation literature consists of two components: an internal model and a stabilizer. Inspired by this division, we propose that a typical optimal steady-state controller should consist of three pieces: an optimality model, an internal model, and a stabilizer. We show that our design framework encompasses many existing controllers from the literature. The second contribution of this thesis is a complete constructive solution to an important special case of optimal steady-state control: the linear-convex case, when the plant is an uncertain linear time-invariant system subject to constant exogenous inputs and the optimization problem is convex. We explore the requirements on the plant and optimization problem that allow for optimal regulation even in the presence of parametric uncertainty, and we explore methods for stabilizer design using tools from robust control theory. We illustrate the linear-convex theory on several examples. We first demonstrate the use of the small-gain theorem for stability analysis when a PI stabilizer is employed; we then show that we can use the solution to the H-infinity control problem to synthesize a stabilizer when the PI controller fails. Furthermore, we apply our theory to the design of controllers for the optimal frequency regulation problem in power systems and show that our methods recover standard designs from the literature.
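
    A toy numerical instance of the controller structure sketched above, not the thesis' general construction: an integral ('internal model') controller is driven by the gradient of an assumed steady-state cost f(y) = 0.5(y - 5)^2 evaluated at the measured output of a stable LTI plant, and steers the output to the minimizer despite an unknown constant disturbance. All plant data below are placeholders.

        # Minimal feedback-optimization loop: integrator driven by the cost gradient.
        import numpy as np

        A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # stable LTI plant x_dot = A x + B u + E w
        B = np.array([[1.0], [0.5]])
        E = np.array([[0.3], [0.1]])
        C = np.array([[1.0, 1.0]])                 # measured output y = C x
        w = np.array([2.0])                        # unknown constant exogenous input

        grad_f = lambda y: y - 5.0                 # gradient of steady-state cost f(y) = 0.5*(y - 5)^2

        x, u = np.zeros(2), np.zeros(1)
        dt, k_gain = 1e-3, 0.5
        for _ in range(100_000):                   # forward-Euler simulation of the closed loop
            y = C @ x
            u = u - dt * k_gain * grad_f(y)        # 'optimality model' feeding the integrator
            x = x + dt * (A @ x + B @ u + E @ w)

        print("steady-state output:", (C @ x)[0])  # approaches the minimizer y* = 5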

    Optimization-Based Power Management of Hybrid Power Systems with Applications in Advanced Hybrid Electric Vehicles and Wind Farms with Battery Storage

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility higher than that of their individual components. The power or energy management control, the 'brain' of these 'hybrid' systems, adaptively determines the power split between multiple subsystems based on the power demand, and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding-horizon optimal control (a.k.a. Model Predictive Control) approach is a natural and systematic framework for formulating this type of power management control. More importantly, the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real time, in spite of the complexities that arise from several system nonlinearities and constraints. The dissertation focuses on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with 'if-then-else' logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy, and there exists a gap between their performance and a best-achievable fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solving this optimal control problem in real time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, the mixed-integer switching modes of their operation, and the time-varying, nonlinear hard constraints that the system variables must satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic, step-by-step improvements in fuel economy while keeping the algorithmic computational requirements within a real-time implementable framework. More specifically, a linear time-varying model predictive control approach is employed first, which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next, the objective function is further refined and broken into short- and long-horizon segments, with the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and the Hamilton-Jacobi-Bellman equation. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver, and the fuel economy is further improved.
    Typical simplifying academic assumptions are kept to a minimum throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demanded by an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real time in the framework of Model Predictive Control (MPC).
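
    A minimal cvxpy sketch of the first-stage wind-plus-battery problem as described above, with hypothetical data: battery power is chosen so that wind plus battery tracks the operator demand, subject to state-of-charge dynamics and limits and a fixed terminal state of charge. The second-stage battery-life screening (weighted Ah-throughput model) is omitted.

        import numpy as np
        import cvxpy as cp

        N, dt = 24, 1.0                                # horizon length [h], step [h]
        wind = 2.0 + np.sin(np.linspace(0, 3, N))      # forecast wind power [MW] (placeholder)
        demand = 2.5 * np.ones(N)                      # operator power demand [MW] (placeholder)
        soc0, soc_end = 5.0, 5.0                       # initial / desired terminal SOC [MWh]
        soc_min, soc_max, p_batt_max = 1.0, 9.0, 1.5   # SOC limits [MWh], power limit [MW]

        p_batt = cp.Variable(N)                        # >0: discharge, <0: charge
        soc = cp.Variable(N + 1)

        constraints = [soc[0] == soc0, soc[N] == soc_end,
                       soc[1:] == soc[:-1] - dt * p_batt,   # state-of-charge dynamics
                       soc >= soc_min, soc <= soc_max,
                       cp.abs(p_batt) <= p_batt_max]
        delivered = wind + p_batt
        problem = cp.Problem(cp.Minimize(cp.sum_squares(delivered - demand)), constraints)
        problem.solve()
        print("squared tracking error over horizon:", problem.value)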

    Stochastic model predictive control for constrained networked control systems with random time delay

    In this paper, the continuous-time stochastic constrained optimal control problem is formulated for the class of networked control systems in which the time delays follow a discrete-time, finite Markov chain. Polytopic overapproximations of the system's trajectories are employed to produce a polyhedral inner approximation of the non-convex constraint set resulting from imposing the constraints in continuous time. The problem is cast in a Markov jump linear systems (MJLS) framework, and a stochastic MPC controller is calculated explicitly, offline, by coupling dynamic programming with parametric piecewise quadratic (PWQ) optimization. The calculated control law guarantees stochastic stability of the closed-loop system in the mean-square sense and respects the state and input constraints in continuous time.
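
    To illustrate the Markov jump linear systems ingredient only, the sketch below runs the unconstrained coupled Riccati (dynamic programming) recursion that yields one feedback gain per delay mode; the constrained, explicit piecewise quadratic solution and the polytopic overapproximation of the paper are not reproduced, and all system data are assumed placeholders supplied by the caller.

        # Coupled Riccati recursion for a finite-horizon MJLS quadratic regulator.
        import numpy as np

        def mjls_lqr_gains(A_modes, B_modes, P_trans, Q, R, horizon):
            """A_modes/B_modes: per-mode matrices; P_trans[i, j] = Prob(next mode j | mode i)."""
            n_modes = len(A_modes)
            X = [Q.copy() for _ in range(n_modes)]            # terminal cost per mode
            gains = []
            for _ in range(horizon):
                K_step, X_new = [], []
                for i in range(n_modes):
                    E_i = sum(P_trans[i, j] * X[j] for j in range(n_modes))  # expected cost-to-go
                    A_i, B_i = A_modes[i], B_modes[i]
                    K_i = np.linalg.solve(R + B_i.T @ E_i @ B_i, B_i.T @ E_i @ A_i)
                    X_new.append(Q + A_i.T @ E_i @ (A_i - B_i @ K_i))
                    K_step.append(K_i)
                gains.append(K_step)
                X = X_new
            return gains[::-1]   # gains[k][i]: feedback at time k when the delay mode is i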