6,077 research outputs found

    Mathematical control of complex systems

    Get PDF
    Copyright © 2013 Zidong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

    On Stochastic Model Predictive Control with Bounded Control Inputs

    Full text link
    This paper is concerned with the problem of Model Predictive Control and Rolling Horizon Control of discrete-time systems subject to possibly unbounded random noise inputs, while satisfying hard bounds on the control inputs. We use a nonlinear feedback policy with respect to noise measurements and show that the resulting mathematical program has a tractable convex solution in both cases. Moreover, under the assumption that the zero-input and zero-noise system is asymptotically stable, we show that the variance of the state, under the resulting Model Predictive Control and Rolling Horizon Control policies, is bounded. Finally, we provide some numerical examples on how certain matrices in the underlying mathematical program can be calculated off-line. (Comment: 8 pages)
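    As a rough illustration of this kind of formulation, the sketch below sets up a convex finite-horizon program with a saturated disturbance-feedback policy and hard input bounds in cvxpy. It is not the paper's exact construction: all problem data (A, B, Q, R, the bounds, the horizon) are illustrative assumptions, and the expected cost is approximated by a sample average rather than computed in closed form.

    ```python
    # Sketch: stochastic MPC with a saturated disturbance-feedback policy and hard
    # input bounds (assumed data, sample-average cost; not the paper's exact method).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, T, K = 2, 6, 20                       # state dim, horizon, noise samples
    A = np.array([[0.9, 0.2], [0.0, 0.8]])   # zero-input, zero-noise system is stable
    B_flat = np.array([0.0, 1.0])            # single input channel
    Q, R = np.eye(n), 0.1
    u_max, phi_max = 1.0, 1.0                # hard input bound, saturation level of the noise feedback
    x0 = np.array([2.0, -1.0])

    eta = cp.Variable(T)                     # open-loop part of the policy
    Theta = cp.Variable((T, T * n))          # gains on the saturated past noise

    constraints = []
    for t in range(T):
        constraints.append(Theta[t, t * n:] == 0)   # causality: u_t uses only w_0..w_{t-1}
    # Hard bound holds for every noise realization, since |sat(w)| <= phi_max:
    constraints.append(cp.abs(eta) + phi_max * cp.sum(cp.abs(Theta), axis=1) <= u_max)

    cost = 0
    for k in range(K):
        W = rng.normal(scale=0.3, size=(T, n))              # possibly unbounded noise
        phi_w = np.clip(W, -phi_max, phi_max).reshape(-1)   # bounded nonlinearity of the noise
        u = eta + Theta @ phi_w                             # policy evaluated on this sample
        x = cp.Constant(x0)
        for t in range(T):
            cost += cp.quad_form(x, Q) + R * cp.square(u[t])
            x = A @ x + u[t] * B_flat + W[t]
        cost += cp.quad_form(x, Q)
    prob = cp.Problem(cp.Minimize(cost / K), constraints)
    prob.solve()
    print("sampled expected cost:", prob.value)
    ```

    The hard bound is enforced robustly through the triangle inequality on the saturated noise terms, which is what makes it compatible with unbounded noise; the program stays convex in (eta, Theta).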

    Model Predictive Control: Multivariable Control Technique of Choice in the 1990s?

    Get PDF
    The state space and input/output formulations of model predictive control are compared, and preference is given to the former because of the industrial interest in multivariable constrained problems. Recently, by abandoning the assumption of a finite output horizon, several researchers have derived powerful stability results for linear and nonlinear systems, with and without constraints, for the nominal case and in the presence of model uncertainty. Some of these results are reviewed. Optimistic speculations about the future of MPC conclude the paper.
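    One concrete way to see the move away from a purely finite output horizon in the state-space formulation is to append the unconstrained infinite-horizon cost-to-go, obtained from the discrete algebraic Riccati equation, as a terminal cost. The sketch below, with assumed system data and using cvxpy and SciPy, illustrates this device; it is not taken from the paper itself.

    ```python
    # Sketch: state-space MPC with a DARE-based terminal cost standing in for the
    # infinite tail of the horizon (assumed data, illustrative only).
    import numpy as np
    import cvxpy as cp
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    Q, R = np.eye(2), np.array([[0.1]])
    P = solve_discrete_are(A, B, Q, R)       # unconstrained infinite-horizon cost-to-go
    P = (P + P.T) / 2                        # symmetrize against numerical round-off

    T, u_max = 10, 1.0
    x0 = np.array([1.0, 0.0])
    x = cp.Variable((2, T + 1))
    u = cp.Variable((1, T))

    cost, constr = 0, [x[:, 0] == x0]
    for t in range(T):
        cost += cp.quad_form(x[:, t], Q) + cp.quad_form(u[:, t], R)
        constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                   cp.abs(u[:, t]) <= u_max]          # input constraint
    cost += cp.quad_form(x[:, T], P)                  # terminal cost approximates the infinite tail
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    print("first input of the MPC sequence:", u.value[:, 0])
    ```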

    Optimal control of linear, stochastic systems with state and input constraints

    Get PDF
    In this paper we extend the work presented in our previous papers (2001), where we considered optimal control of a linear, discrete-time system subject to input constraints and stochastic disturbances. Here we study the same problem but additionally consider state constraints. We discuss several approaches for incorporating state constraints in a stochastic optimal control problem. In particular, we consider a soft-constraint formulation in which state constraint violations are punished by a hefty penalty in the cost function. Because of the stochastic nature of the problem, the penalty on the state constraint violation cannot be made arbitrarily high. We derive a condition on the growth of the state violation cost that has to be satisfied for the optimization problem to be solvable. This condition provides a link between the problem that we consider and the well-known H∞ control problem.
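    A minimal deterministic sketch of the soft-constraint idea is given below, with assumed data and using cvxpy: state constraints are relaxed by nonnegative slack variables whose penalty weight rho enters the cost, while the input bound stays hard. It omits the stochastic disturbances, which is precisely where the paper's growth condition on the penalty becomes relevant.

    ```python
    # Sketch: softened state constraints via slack variables with penalty weight rho
    # (assumed data, deterministic simplification of the paper's setting).
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R, rho = np.eye(2), 0.1, 50.0         # stage weights and slack penalty (illustrative)
    T, x_max, u_max = 15, 1.5, 1.0
    x0 = np.array([2.0, 0.0])                # starts outside the state constraint

    x = cp.Variable((2, T + 1))
    u = cp.Variable((1, T))
    s = cp.Variable((2, T), nonneg=True)     # slack on the state bound |x| <= x_max

    cost, constr = 0, [x[:, 0] == x0]
    for t in range(T):
        cost += cp.quad_form(x[:, t], Q) + R * cp.sum_squares(u[:, t])
        constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                   cp.abs(u[:, t]) <= u_max,            # hard input bound
                   cp.abs(x[:, t]) <= x_max + s[:, t]]  # softened state bound
    cost += rho * cp.sum(s)                             # linear penalty on violations
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    print("cost:", prob.value, "max violation:", s.value.max())
    ```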