15 research outputs found

    Lyapunov methods for time-invariant delay difference inclusions

    Motivated by the fact that delay difference inclusions (DDIs) form a rich modeling class that includes, for example, uncertain time-delay systems and certain types of networked control systems, this paper provides a comprehensive collection of Lyapunov methods for DDIs. First, the Lyapunov–Krasovskii approach, which extends classical Lyapunov theory to time-delay systems, is considered. It is shown that a DDI is KL-stable if and only if it admits a Lyapunov–Krasovskii function (LKF). Second, the Lyapunov–Razumikhin method, which is a type of small-gain approach for time-delay systems, is studied. It is proved that a DDI is KL-stable if it admits a Lyapunov–Razumikhin function (LRF). Moreover, an example of a linear delay difference equation that is globally exponentially stable but does not admit an LRF is provided, which establishes that the existence of an LRF is not a necessary condition for KL-stability of a DDI. It is then shown that the existence of an LRF is a sufficient condition for the existence of an LKF, and that the converse holds only under certain additional assumptions. Furthermore, an LRF is shown to induce a family of sets with certain contraction properties that are particular to time-delay systems, whereas an LKF induces a type of contractive set similar to that induced by a classical Lyapunov function. The class of quadratic candidate functions is used to illustrate the results derived in this paper in terms of both LKFs and LRFs. Both stability analysis and stabilizing controller synthesis methods for linear DDIs are proposed.
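For a given quadratic candidate, a Razumikhin-type contraction condition can be checked numerically. A minimal sketch, assuming a hypothetical scalar delay difference equation x(k+1) = a·x(k) + b·x(k−1) and the candidate V(x) = x² (the system and numbers are illustrative, not from the paper):

```python
import itertools
import numpy as np

# Hypothetical scalar delay difference equation: x(k+1) = a*x(k) + b*x(k-1)
a, b = 0.5, 0.3

# Razumikhin-type check with the quadratic candidate V(x) = x^2: find the
# worst-case ratio V(x(k+1)) / max(V(x(k)), V(x(k-1))) over a grid of
# current/delayed states. A ratio rho < 1 certifies the contraction
# condition on the sampled states.
grid = np.linspace(-1.0, 1.0, 201)
rho = 0.0
for x0, x1 in itertools.product(grid, grid):
    denom = max(x0**2, x1**2)
    if denom < 1e-12:
        continue  # skip the origin, where the ratio is undefined
    rho = max(rho, (a * x0 + b * x1)**2 / denom)

print(f"worst-case contraction ratio rho = {rho:.3f}")
```

For these coefficients the worst case occurs at x(k) = x(k−1), giving rho = (a + b)² < 1, consistent with the sufficiency of an LRF for KL-stability.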

    A game theoretical model of traffic with multiple interacting drivers for use in autonomous vehicle development

    This paper describes a game theoretical model of traffic in which multiple drivers interact with each other. The model is developed using hierarchical reasoning, a game theoretical model of human behavior, and reinforcement learning. It is assumed that the drivers can observe only a partial state of the traffic they are in; therefore, although the environment satisfies the Markov property, it appears non-Markovian to the drivers. Hence, each driver implicitly has to find a policy, i.e., a mapping from observations to actions, for a Partially Observable Markov Decision Process. In this paper, a computationally tractable solution to this problem is provided by employing hierarchical reasoning together with a suitable reinforcement learning algorithm. Simulation results are reported, which demonstrate that the resulting driver models provide reasonable behavior for the given traffic scenarios. © 2016 American Automatic Control Council (AACC)

    Load governor for fuel cell oxygen starvation protection: a robust nonlinear reference governor approach

    No full text

    A sum-of-squares-based procedure to approximate the Pontryagin difference of basic semi-algebraic sets

    No full text
    The Pontryagin difference (P-difference) of two sets A and B is the set of all points c such that the Minkowski sum of {c} and B is contained in A. Such a set difference plays an important role in robust model predictive control and set-theoretic control. In this paper, we show that an inner approximation of the P-difference between two sets described by collections of polynomial inequalities can be computed using sum-of-squares programming. The effectiveness of the procedure is shown with some computational examples.
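The definition is easiest to see in the elementary case of axis-aligned boxes, where the P-difference has a closed form (the paper's contribution is the much harder semi-algebraic case via sum-of-squares programming; this sketch only illustrates the set operation itself):

```python
import numpy as np

# Pontryagin difference A ⊖ B = {c : c + B ⊆ A} for axis-aligned boxes,
# a case where it is exact: per coordinate, [a_lo, a_hi] ⊖ [b_lo, b_hi]
# = [a_lo - b_lo, a_hi - b_hi].
def box_pdiff(A_lo, A_hi, B_lo, B_hi):
    C_lo = np.asarray(A_lo, dtype=float) - np.asarray(B_lo, dtype=float)
    C_hi = np.asarray(A_hi, dtype=float) - np.asarray(B_hi, dtype=float)
    if np.any(C_lo > C_hi):
        raise ValueError("empty Pontryagin difference")
    return C_lo, C_hi

# A = [-2, 2]^2, B = [-0.5, 0.5]^2  ->  A ⊖ B = [-1.5, 1.5]^2
lo, hi = box_pdiff([-2, -2], [2, 2], [-0.5, -0.5], [0.5, 0.5])
print(lo, hi)
```

Translating any point of the result by the full set B stays inside A, which is exactly the robustness property exploited in robust MPC.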

    Explicit Reference Governor for Constrained Maneuver and Shape Control of a Seven-State Multibody Aircraft

    No full text

    Horizon-1 predictive control of automotive electromagnetic actuators

    No full text
    Electromagnetically driven mechanical systems are characterized by fast nonlinear dynamics that are subject to physical and performance constraints, which makes controller design a challenging problem. Although model predictive control (MPC) is well suited for dealing with constraints, the fast dynamics of electromagnetic (EM) actuators render most standard MPC approaches impractical. This paper proposes a horizon-1 MPC strategy that can handle both the state/input constraints and the computational complexity limitations associated with EM actuator applications. A flexible Lyapunov function is employed to obtain a nonconservative stability guarantee for the horizon-1 MPC scheme. Moreover, an invariant region of attraction is provided for the closed-loop MPC system. The simulation results obtained on a validated model of an EM engine valve actuator show that performance is improved with respect to previous strategies, and that the proposed algorithm can run within a sampling period on the order of a millisecond.
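The interplay between a horizon-1 controller and a relaxed Lyapunov decrease condition can be sketched on a toy problem. The plant, cost, slack schedule and Lyapunov function below are hypothetical stand-ins, not the actuator model or design of the paper:

```python
import numpy as np

# Toy horizon-1 MPC with a relaxed ("flexible") Lyapunov decrease condition,
# sketched on a hypothetical unstable scalar plant x+ = a*x + b*u.
a, b = 1.2, 1.0               # plant coefficients (illustrative)
u_max = 1.0                   # input constraint |u| <= u_max
rho = 0.9                     # desired contraction rate for V(x) = x^2

x = 2.0
u_grid = np.linspace(-u_max, u_max, 401)  # brute-force search over inputs
for k in range(40):
    x_next = a * x + b * u_grid
    cost = x_next**2 + 0.1 * u_grid**2
    # Flexible condition: V(x+) <= rho*V(x) + slack, with a slack that
    # decays over time, so V may temporarily rise but must eventually
    # contract (a hypothetical slack schedule).
    slack = 2.0 * 0.8**k
    feasible = x_next**2 <= rho * x**2 + slack
    cost = np.where(feasible, cost, np.inf)
    u = u_grid[np.argmin(cost)]           # best one-step move that respects V
    x = a * x + b * u

print(abs(x))
```

Despite optimizing only one step ahead, the Lyapunov side constraint keeps the closed loop stable; the decaying slack is what makes the condition "flexible" rather than forcing monotone decrease.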

    Optimization and Optimal Control in Automotive Systems

    No full text
    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of increasingly common systematic methods. Even systematic methods can be developed and applied in a large number of forms, so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems, which are also considered. Some of these problems fall into the domain of traditional multi-disciplinary optimization applied to system, sub-system or component design parameters and are solved using system models; others require applying optimization directly to experimental systems to determine either the optimal calibration or the optimal control trajectory/control law. Optimization and Optimal Control in Automotive Systems reflects the state of the art in, and promotes a comprehensive approach to, optimization in automotive systems by addressing its different facets, discussing basic methods, and showing practical approaches and specific applications of optimization to design and control problems for automotive systems. The book will be of interest both to academic researchers, whether studying optimization or linked with the automotive industry, and to industrially based engineers and automotive designers.

    Explicit Reference Governor for the Constrained Control of Linear Time-Delay Systems

    No full text
    This paper introduces an explicit reference governor to supervise closed-loop linear time-delay systems. The proposed scheme enforces state and input constraints by modifying the reference of the supervised system so that the state vector always belongs to admissible sublevel sets of a suitably defined Lyapunov-Krasovskii functional. To accomplish this, this paper extends the existing definition of 'dynamic safety margin' to a time-delay setting and illustrates how to employ classic Lyapunov-Krasovskii functionals even though the reference is time varying. Constraint enforcement for arbitrary reference signals and asymptotic convergence to any strictly steady-state admissible set point are rigorously proven. Experimental results are reported to demonstrate the simplicity, practicality, and robustness of the proposed method.
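The core explicit-reference-governor idea (advance the applied reference only as fast as a dynamic safety margin allows) can be sketched on a scalar, delay-free loop. Everything below is a hypothetical stand-in: the paper's setting is a time-delay system supervised via a Lyapunov-Krasovskii functional, not this toy plant:

```python
import numpy as np

# Minimal explicit-reference-governor sketch on a scalar, delay-free loop.
a = 0.9                       # pole of the pre-stabilized closed loop
x_max = 1.0                   # state constraint: x <= x_max
x, v, r = 0.0, 0.0, 0.9       # state, applied reference, desired reference
kappa = 0.5                   # governor gain (hypothetical)

for k in range(300):
    V = (x - v)**2                         # Lyapunov function about equilibrium v
    Gamma = (x_max - v)**2                 # largest sublevel set inside constraint
    margin = max(0.0, Gamma - V)           # dynamic safety margin
    step = min(kappa * margin, abs(r - v)) # never overshoot the desired reference
    v += step * np.sign(r - v)             # steer v toward r as safety permits
    x = a * x + (1 - a) * v                # plant: x tracks the applied reference

print(x, v)
```

Because v only moves while the state sits strictly inside an admissible sublevel set, the constraint is never violated along the way, and v (hence x) converges to the strictly admissible set point r.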

    Approximating optimal finite horizon feedback by model predictive control

    No full text
    We consider a finite-horizon continuous-time optimal control problem with nonlinear dynamics, an integral cost, control constraints and a time-varying parameter which represents perturbations or uncertainty. After discretizing the problem, we employ a Model Predictive Control (MPC) approach: we first solve the problem over the entire remaining time horizon and then apply the first element of the optimal discrete-time control sequence, as a function constant in time, to the continuous-time system over the sampling interval. The state at the end of the sampling interval is then measured (estimated) with a certain error, and the process is repeated at each step over the remaining horizon. As a result, we obtain a piecewise constant function of time representing the MPC-generated control signal; hence MPC turns out to be an approximation of the optimal feedback control for the continuous-time system. In our main result we derive an estimate of the difference between the MPC-generated state and control trajectories and those generated by the optimal feedback, both obtained for the same value of the perturbation parameter, in terms of the step size of the discretization and the measurement error. Numerical results illustrating our estimate are reported.
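The shrinking-horizon procedure described above can be sketched for a toy linear-quadratic case, where each remaining-horizon problem is solvable by a Riccati recursion. The discretized double integrator, weights and noise level are hypothetical; the paper treats general nonlinear dynamics with control constraints:

```python
import numpy as np

# Shrinking-horizon MPC sketch: at each step, solve the LQ problem over the
# remaining horizon and apply only the first control move.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

def first_step_gain(N):
    # Backward Riccati recursion for the N-step problem; only the first
    # step's gain is needed, since MPC discards the rest of the sequence.
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

rng = np.random.default_rng(0)
T = 50
x = np.array([1.0, 0.0])
for k in range(T):
    u = -first_step_gain(T - k) @ x        # solve over the remaining horizon
    x = A @ x + B @ u                      # hold u constant over the interval
    x = x + 1e-3 * rng.standard_normal(2)  # imperfect state measurement

print(np.linalg.norm(x))
```

The resulting piecewise constant input drives the state close to the origin, with a residual set by the measurement error, mirroring the error estimate in terms of step size and measurement error.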