
    Learning the cost-to-go for mixed-integer nonlinear model predictive control

    Applying nonlinear model predictive control (NMPC) to problems with hybrid dynamical systems, disjoint constraints, or discrete controls often results in mixed-integer formulations with both continuous and discrete decision variables. However, solving mixed-integer nonlinear programs (MINLPs) in real time is challenging, which can be a limiting factor in many applications. To address the computational complexity of solving the mixed-integer nonlinear model predictive control problem in real time, this paper proposes an approximate mixed-integer NMPC formulation based on value function approximation. Leveraging Bellman's principle of optimality, the key idea is to divide the prediction horizon into two parts, where the optimal value function of the latter part of the horizon is approximated offline using expert demonstrations. Doing so allows the mixed-integer NMPC problem to be solved online with a considerably shorter prediction horizon, thereby reducing the online computation cost. The paper uses an inverted pendulum example with discrete controls to illustrate this approach.
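    The horizon-splitting idea can be sketched in a few lines: the cost-to-go of the tail of the horizon is approximated offline from demonstrations, and the online problem only enumerates a short horizon of discrete controls. The scalar dynamics, costs, and nearest-neighbour value approximator below are illustrative assumptions, not the paper's formulation.

    ```python
    from itertools import product
    import numpy as np

    A, B = 0.9, 1.0          # illustrative scalar dynamics x+ = A x + B u
    U = [-1.0, 0.0, 1.0]     # discrete control set

    def stage_cost(x, u):
        return x**2 + 0.1 * u**2

    # Offline: pretend expert demonstrations give (state, cost-to-go) samples.
    xs = np.linspace(-5, 5, 201)
    V_demo = 2.5 * xs**2     # stand-in for a demonstrated optimal cost-to-go

    def V_hat(x):
        """Nearest-neighbour value-function approximation from demonstrations."""
        return V_demo[np.argmin(np.abs(xs - x))]

    def short_horizon_mpc(x0, N=2):
        """Enumerate discrete control sequences over a short horizon N and
        close the horizon with the learned terminal cost V_hat."""
        best_u, best_cost = None, np.inf
        for seq in product(U, repeat=N):
            x, cost = x0, 0.0
            for u in seq:
                cost += stage_cost(x, u)
                x = A * x + B * u
            cost += V_hat(x)          # approximates the tail of the horizon
            if cost < best_cost:
                best_cost, best_u = cost, seq[0]
        return best_u

    # Closed loop: the short-horizon controller still drives x to the origin.
    x = 4.0
    for _ in range(20):
        x = A * x + B * short_horizon_mpc(x)
    ```

    Because the terminal cost summarizes the discarded tail, the online enumeration is over `len(U)**N` sequences instead of a long horizon, which is the source of the computational saving.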

    Online-Computation Approach to Optimal Control of Noise-Affected Nonlinear Systems with Continuous State and Control Spaces

    © 2007 EUCA. A novel online-computation approach to optimal control of nonlinear, noise-affected systems with continuous state and control spaces is presented. In the proposed algorithm, system noise is explicitly incorporated into the control decision. This leads to superior results compared to state-of-the-art nonlinear controllers that neglect this influence. The solution of an optimal nonlinear controller for a corresponding deterministic system is employed to find a meaningful state space restriction. This restriction is obtained by means of approximate state prediction using the noisy system equation. Within this constrained state space, an optimal closed-loop solution for a finite decision-making horizon (prediction horizon) is determined within an adaptively restricted optimization space. Interleaving stochastic dynamic programming and value function approximation yields a solution to the considered optimal control problem. The enhanced performance of the proposed discrete-time controller is illustrated by means of a scalar example system. Nonlinear model predictive control is applied to address approximate treatment of infinite-horizon problems by the finite-horizon controller.
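    Why explicitly incorporating noise into the control decision can matter is easy to sketch: for a nonlinear cost, the expected cost E[c(x⁺)] differs from the certainty-equivalent cost c(E[x⁺]), so a controller that averages over sampled noise picks a different input than one that drops the noise. The dynamics, the penalty-region cost, and the noise level below are illustrative assumptions, not the paper's example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def f(x, u, w):
        return x + u + w                          # noisy successor state

    def c(x):
        # quadratic cost plus a steep penalty region beyond x = 1
        return x**2 + 50.0 * (x > 1.0)

    controls = np.linspace(-2, 2, 401)
    x0 = 0.0
    w = rng.normal(0.0, 0.6, 5000)                # sampled disturbances

    # Certainty-equivalent choice: ignore the noise entirely.
    det_u = controls[np.argmin([c(f(x0, u, 0.0)) for u in controls])]

    # Stochastic choice: minimize the sampled expected cost.
    sto_u = controls[np.argmin([np.mean(c(f(x0, u, w))) for u in controls])]
    ```

    The deterministic controller sits at the penalty boundary's comfort point (here `u = 0`), while the noise-aware controller backs away from the penalty region, illustrating the "superior results" claim in a minimal setting.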

    Reliable autonomous vehicle control - a chance constrained stochastic MPC approach

    In recent years, there has been growing interest in the development of systems capable of performing tasks with a high level of autonomy and without human supervision. Such systems are known as autonomous systems and have been studied in many industrial applications, such as the automotive and aerospace industries. Autonomous vehicles have gained considerable interest in recent years and have been considered a viable solution for minimizing the number of road accidents. Given the complexity of the vehicle dynamics and the physical restrictions on an autonomous vehicle, model predictive control is an attractive control technique for solving the problem of path planning and obstacle avoidance. However, an autonomous vehicle should be capable of driving adaptively in the face of both deterministic and stochastic events on the road. Therefore, control design for safe, reliable autonomous driving should account for vehicle model uncertainty as well as uncertain external influences. Stochastic model predictive control provides a convenient scheme for controlling autonomous vehicles on moving horizons, where chance constraints are used to guarantee reliable fulfillment of trajectory constraints and safety against static and random obstacles. Solving this kind of problem is known as chance-constrained model predictive control and requires the solution of a chance-constrained optimization problem on a moving horizon. According to the literature, the major challenge in solving chance-constrained optimization is evaluating the probability in the constraint, and approximation methods have therefore been proposed for this task. In the present thesis, the chance-constrained optimization for the autonomous vehicle is solved through an approximation method in which the probability constraint is approximated by a smooth parametric function.
    This methodology comprises two approaches, an inner approximation and an outer approximation, for solving chance-constrained optimization problems. The aim of these approximation methods is to reformulate the chance-constrained optimization problem as a sequence of nonlinear programs. Finally, three case studies of autonomous vehicle tracking and obstacle avoidance are presented, in which three probability levels of reliability are considered for the optimal solution.
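    The core approximation step can be sketched directly: the nonsmooth indicator inside P[g(u, ξ) ≤ 0] ≥ α is replaced by a smooth parametric surrogate (here a steep sigmoid with smoothing parameter τ), making the constraint differentiable and usable inside an ordinary nonlinear program. The constraint function, noise model, and τ below are illustrative assumptions, not the thesis's exact parametric family.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def g(u, xi):
        return u + xi - 1.0                        # "stay left of an obstacle at 1"

    def violation_prob(u, xi):
        """Empirical violation probability: nonsmooth in u (an indicator)."""
        return np.mean(g(u, xi) > 0.0)

    def smooth_violation(u, xi, tau=0.02):
        """Smooth parametric surrogate: a steep sigmoid replaces the indicator,
        so the chance constraint becomes differentiable in u."""
        return np.mean(1.0 / (1.0 + np.exp(-g(u, xi) / tau)))

    xi = rng.normal(0.0, 0.3, 20000)               # sampled uncertainty
    alpha = 0.95

    u = 0.3                                        # a candidate control input
    p_true = violation_prob(u, xi)                 # nonsmooth estimate
    p_smooth = smooth_violation(u, xi)             # smooth estimate
    ```

    For small τ the smooth surrogate tracks the true violation probability closely, so the constraint `smooth_violation(u, xi) <= 1 - alpha` can be handed to a standard NLP solver; shifting the sigmoid in or out yields conservative (inner) or relaxed (outer) approximations of the feasible set.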

    Approximate solution of stochastic infinite horizon optimal control problems for constrained linear uncertain systems

    We propose a Model Predictive Control (MPC) scheme with a single-step prediction horizon to solve infinite-horizon optimal control problems with the expected sum of convex stage costs for constrained linear uncertain systems. The proposed method relies on two techniques. First, we estimate the expected values of the convex costs using a computationally tractable approximation, achieved by sampling across the space of disturbances. Second, we implement a data-driven approach to approximate the optimal value function and its corresponding domain, through systematic exploration of the system's state space. These estimates are subsequently used as the terminal cost and terminal set within the proposed MPC. We prove recursive feasibility, robust constraint satisfaction, and convergence in probability to the target set. Furthermore, we prove that the estimated value function converges to the optimal value function in a local region. The effectiveness of the proposed MPC is illustrated with detailed numerical simulations and comparisons with a value iteration method and a Learning MPC that minimizes a certainty-equivalent cost.
    Comment: Submitted to the IEEE Transactions on Automatic Control
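    The two ingredients of the single-step scheme can be sketched together: a fixed batch of disturbance samples turns the expected cost into a tractable average, and an approximate value function closes the one-step horizon as a terminal cost. The scalar system, quadratic costs, and the stand-in value function below are illustrative assumptions, not the paper's data-driven estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    A, B = 1.0, 1.0
    W = rng.normal(0.0, 0.1, 500)            # fixed disturbance samples

    def stage(x, u):
        return x**2 + u**2                   # convex stage cost

    def V(x):
        # stand-in for the data-driven approximate value function
        return 1.6 * x**2

    def one_step_mpc(x, controls=np.linspace(-2, 2, 201)):
        """Single-step horizon: minimize the stage cost plus the sampled
        expectation of the terminal cost over the disturbance batch."""
        costs = [stage(x, u) + np.mean(V(A * x + B * u + W)) for u in controls]
        return controls[int(np.argmin(costs))]

    # Closed loop under actual disturbances: the state converges in
    # probability to a neighbourhood of the origin.
    x = 3.0
    for _ in range(15):
        x = A * x + B * one_step_mpc(x) + rng.normal(0.0, 0.1)
    ```

    The one-step horizon is the whole point: with a good terminal cost and terminal set, the online problem collapses to a single convex minimization over the input.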

    Approximate Dynamic Programming for Constrained Piecewise Affine Systems with Stability and Safety Guarantees

    Infinite-horizon optimal control of constrained piecewise affine (PWA) systems has been approximately addressed by hybrid model predictive control (MPC), which, however, has computational limitations both in offline design and in online implementation. In this paper, we consider an alternative approach based on approximate dynamic programming (ADP), an important class of methods in reinforcement learning. We accommodate non-convex union-of-polyhedra state constraints and linear input constraints into ADP by designing PWA penalty functions. PWA function approximation is used, which allows for a mixed-integer encoding to implement ADP. The main advantage of the proposed ADP method is its online computational efficiency. In particular, we propose two control policies, which lead to solving a smaller-scale mixed-integer linear program than conventional hybrid MPC, or a single convex quadratic program, depending on whether the policy is implicitly determined online or explicitly computed offline. We characterize the stability and safety properties of the closed-loop systems, as well as the sub-optimality of the proposed policies, by quantifying the approximation errors of the value functions and policies. We also develop an offline mixed-integer linear programming-based method to certify the reliability of the proposed method. Simulation results on an inverted pendulum with elastic walls and on an adaptive cruise control problem validate the control performance in terms of constraint satisfaction and CPU time.
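    The greedy ADP policy for a PWA system can be sketched with a max-of-affine (convex PWA) value approximation: the policy minimizes stage cost plus approximate cost-to-go, here by simple enumeration over a control grid rather than the paper's mixed-integer encoding. The two-mode dynamics, costs, and fitted pieces below are illustrative assumptions.

    ```python
    import numpy as np

    def f(x, u):
        # two-mode PWA dynamics: a stiffer mode for x < 0 (cf. an elastic wall)
        return 0.9 * x + u if x >= 0 else 0.5 * x + u

    def stage(x, u):
        return abs(x) + 0.1 * abs(u)

    # max-of-affine approximation of the cost-to-go: V(x) = max_j (a_j x + b_j),
    # a convex PWA function (here simply 2|x|)
    pieces = [(-2.0, 0.0), (2.0, 0.0)]

    def V(x):
        return max(a * x + b for a, b in pieces)

    def policy(x, U=np.linspace(-1, 1, 81)):
        """Greedy ADP policy: one-step lookahead against the PWA value
        function, with input constraints enforced by the grid U."""
        return min(U, key=lambda u: stage(x, u) + V(f(x, u)))

    # Closed loop: the greedy policy stabilizes the PWA system.
    x = 2.0
    for _ in range(12):
        x = f(x, policy(x))
    ```

    In the paper's setting the same one-step lookahead is posed exactly as a small mixed-integer linear program over the PWA pieces and modes; the grid enumeration here just makes the structure visible in a few lines.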