
    Suboptimal stabilizing controllers for linearly solvable systems

    This paper presents a novel method to synthesize stochastic control Lyapunov functions for a class of nonlinear, stochastic control systems. In this work, the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation is transformed into a linear partial differential equation for a class of systems with a particular constraint on the stochastic disturbance. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing approximating polynomial solutions to be generated using sum-of-squares programming. The resulting solutions are shown to be stochastic control Lyapunov functions with a number of compelling properties; in particular, a priori bounds on trajectory suboptimality are established for these approximate value functions. The result is a technique whereby approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems.
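
    For context, the transformation referred to above is the standard one for linearly solvable stochastic control; the sketch below uses generic notation (dynamics f, G, B, state cost q, control weight R, scaling \lambda) rather than symbols taken from the paper. For dynamics dx = (f(x) + G(x)u) dt + B(x) d\omega and cost rate q(x) + (1/2) u^T R u, the stationary Hamilton-Jacobi-Bellman equation

        \[
        0 = \min_{u}\Big[\, q + \tfrac{1}{2}u^{\top}Ru + (f + Gu)^{\top}\nabla V
            + \tfrac{1}{2}\operatorname{tr}\!\big(BB^{\top}\nabla^{2}V\big) \Big]
        \]

    becomes, under the disturbance constraint \(\lambda\, G R^{-1} G^{\top} = B B^{\top}\) and the substitution \(V = -\lambda \log \Psi\), the linear partial differential equation in the desirability \(\Psi\)

        \[
        0 = -\frac{q}{\lambda}\,\Psi + f^{\top}\nabla\Psi
            + \tfrac{1}{2}\operatorname{tr}\!\big(BB^{\top}\nabla^{2}\Psi\big),
        \]

    which is the equation whose polynomial relaxations the sum-of-squares hierarchy computes.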

    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is within a pointwise bound of the optimal cost. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, the paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, can be applied to a more general class of nonlinear systems not obeying the constraint on the stochastic forcing. Simulated examples illustrate the methodology. Published in the SIAM Journal on Control and Optimization.
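
    As a concrete illustration of how an approximate desirability function is turned into a feedback law in the linearly solvable setting sketched above, recall that the optimal control has the closed form u(x) = -R^{-1} G(x)^T ∇V(x) = λ R^{-1} G(x)^T ∇Ψ(x)/Ψ(x), so any polynomial approximation of Ψ immediately defines a controller. The following is a minimal sketch under the standard assumptions; the input matrix, weights, scaling, and polynomial coefficients are illustrative placeholders, not values from the paper.

        # Sketch: turning a polynomial approximation of the desirability Psi(x)
        # into a feedback law u(x) = lam * R^{-1} * G^T * grad(Psi)(x) / Psi(x).
        # The matrices, scaling, and Psi below are illustrative placeholders.
        import sympy as sp

        x1, x2 = sp.symbols('x1 x2')

        lam = 1.0                                    # noise/cost scaling lambda
        R = sp.eye(1)                                # control cost weight
        G = sp.Matrix([[0], [1]])                    # control input matrix (constant here)

        # Coarse polynomial approximation of Psi(x) > 0, e.g. as returned by a
        # sum-of-squares relaxation (coefficients are made up for illustration).
        Psi = 1 - 0.4*x1**2 - 0.3*x1*x2 - 0.5*x2**2 + 0.1*x1**4 + 0.2*x2**4

        grad_Psi = sp.Matrix([Psi.diff(x1), Psi.diff(x2)])
        u = lam * R.inv() * G.T * grad_Psi / Psi     # feedback law u(x), 1x1 Matrix

        u_fn = sp.lambdify((x1, x2), u[0], 'numpy')  # numeric controller
        print(sp.simplify(u[0]))
        print(u_fn(0.2, -0.1))                       # evaluate at a sample state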

    ℋ∞ optimization with spatial constraints

    A generalized ℋ∞ synthesis problem in which non-Euclidean spatial norms on the disturbances and output error are used is posed and solved. The solution takes the form of a linear matrix inequality. Some problems that fall into this class are presented. In particular, solutions are given to two problems: a variant of ℋ∞ synthesis in which norm constraints can be imposed on each component of the disturbance, and synthesis for a certain class of robust performance problems.
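
    The linear-matrix-inequality character of such solutions can be illustrated with the standard continuous-time bounded real lemma, which underlies ℋ∞ analysis: ||C(sI - A)^{-1}B + D||∞ < γ if and only if a symmetric P ≻ 0 satisfies a single LMI. The sketch below checks such a bound numerically; it is the basic analysis LMI, not the paper's generalized spatial-norm synthesis LMI, and the state-space data are arbitrary placeholders.

        # Sketch: certify an H-infinity norm bound via the bounded real lemma LMI.
        # ||C(sI - A)^{-1}B + D||_inf < gamma  iff  there exists P = P^T > 0 with
        #   [[A^T P + P A,  P B,       C^T     ],
        #    [B^T P,       -gamma I,   D^T     ],
        #    [C,            D,        -gamma I ]]  < 0.
        # The state-space data (A, B, C, D) and gamma below are placeholders.
        import cvxpy as cp
        import numpy as np

        A = np.array([[0.0, 1.0], [-2.0, -1.0]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])
        gamma = 1.5

        n, m = B.shape
        p = C.shape[0]

        P = cp.Variable((n, n), symmetric=True)
        M = cp.bmat([
            [A.T @ P + P @ A, P @ B,              C.T],
            [B.T @ P,         -gamma * np.eye(m), D.T],
            [C,               D,                  -gamma * np.eye(p)],
        ])
        eps = 1e-6
        prob = cp.Problem(cp.Minimize(0),
                          [P >> eps * np.eye(n), M << -eps * np.eye(n + m + p)])
        prob.solve()
        print(prob.status)   # 'optimal' certifies the gamma bound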

    Optimal Controller Synthesis for Nonlinear Systems

    Optimal controller synthesis is a challenging problem, yet in many applications, such as robotics, nonlinearity is unavoidable. Apart from optimality, correctness of the system behavior with respect to system specifications such as stability and obstacle avoidance is vital for engineering applications. Many existing techniques consider either the optimality or the correctness of system behavior; rarely does a tool consider both. Furthermore, most existing optimal controller synthesis techniques are not scalable, because they either require ad-hoc design or suffer from the curse of dimensionality. This thesis aims to close these gaps by proposing optimal controller synthesis techniques for two classes of nonlinear systems: linearly solvable nonlinear systems and hybrid nonlinear systems. Linearly solvable systems have associated Hamilton-Jacobi-Bellman (HJB) equations that can be transformed from the original nonlinear partial differential equation (PDE) into a linear PDE through a logarithmic transformation. The first part of this thesis presents two methods to synthesize optimal controllers for linearly solvable nonlinear systems. The first technique uses a hierarchy of sum-of-squares programs to compute a sequence of suboptimal controllers with non-increasing suboptimality for first-exit and finite-horizon problems. This technique is the first systematic approach to provide stability and suboptimal performance guarantees for stochastic nonlinear systems in one framework. The second technique uses a low-rank tensor decomposition framework to solve the linear HJB equation for first-exit, finite-horizon, and infinite-horizon problems. This technique scales linearly with dimension, alleviating the curse of dimensionality and enabling the linear HJB equation to be solved for a twelve-dimensional quadcopter model on a personal laptop. A new algorithm is proposed for a key step in the controller synthesis procedure to resolve the ill-conditioning issue that arises in the original algorithm. A MATLAB toolbox implementing these algorithms is developed, and their performance is illustrated by several engineering examples.

    Apart from stability, many applications require more complex specifications such as obstacle avoidance, reachability, and surveillance. The second part of the thesis describes methods to synthesize optimal controllers for hybrid nonlinear systems with quantitative objectives (i.e., minimizing cost) and qualitative objectives (i.e., satisfying specifications). This thesis focuses on two types of qualitative objectives: regular objectives, which capture bounded-time behavior such as reachability, and ω-regular objectives, which capture long-term behavior such as surveillance. For both types of objectives, an abstraction-refinement procedure that preserves the cost is developed. A two-player game is solved on the product of the abstract system and the given objective to synthesize a suboptimal controller for the hybrid nonlinear system. By refining the abstract system, the algorithms are guaranteed to converge to the optimal cost and return the optimal controller if the original system is robust with respect to the initial states and the optimal control inputs. The proposed technique is the first abstraction-refinement-based technique to combine both quantitative and qualitative objectives in one framework. A Python implementation of the algorithms is developed, and a few engineering examples are presented to illustrate the performance of these algorithms.
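
    To make the "linear HJB" step concrete: once the logarithmic transformation is applied, the first-exit equation becomes linear in the desirability Ψ, so a discretization reduces it to a linear algebraic system. Below is a minimal one-dimensional finite-difference sketch of that idea with made-up drift, cost, and exit cost; it is not the sum-of-squares or low-rank tensor solver developed in the thesis, which is what makes the approach scale to high dimensions.

        # Sketch: solve the transformed (linear) first-exit HJB in one dimension,
        #   0 = -(q(x)/lam)*Psi + f(x)*Psi' + 0.5*b^2*Psi''   on (-1, 1),
        # with Psi = exp(-phi(x)/lam) on the boundary, by central differences.
        # All model data below are illustrative placeholders.
        import numpy as np

        N = 201
        xs = np.linspace(-1.0, 1.0, N)
        h = xs[1] - xs[0]

        lam = 1.0
        b = 1.0                          # noise scale (tied to the control weight)
        f = lambda x: -x + x**3          # drift (placeholder)
        q = lambda x: x**2               # state cost rate (placeholder)
        phi = lambda x: 0.0              # exit cost on the boundary (placeholder)

        A = np.zeros((N, N))
        rhs = np.zeros(N)
        for i, x in enumerate(xs):
            if i == 0 or i == N - 1:                       # boundary rows
                A[i, i] = 1.0
                rhs[i] = np.exp(-phi(x) / lam)
            else:                                          # interior rows
                A[i, i - 1] = 0.5 * b**2 / h**2 - f(x) / (2 * h)
                A[i, i]     = -b**2 / h**2 - q(x) / lam
                A[i, i + 1] = 0.5 * b**2 / h**2 + f(x) / (2 * h)

        Psi = np.linalg.solve(A, rhs)
        V = -lam * np.log(Psi)                             # approximate value function
        print(V[N // 2])                                   # value at x = 0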

    From data and structure to models and controllers

    Systems and control theory deals with analyzing dynamical systems and shaping their behavior by means of control. Dynamical systems are widespread, and control theory therefore has numerous applications, ranging from the control of aircraft and spacecraft to chemical process control. During the last decades, a series of remarkable new control techniques have been developed. The majority of these techniques rely on mathematical models of the to-be-controlled system. However, the growing complexity of modern engineering systems complicates mathematical modeling. In this thesis, we therefore propose new methods to analyze and control dynamical systems without relying on a given system model. Models are replaced by two other ingredients, namely measured data and system structure. In the first part of the thesis, we consider the problem of data-driven control, which involves developing controllers for a dynamical system purely on the basis of data. We consider both stabilizing controllers and controllers that minimize a given cost function. In the second part, we focus on networked systems, that is, collections of interconnected dynamical subsystems. For this type of system, our aim is to reconstruct the interactions between subsystems on the basis of data. Finally, we consider the problem of assessing controllability of a dynamical system using its structure, and provide conditions under which this is possible for a general class of structured systems.
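
    To give a flavor of what "controllers purely on the basis of data" can look like in this line of work, the sketch below uses one well-known data-driven stabilization condition in the style of De Persis and Tesi, assuming exact, noiseless data from an unknown discrete-time linear system. It is offered as illustrative background, not as the specific method developed in this thesis; the system used to generate the data is a placeholder and is never seen by the design step.

        # Sketch: data-driven state-feedback stabilization from one experiment.
        # With data matrices X0, X1, U0 satisfying X1 = A X0 + B U0, one known
        # sufficient condition is: find Q such that X0 Q is symmetric and
        #   [[X0 Q, X1 Q], [(X1 Q)^T, X0 Q]] > 0,
        # and set K = U0 Q (X0 Q)^{-1}; then A + B K is Schur stable.
        # The true (A, B) below are used only to generate data, never by the design.
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        A = np.array([[1.1, 0.5], [0.0, 0.9]])       # unknown to the designer
        B = np.array([[0.0], [1.0]])
        n, m, T = 2, 1, 10

        U0 = rng.standard_normal((m, T))
        X = np.zeros((n, T + 1))
        X[:, 0] = rng.standard_normal(n)
        for t in range(T):
            X[:, t + 1] = A @ X[:, t] + B @ U0[:, t]
        X0, X1 = X[:, :-1], X[:, 1:]                 # state data and its shift

        Q = cp.Variable((T, n))
        M = cp.bmat([[X0 @ Q, X1 @ Q], [(X1 @ Q).T, X0 @ Q]])
        constraints = [X0 @ Q == (X0 @ Q).T, M >> 1e-6 * np.eye(2 * n)]
        cp.Problem(cp.Minimize(0), constraints).solve()

        K = U0 @ Q.value @ np.linalg.inv(X0 @ Q.value)
        print(np.abs(np.linalg.eigvals(A + B @ K)))  # all moduli < 1 on success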

    Control of Nonholonomic Systems

    Many real-world electrical and mechanical systems have velocity-dependent constraints in their dynamic models; examples include car-like robots, unmanned aerial vehicles, autonomous underwater vehicles, and hopping robots. Most of these systems can be transformed into a chained form, which is considered a canonical form for nonholonomic systems, so results for chained systems are widely applicable. This thesis studies the problem of continuous feedback control of chained systems while pursuing inverse optimality and exponential convergence rates, as well as the feedback stabilization problem under input saturation constraints. These studies are based on global singularity-free state transformations, and controls are synthesized from the resulting linear systems. Next, the application to optimal motion planning and dynamic tracking control of nonholonomic autonomous underwater vehicles is considered. The obtained trajectories satisfy the boundary conditions and the vehicles' kinematic model, and are therefore smooth and feasible. A collision avoidance criterion is established to handle dynamic environments. The resulting controls are in closed form and suitable for real-time implementation. Further, dynamic tracking controls are developed through the Lyapunov second method and the backstepping technique, based on the NPS AUV II model. Finally, the application to cooperative surveillance and formation control of a group of nonholonomic robots is investigated. A design scheme is proposed that achieves a rigid formation along a circular or arbitrary trajectory. The controllers are decentralized and able to avoid internal and external collisions. Computer simulations are provided to verify the effectiveness of these designs.
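
    To illustrate what "chained form" refers to, the sketch below uses the textbook coordinate change for unicycle (car-like) kinematics, which is valid only for |θ| < π/2. It is illustrative background, not the global singularity-free transformation developed in this thesis, and the test inputs are arbitrary.

        # Sketch: the textbook unicycle -> chained-form coordinate change.
        #   unicycle:  x' = v cos(th), y' = v sin(th), th' = w
        #   chained:   z1' = u1, z2' = u2, z3' = z2 * u1
        # with z1 = x, z2 = tan(th), z3 = y, u1 = v cos(th), u2 = w / cos(th)^2,
        # valid for |th| < pi/2.  The inputs (v, w) are arbitrary test signals.
        import numpy as np

        def inputs(t):
            return np.cos(t), 0.5 * np.sin(2 * t)

        def unicycle_rhs(s, t):
            x, y, th = s
            v, w = inputs(t)
            return np.array([v * np.cos(th), v * np.sin(th), w])

        def chained_rhs(z, t):
            v, w = inputs(t)
            th = np.arctan(z[1])                  # recover heading from z2 = tan(th)
            u1, u2 = v * np.cos(th), w / np.cos(th)**2
            return np.array([u1, u2, z[1] * u1])

        # Forward-Euler comparison of the two representations.
        dt, T = 1e-3, 3.0
        s = np.array([0.0, 0.0, 0.2])             # unicycle state (x, y, th)
        z = np.array([s[0], np.tan(s[2]), s[1]])  # chained state (z1, z2, z3)
        for k in range(int(T / dt)):
            t = k * dt
            s = s + dt * unicycle_rhs(s, t)
            z = z + dt * chained_rhs(z, t)

        print(s[0], s[1], np.tan(s[2]))           # x, y, tan(th) from the unicycle
        print(z[0], z[2], z[1])                   # should closely match the above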

    Reinforcement Learning, Intelligent Control and their Applications in Connected and Autonomous Vehicles

    Reinforcement learning (RL) has attracted considerable attention over the past few years. Recently, we developed a data-driven algorithm to solve predictive cruise control (PCC) and game-theoretic output regulation problems. This work integrates our recent contributions to the application of RL in game theory, output regulation problems, robust control, small-gain theory, and PCC. The algorithm was developed for ℋ∞ adaptive optimal output regulation of uncertain linear systems and uncertain partially linear systems, rejecting disturbances while forcing the output of the system to asymptotically track a reference. In the PCC problem, we determined the reference velocity for each autonomous vehicle in the platoon using traffic information broadcast from the traffic lights in order to reduce the vehicles' trip time. We then employed the algorithm to design an approximately optimal controller for the vehicles, which regulates the headway, velocity, and acceleration of each vehicle to the desired values. Simulation results validate the effectiveness of the algorithms.
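
    For readers unfamiliar with the adaptive optimal control machinery referenced here: the data-driven adaptive dynamic programming algorithms in this line of work emulate, from measured trajectories, the classical model-based policy iteration of Kleinman for continuous-time linear quadratic control. A minimal model-based sketch of that backbone follows; the data-driven versions replace the Lyapunov solves with least-squares relations built from state and input data, and the system matrices used here are placeholders.

        # Sketch: Kleinman policy iteration for continuous-time LQR.
        #   P_k solves (A - B K_k)^T P + P (A - B K_k) + Q + K_k^T R K_k = 0
        #   K_{k+1} = R^{-1} B^T P_k
        # From any stabilizing K_0, the iterates converge to the ARE solution.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

        A = np.array([[0.0, 1.0], [1.0, -1.0]])   # placeholder model
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[1.0]])

        K = np.array([[3.0, 3.0]])                # an initial stabilizing gain
        for _ in range(15):
            Ak = A - B @ K
            # solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so pass
            # a = Ak.T and q = -(Q + K^T R K) to get the policy-evaluation P.
            P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
            K = np.linalg.solve(R, B.T @ P)       # policy improvement

        print(K)
        print(np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R)))  # should match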