    Advances in the Theory of Fixed-time Stability with Applications in Constrained Control and Optimization

    Driving the state of a dynamical system to a desired point or set is a problem of crucial practical importance. Various constraints arise in real-world applications due to structural and operational requirements. Spatial constraints, i.e., constraints requiring the system trajectories to evolve in some safe set while visiting some goal set(s), are typical in safety-critical applications. Furthermore, temporal constraints, i.e., constraints pertaining to the time of convergence, appear in time-critical applications, for instance, when a task must be completed within a fixed time due to an internal or external deadline. Moreover, imperfect knowledge of the operational environment and/or system dynamics, together with the presence of external disturbances, renders offline control policies impractical and makes it essential to develop methods for online control synthesis. Thus, from an implementation point of view, it is desirable to design fast optimization algorithms so that an optimal control input, e.g., a min-norm control input, can be computed online. Compared to exponential stability, the notion of fixed-time stability is stronger: the time of convergence is finite and uniformly bounded over all initial conditions.

    This dissertation studies the theory of fixed-time stability with applications to multi-agent control design under spatiotemporal and input constraints and to continuous-time optimization. First, multi-agent control design problems under spatiotemporal constraints are studied. A vector-field-based controller is presented for the distributed control of multi-agent systems with double-integrator dynamics. A finite-time controller that utilizes state estimates from a finite-time state observer is designed to guarantee that each agent reaches its goal location within a finite time while maintaining safety with respect to the other agents as well as dynamic obstacles. Next, new conditions for fixed-time stability are developed so that fixed-time stability can be used in the presence of input constraints. These new conditions capture the relationship between the time of convergence, the domain of attraction, and the input constraints, and they establish the robustness of fixed-time stable systems with respect to a class of vanishing and non-vanishing additive disturbances. Utilizing these results, a control design method based on convex optimization is presented for a general class of systems with nonlinear, control-affine dynamics. Control barrier function and control Lyapunov function conditions enter the optimization problem as linear constraints encoding set-invariance and goal-reachability requirements. Various practical issues, such as input constraints, additive disturbances, and state-estimation error, are considered. Next, new results on finite-time stability for a class of hybrid and switched systems are established using a multiple-Lyapunov-functions framework that allows the system to have unstable modes. Finally, novel continuous-time optimization methods are studied with guarantees of fixed-time convergence to an optimal point. Fixed-time stable gradient flows are developed for unconstrained convex optimization problems under conditions such as strict convexity and gradient dominance of the objective function (a relaxation of strong convexity). Furthermore, min-max problems are considered, and modifications of saddle-point dynamics are proposed with fixed-time stability guarantees under various conditions on the objective function.

    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168071/1/kgarg_1.pd
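    For context, a standard sufficient condition for fixed-time stability (Polyakov, 2012) takes the following Lyapunov form; the constants below are the textbook ones and need not match those derived in the dissertation. If a radially unbounded function V satisfies, along all trajectories,

    \dot{V}(x) \le -a\,V(x)^p - b\,V(x)^q, \qquad a, b > 0, \quad 0 < p < 1 < q,

    then the origin is fixed-time stable, with the settling time bounded uniformly in the initial condition:

    T(x_0) \le \frac{1}{a(1-p)} + \frac{1}{b(q-1)} \quad \text{for all } x_0.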
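    The min-norm synthesis described in the abstract is typically posed as a quadratic program in the input: for control-affine dynamics x' = f(x) + g(x)u, the control Lyapunov function (CLF) and control barrier function (CBF) conditions are linear in u and so enter as linear constraints. Below is a minimal Python sketch using cvxpy; the single-integrator example, gains, slack penalty, and input bound are illustrative assumptions, not the dissertation's specific construction.

    import numpy as np
    import cvxpy as cp

    def min_norm_qp(x, f, g, V, dV, h, dh, gamma=1.0, alpha=1.0, u_max=2.0):
        """Min-norm control for x' = f(x) + g(x)u subject to CLF/CBF
        constraints (illustrative sketch, not the dissertation's exact QP)."""
        u = cp.Variable(g(x).shape[1])
        delta = cp.Variable(nonneg=True)           # slack relaxing the CLF condition
        LfV, LgV = dV(x) @ f(x), dV(x) @ g(x)      # Lie derivatives of V
        Lfh, Lgh = dh(x) @ f(x), dh(x) @ g(x)      # Lie derivatives of h
        constraints = [
            LfV + LgV @ u <= -gamma * V(x) + delta,  # CLF: decrease V (goal reaching)
            Lfh + Lgh @ u >= -alpha * h(x),          # CBF: keep h >= 0 (safety)
            cp.norm(u, "inf") <= u_max,              # input constraint
        ]
        cp.Problem(cp.Minimize(cp.sum_squares(u) + 100.0 * delta), constraints).solve()
        return u.value

    # Single integrator driven to the origin while confined to the disk ||x|| <= 2.
    f = lambda x: np.zeros(2)
    g = lambda x: np.eye(2)
    V = lambda x: 0.5 * x @ x                      # CLF for the origin
    dV = lambda x: x
    h = lambda x: 4.0 - x @ x                      # CBF: safe set {||x|| <= 2}
    dh = lambda x: -2.0 * x
    print(min_norm_qp(np.array([1.5, -0.5]), f, g, V, dV, h, dh))

    The slack variable keeps the QP feasible when the convergence and safety conditions momentarily conflict; safety remains a hard constraint.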

    Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies

    Gradient-based methods have been widely used for system design and optimization in diverse application domains. Recently, there has been renewed interest in studying the theoretical properties of these methods in the context of control and reinforcement learning. This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach to feedback control synthesis popularized by the successes of reinforcement learning. We take an interdisciplinary perspective in our exposition, connecting control theory, reinforcement learning, and large-scale optimization. We review a number of recently developed theoretical results on the optimization landscape, global convergence, and sample complexity of gradient-based methods for various continuous control problems, such as the linear quadratic regulator (LQR), H∞ control, risk-sensitive control, linear quadratic Gaussian (LQG) control, and output feedback synthesis. In conjunction with these optimization results, we also discuss how direct policy optimization handles stability and robustness concerns in learning-based control, two main desiderata in control engineering. We conclude the survey by pointing out several challenges and opportunities at the intersection of learning and control.

    Comment: To appear in Annual Review of Control, Robotics, and Autonomous Systems.
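    To make the policy-optimization viewpoint concrete: for discrete-time LQR with static feedback u = -Kx, the cost C(K) is differentiable over the set of stabilizing gains and its gradient has a closed form, so gradient descent can be run directly on the gain matrix. The sketch below uses the gradient formula of Fazel et al. (2018); the toy instance, step size, and iteration counts are illustrative assumptions.

    import numpy as np

    def lqr_cost_grad(K, A, B, Q, R, Sigma0):
        """Cost and exact policy gradient for discrete-time LQR with u = -Kx:
        C(K) = tr(P_K Sigma0), grad C(K) = 2((R + B'P_K B)K - B'P_K A) Sigma_K
        (Fazel et al., 2018)."""
        Acl = A - B @ K
        if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
            return np.inf, None                    # gradient defined only for stabilizing K
        P, Sigma = Q + K.T @ R @ K, Sigma0.copy()
        for _ in range(500):                       # fixed-point Lyapunov iterations
            P = Q + K.T @ R @ K + Acl.T @ P @ Acl
            Sigma = Sigma0 + Acl @ Sigma @ Acl.T
        cost = np.trace(P @ Sigma0)
        grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
        return cost, grad

    rng = np.random.default_rng(0)
    n, m = 3, 2
    A = 0.3 * rng.standard_normal((n, n))          # scaled so that K = 0 is stabilizing
    B = rng.standard_normal((n, m))
    Q, R, Sigma0 = np.eye(n), np.eye(m), np.eye(n)
    K = np.zeros((m, n))
    for _ in range(200):
        cost, grad = lqr_cost_grad(K, A, B, Q, R, Sigma0)
        if grad is None:
            break                                  # step left the stabilizing set
        K -= 2e-3 * grad                           # plain policy gradient descent
    print("LQR cost after policy gradient:", cost)

    Although C(K) is non-convex in K, the landscape results surveyed in the article show that such gradient iterations converge to the global optimum from any stabilizing initialization.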