
    Constrained LQR Using Online Decomposition Techniques

    This paper presents an algorithm to solve the infinite-horizon constrained linear quadratic regulator (CLQR) problem using operator splitting methods. First, the CLQR problem is reformulated as a (finite-time) model predictive control (MPC) problem without terminal constraints. Second, the MPC problem is decomposed into smaller subproblems of fixed dimension, independent of the horizon length. Third, using the fast alternating minimization algorithm (FAMA) to solve the subproblems, the horizon length is estimated online by adding or removing subproblems based on a periodic check of whether the state of the last subproblem belongs to a given control invariant set. We show that the estimated horizon length is bounded and that the control sequence computed by the proposed algorithm is an optimal solution of the CLQR problem. Compared to state-of-the-art algorithms for the CLQR problem, our design requires at each iteration only the solution of unconstrained least-squares problems and simple gradient calculations. Furthermore, our technique allows the horizon length to decrease online (a useful feature if the initial guess of the horizon is too conservative). Numerical results on a planar system show the potential of our algorithm. Comment: This technical report is an extended version of the paper titled "Constrained LQR Using Online Decomposition Techniques" submitted to the 2016 Conference on Decision and Control.
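
As a rough illustration of the horizon-adaptation step described above, the sketch below grows or shrinks the prediction horizon based on a membership test for a control invariant set. The callables `solve_mpc` and `in_invariant_set` are hypothetical placeholders, not the paper's FAMA-based subproblem solver or its actual invariant set.

```python
# Minimal sketch of the online horizon-adaptation idea (assumptions:
# solve_mpc and in_invariant_set are placeholders supplied by the user).

def adapt_horizon(solve_mpc, in_invariant_set, x0, N0=10, N_max=200, step=5):
    """Grow the horizon N until the last predicted state lies in the control
    invariant set, then shrink it if trailing stages already qualify."""
    N = N0
    while N <= N_max:
        # solve_mpc(x0, N) -> (states[0..N], inputs[0..N-1]) for the
        # finite-horizon MPC problem without terminal constraints
        states, inputs = solve_mpc(x0, N)
        if in_invariant_set(states[-1]):          # periodic terminal-state check
            while N > 1 and in_invariant_set(states[N - 1]):
                N -= 1                            # remove redundant subproblems
            return N, states[:N + 1], inputs[:N]
        N += step                                 # add subproblems and re-solve
    raise RuntimeError("maximum horizon reached without entering the set")
```

In the paper's scheme, each solver call itself decomposes into fixed-size subproblems handled by FAMA; the sketch only captures the add/remove logic around it.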

    Real-Time Motion Planning of Legged Robots: A Model Predictive Control Approach

    We introduce a real-time, constrained, nonlinear Model Predictive Control approach for the motion planning of legged robots. The proposed approach uses a constrained optimal control algorithm known as SLQ. We improve the efficiency of this algorithm by introducing a multi-processing scheme for estimating the value function in its backward pass, which has traditionally been computed as a single sequential process. The parallel SLQ algorithm can optimize longer time horizons without a proportional increase in computation time. Thus, our MPC algorithm can generate optimized trajectories for the next few phases of the motion within only a few milliseconds, outperforming the state of the art by at least one order of magnitude. The performance of the approach is validated on a quadruped robot for generating dynamic gaits such as trotting. Comment: 8 pages.
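
To make the parallelization idea concrete, here is an illustrative sketch (not the authors' SLQ implementation): the backward pass is split into horizon segments, each handled by a separate worker process with its own guessed terminal cost-to-go. A plain discrete-time Riccati recursion stands in for the SLQ value-function update, and the system matrices are made up for the example.

```python
# Illustrative only: parallel evaluation of backward-pass segments, with a
# toy Riccati sweep as the per-segment workload (assumed matrices below).
import numpy as np
from multiprocessing import Pool

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

def riccati_segment(args):
    """Run the Riccati backward recursion over one horizon segment,
    starting from a guessed terminal cost-to-go P."""
    steps, P = args
    for _ in range(steps):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P

if __name__ == "__main__":
    segments = [(50, np.eye(2)) for _ in range(4)]   # 4 segments of 50 steps
    with Pool(4) as pool:
        costs_to_go = pool.map(riccati_segment, segments)
```

A consistent parallel backward pass additionally has to reconcile the segment boundary conditions across iterations, which this sketch deliberately leaves out.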

    Imprecise dynamic walking with time-projection control

    We present a new walking foot-placement controller based on 3LP, a 3D model of bipedal walking composed of three pendulums that simulate falling, swing, and torso dynamics. Taking advantage of the linear equations and closed-form solutions of the 3LP model, the proposed controller projects intermediate states of the biped back to the beginning of the phase, for which a discrete LQR controller is designed. After the projection, a control policy is generated by this LQR controller and applied at the intermediate time. This control paradigm reacts to disturbances immediately and includes rules to account for swing dynamics and leg retraction. We apply it to a simulated Atlas robot in position control, always commanded to perform in-place walking. The stance hip joint keeps the torso upright to let the robot fall naturally, while the swing hip joint tracks the desired footstep location. Combined with simple Center of Pressure (CoP) damping rules in the low-level controller, our foot-placement controller enables the robot to recover from strong pushes and to produce periodic walking gaits when subject to persistent sources of disturbance, external or internal. These gaits are imprecise, i.e., they emerge from asymmetry sources rather than from precisely imposing a desired velocity on the robot. In extreme conditions, the restrictive linearity assumptions of the 3LP model are often violated, yet the system remains robust in our simulations. An extensive analysis of closed-loop eigenvalues, viable regions, and sensitivity to push timing further demonstrates the strengths of our simple controller.
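
To make the time-projection idea concrete, the sketch below uses a generic linear phase model as a stand-in for the 3LP dynamics (which the paper handles in closed form): the state measured at an intermediate time is mapped back to the start of the phase and fed to a discrete LQR gain designed for that start. All matrices, the phase duration, and the weights are illustrative assumptions.

```python
# Small sketch of time-projection control on an assumed linear phase model.
import numpy as np
from scipy.linalg import expm, solve_discrete_are

A = np.array([[0.0, 1.0], [4.0, 0.0]])     # toy pendulum-like phase dynamics
B = np.array([[0.0], [1.0]])
T = 0.4                                     # phase (step) duration
Q, R = np.eye(2), np.array([[0.1]])

# Discrete LQR designed once for the beginning of the phase
Ad = expm(A * T)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

def time_projection_control(x_t, t):
    """Project the state measured at intermediate time t back to the phase
    start, then evaluate the start-of-phase LQR policy."""
    x0 = expm(-A * t) @ x_t
    return -K @ x0

print(time_projection_control(np.array([0.05, 0.0]), 0.1))
```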

    A Parallel Dual Fast Gradient Method for MPC Applications

    We propose a parallel adaptive constraint-tightening approach to solve a linear model predictive control problem for discrete-time systems, based on inexact numerical optimization algorithms and operator splitting methods. The underlying algorithm first splits the original problem into as many independent subproblems as the length of the prediction horizon. Our algorithm then solves these subproblems in parallel, exploiting auxiliary tightened subproblems to certify the control law in terms of suboptimality and recursive feasibility, along with closed-loop stability of the controlled system. Compared to prior approaches based on constraint tightening, our algorithm computes the tightening parameter for each subproblem so as to handle the propagation of errors introduced by the parallelization of the original problem. Our simulations show the computational benefits of the parallelization, with positive impacts on performance and numerical conditioning when compared with a recent non-parallel adaptive tightening scheme. Comment: This technical report is an extended version of the paper "A Parallel Dual Fast Gradient Method for MPC Applications" by the same authors, submitted to the 54th IEEE Conference on Decision and Control.
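
The following toy example illustrates the kind of structure a dual fast-gradient method exploits: two quadratic subproblems are linked only by a coupling constraint, so their primal updates are independent and could run in parallel. It uses made-up data and omits the paper's constraint tightening entirely.

```python
# Toy dual fast-gradient iteration (Nesterov-accelerated ascent on the dual)
# for two decoupled subproblems coupled by x1 + x2 = d. Illustrative only.
import numpy as np

c1, c2, d = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([2.0, 2.0])
L = 2.0                                    # Lipschitz constant of the dual gradient
lam = lam_prev = np.zeros(2)               # dual multipliers

for k in range(200):
    beta = k / (k + 3)                     # Nesterov momentum coefficient
    y = lam + beta * (lam - lam_prev)
    # independent (parallelizable) primal updates:
    # x_i = argmin 0.5*||x_i - c_i||^2 + y^T x_i
    x1, x2 = c1 - y, c2 - y
    lam_prev, lam = lam, y + (x1 + x2 - d) / L   # dual gradient step
print(x1, x2, x1 + x2)                     # x1 + x2 approaches d
```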

    Smooth Model Predictive Control with Applications to Statistical Learning

    Statistical learning theory and high-dimensional statistics have had a tremendous impact on machine learning theory and on a variety of domains, including systems and control theory. Over the past few years we have witnessed a variety of applications of such theoretical tools to help answer questions such as: how many state-action pairs are needed to learn a static control policy to a given accuracy? Recent results have shown that continuously differentiable and stabilizing control policies can be well approximated using neural networks with hard guarantees on performance, yet often even the simplest constrained control problems are not smooth. To address this gap, in this paper we study smooth approximations of linear Model Predictive Control (MPC) policies in which hard constraints are replaced by barrier functions, a.k.a. barrier MPC. In particular, we show that barrier MPC inherits the exponential stability properties of the original non-smooth MPC policy. Through a careful analysis of the proposed barrier MPC, we show that its smoothness constant can be carefully controlled, thereby paving the way for new sample complexity results for approximating MPC policies from sampled state-action pairs. Comment: 15 pages, 1 figure.
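
A minimal sketch of a barrier-MPC policy in the sense described above: the hard input bounds are replaced by a logarithmic barrier, giving a smooth finite-horizon problem that is solved here with a generic NLP solver. The system, weights, horizon, and barrier weight are assumptions for illustration, not the paper's setup.

```python
# Sketch of barrier MPC: hard bounds |u| <= u_max replaced by a log-barrier.
import numpy as np
from scipy.optimize import minimize

A, B = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
N, u_max, mu = 10, 1.0, 1e-2              # horizon, input bound, barrier weight

def barrier_mpc(x0):
    """Return the first input of the smoothed finite-horizon problem."""
    def cost(u):
        x, J = x0.copy(), 0.0
        for uk in u:
            J += x @ Q @ x + R[0, 0] * uk**2 \
                 - mu * (np.log(u_max - uk) + np.log(u_max + uk))
            x = A @ x + B[:, 0] * uk
        return J + x @ Q @ x
    # box bounds only keep iterates inside the barrier's domain
    res = minimize(cost, np.zeros(N), method="L-BFGS-B",
                   bounds=[(-u_max + 1e-6, u_max - 1e-6)] * N)
    return res.x[0]                        # receding-horizon: apply first input

print(barrier_mpc(np.array([1.0, 0.0])))
```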