
    Optimal Convergence Rates for Generalized Alternating Projections

    Generalized alternating projections is an algorithm that alternates relaxed projections onto a finite number of sets to find a point in their intersection. We consider the special case of two linear subspaces, for which the algorithm reduces to a matrix iteration. For convergent matrix iterations, the asymptotic rate is linear and determined by the magnitude of the subdominant eigenvalue. In this paper, we show how to select the three algorithm parameters to optimize this magnitude, and hence the asymptotic convergence rate. The obtained rate depends on the Friedrichs angle between the subspaces and is considerably better than known rates for other methods such as alternating projections and Douglas-Rachford splitting. We also present an adaptive scheme that, online, estimates the Friedrichs angle and updates the algorithm parameters based on this estimate. A numerical example is provided that supports our theoretical claims and shows very good performance for the adaptive method. Comment: 20 pages, extended version of article submitted to CD
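    As a hedged illustration of the setting (not the paper's optimized method), the sketch below runs plain, unrelaxed alternating projections onto two one-dimensional subspaces of R^2 and checks that the error contracts by cos^2 of the Friedrichs angle per sweep; the subspaces, starting point, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def proj(u):
    """Orthogonal projector onto span{u}."""
    u = u / np.linalg.norm(u)
    return np.outer(u, u)

# Two 1-D subspaces of R^2 meeting only at the origin;
# the Friedrichs angle between them is 45 degrees.
PA = proj(np.array([1.0, 0.0]))
PB = proj(np.array([1.0, 1.0]))

x = np.array([0.3, 1.7])
errs = []
for _ in range(20):
    x = PA @ (PB @ x)                # one alternating-projection sweep
    errs.append(np.linalg.norm(x))   # distance to the intersection {0}

# asymptotic contraction per sweep equals cos^2(45 deg) = 0.5
rate = errs[10] / errs[9]
```

    The optimized generalized method in the paper improves on this cos^2-type rate by tuning its three relaxation parameters.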

    Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods

    Non-smooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple, well suited to solving large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought. In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first is heuristic methods that can be applied to a range of fixed-point algorithms. They are based on understanding typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees on improved convergence rates. The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.

    Using ADMM for Hybrid System MPC

    Model Predictive Control (MPC) has been studied extensively because of its ability to handle constraints and its favorable properties in terms of stability and performance [Mayne et al., 2000]. We have in this thesis focused on MPC of hybrid systems, i.e. systems with both continuous and discrete dynamics. More specifically, we look at problems that can be cast as Mixed Integer Quadratic Programming (MIQP) problems, which we solve using a branch-and-bound technique. The problem is in this way reduced to solving a large number of constrained quadratic problems. However, use in real-time systems puts a requirement on the speed and efficiency of the optimization methods used. Because of its low computational cost, there has recently been rising interest in the Alternating Direction Method of Multipliers (ADMM) for solving constrained optimization problems. We are in this thesis looking at how the different properties of ADMM can be used and improved for these problems, as well as how the branch-and-bound solver can be tailored to complement ADMM. We make two main contributions to ADMM that mitigate some of the downsides of the often ill-conditioned problems that arise from hybrid systems: firstly, a technique for greatly improving the conditioning of the problems, and secondly, a method to perform fast line search within the solver. We show that these methods are very efficient and can be used to solve problems that are otherwise hard or impossible to precondition properly.
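    As a minimal, hedged sketch of the kind of subproblem involved, textbook ADMM for a small box-constrained QP is shown below; the problem data, penalty parameter rho, and iteration count are made up for illustration, and none of the thesis's preconditioning or line-search improvements are modeled.

```python
import numpy as np

# minimize 0.5 x'Qx + q'x  subject to  lo <= x <= hi,
# split as x = z with an indicator function on the box.
Q = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
lo, hi = np.zeros(2), np.ones(2)

rho = 1.0
M = np.linalg.inv(Q + rho * np.eye(2))   # factor once, reuse every iteration
x = z = u = np.zeros(2)
for _ in range(200):
    x = M @ (rho * (z - u) - q)          # unconstrained quadratic step
    z = np.clip(x + u, lo, hi)           # projection onto the box
    u = u + x - z                        # scaled dual update
```

    In a branch-and-bound setting, warm-starting `z` and `u` from the parent node makes these iterations cheap, which is part of what makes ADMM attractive here.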

    QPDAS: Dual Active Set Solver for Mixed Constraint Quadratic Programming

    We present a method for solving the general mixed constrained convex quadratic programming problem using an active set method on the dual problem. The approach is similar to existing active set methods, but we present a new way of solving the linear systems arising in the algorithm. There are two main contributions: a new way of factorizing the linear systems, and a demonstration of how iterative refinement can be used to achieve good accuracy and to solve both types of sub-problems that arise from semi-definite problems.
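    The iterative refinement ingredient can be sketched generically: solve once with a cheap, low-accuracy factorization, then repeatedly correct the solution using residuals computed at higher precision. The small ill-conditioned system and the float32 "factorization" below are illustrative stand-ins, not the paper's linear systems.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])              # exact solution is [1, 1]

A_f = A.astype(np.float32)               # low-precision surrogate factorization
x = np.linalg.solve(A_f, b.astype(np.float32)).astype(np.float64)

for _ in range(3):
    r = b - A @ x                        # residual in full precision
    d = np.linalg.solve(A_f, r.astype(np.float32)).astype(np.float64)
    x = x + d                            # correct the current solution
```

    Each pass contracts the error by roughly the condition number times the low-precision unit roundoff, so a handful of cheap solves recovers near-full-precision accuracy.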

    Online Horizon Selection in Receding Horizon Temporal Logic Planning

    Temporal logics have proven effective for correct-by-construction synthesis of controllers for a wide range of applications. Receding horizon frameworks mitigate the computational intractability of reactive synthesis for temporal logic, but have thus far been limited by pursuing a single sequence of short horizon problems to the current goal. We propose a receding horizon algorithm for reactive synthesis that automatically determines a path to the currently pursued goal at runtime, in response to a nondeterministic environment. This is achieved by allowing each short horizon to have multiple local goals, and determining which local goal to pursue based on the current global goal, the currently perceived environment, and a pre-computed invariant dependent on each global goal. We demonstrate the utility of this additional flexibility in grant-response tasks, using a search-and-rescue example. Moreover, we show that these goal-dependent invariants mitigate the conservativeness of the receding horizon approach.

    Generalized Alternating Projections on Manifolds and Convex Sets

    In this paper, we extend the previous convergence results for the generalized alternating projection method applied to subspaces in [arXiv:1703.10547] to hold also for smooth manifolds. We show that the algorithm locally behaves similarly in the subspace and manifold settings and that the same rates are obtained. We also present convergence rate results for when the algorithm is applied to non-empty, closed, and convex sets. The results are based on a finite identification property which implies that, after an initial identification phase, the algorithm solves a smooth manifold feasibility problem. Therefore, the rates in this paper hold asymptotically for problems in which this identification property is satisfied. We present a few examples where this is the case, as well as a counterexample where it is not.

    Envelope Functions : Unifications and Further Properties

    Forward–backward and Douglas–Rachford splitting are methods for structured nonsmooth optimization. With the aim to use smooth optimization techniques for nonsmooth problems, the forward–backward and Douglas–Rachford envelopes were recently proposed. Under specific problem assumptions, these envelope functions have favorable smoothness and convexity properties, and their stationary points coincide with the fixed points of the underlying algorithm operators. This allows for solving such nonsmooth optimization problems by minimizing the corresponding smooth convex envelope function. In this paper, we present a general envelope function that unifies and generalizes existing ones. We provide properties of the general envelope function that sharpen corresponding known results for the special cases. We also present a new interpretation of the underlying methods as being majorization–minimization algorithms applied to their respective envelope functions.
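    As a hedged one-dimensional illustration of the envelope idea (using the classical Moreau envelope rather than the paper's general construction; the function f(x) = |x - 1| and the step size are illustrative choices), gradient descent on the smooth envelope reduces exactly to the proximal point iteration on the nonsmooth function, whose fixed point is the minimizer:

```python
import numpy as np

gamma = 0.5

def prox(x):
    """prox of gamma*|x - 1|: soft-thresholding shifted to 1."""
    return 1.0 + np.sign(x - 1.0) * max(abs(x - 1.0) - gamma, 0.0)

def env_grad(x):
    """Gradient of the Moreau envelope: (x - prox(x)) / gamma."""
    return (x - prox(x)) / gamma

x = 5.0
for _ in range(50):
    x = x - gamma * env_grad(x)   # identical to x = prox(x)

# the minimizer of |x - 1| is x = 1
```

    The envelopes in the paper play the analogous role for forward–backward and Douglas–Rachford: stationary points of the smooth envelope correspond to fixed points of the operator, so smooth techniques apply.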
