
    On the convergence of the block nonlinear Gauss-Seidel method under convex constraints.

    We give new convergence results for the block Gauss–Seidel method for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points. We show that the method is globally convergent for m=2 and that for m>2 convergence can be established both when the objective function f is componentwise strictly quasiconvex with respect to m−2 components and when f is pseudoconvex. Finally, we consider a proximal point modification of the method and state convergence results without any convexity assumption on the objective function.
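    The method under discussion cycles through the blocks, minimizing over one block while the others are held fixed, with each block constrained to its own closed convex set. Below is a minimal sketch for the m=2 case; the quadratic objective and the box feasible sets are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the block (nonlinear) Gauss-Seidel scheme for m = 2
# blocks. The objective f and the box feasible sets X, Y are hypothetical
# stand-ins for the paper's general setting.
import numpy as np
from scipy.optimize import minimize

def f(x, y):
    # Example smooth convex objective coupling the two blocks.
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

x, y = 0.0, 0.0                      # starting point in X x Y
X, Y = (-5.0, 5.0), (-5.0, 5.0)      # closed convex (box) feasible sets

for k in range(50):
    # Minimize over the first block with the second block fixed.
    x = minimize(lambda u: f(u[0], y), [x], bounds=[X]).x[0]
    # Minimize over the second block with the first block fixed.
    y = minimize(lambda v: f(x, v[0]), [y], bounds=[Y]).x[0]

print(x, y, f(x, y))
```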

    On convergence of the maximum block improvement method

    The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates, at each iteration, the block of variables yielding the maximal improvement, arguably one of the most natural and simple ways to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
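    Unlike cyclic Gauss–Seidel, the MBI rule tentatively optimizes every block with the others fixed and then commits only the block whose update improves the objective the most. The following is a minimal sketch of that rule; the three scalar blocks and the coupled objective are illustrative assumptions, not the tensor model studied in the paper.

```python
# A minimal sketch of the maximum-block-improvement rule: each block is
# tentatively optimized with the others held fixed, and only the block
# giving the largest improvement is actually updated.
import numpy as np
from scipy.optimize import minimize

def f(z):
    # Hypothetical objective with three scalar blocks; each block
    # subproblem is a strictly convex quadratic in that block.
    x, y, w = z
    return (x - 1) ** 2 + (y - 2) ** 2 + (w + 1) ** 2 + 0.1 * x * y * w

z = np.zeros(3)
for k in range(100):
    best_gain, best_i, best_val = 1e-9, None, None
    for i in range(3):
        # Candidate update for block i with all other blocks fixed.
        def fi(u, i=i):
            zz = z.copy()
            zz[i] = u[0]
            return f(zz)
        cand = minimize(fi, [z[i]]).x[0]
        gain = f(z) - fi([cand])
        if gain > best_gain:
            best_gain, best_i, best_val = gain, i, cand
    if best_i is None:          # no block improves: (approximately) stationary
        break
    z[best_i] = best_val        # maximum block improvement step

print(z, f(z))
```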

    On the relationship between bilevel decomposition algorithms and direct interior-point methods

    Engineers have been using bilevel decomposition algorithms to solve certain nonconvex large-scale optimization problems arising in engineering design projects. These algorithms transform the large-scale problem into a bilevel program with one upper-level problem (the master problem) and several lower-level problems (the subproblems). Unfortunately, there is analytical and numerical evidence that some of these commonly used bilevel decomposition algorithms may fail to converge even when the starting point is very close to the minimizer. In this paper, we establish a relationship between a particular bilevel decomposition algorithm, which performs only one iteration of an interior-point method when solving the subproblems, and a direct interior-point method, which solves the problem in its original (integrated) form. Using this relationship, we formally prove that the bilevel decomposition algorithm converges locally at a superlinear rate. The relevance of our analysis is that it bridges the gap between the incipient local convergence theory of bilevel decomposition algorithms and the mature theory of direct interior-point methods.
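    To make the master/subproblem structure concrete, here is a minimal sketch on a toy problem with one shared variable y and two local blocks; the objectives are illustrative assumptions, and the interior-point machinery the paper actually analyzes is deliberately not modeled.

```python
# A minimal sketch of bilevel decomposition: each subproblem optimizes its
# local variable with the shared variable y fixed, and the master problem
# optimizes y against the subproblems' optimal values. The quadratic
# objectives are hypothetical.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def sub(i, y):
    # Subproblem i: best local variable (and value) for a fixed y.
    g = lambda x: (x[0] - i) ** 2 + (x[0] - y) ** 2
    return minimize(g, [0.0]).fun

def master(y):
    # Master (upper-level) objective: sum of optimal subproblem values.
    return sum(sub(i, y) for i in (1, 2))

y_star = minimize_scalar(master).x
print(y_star, master(y_star))
```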

    Parallel Selective Algorithms for Big Data Optimization

    We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss–Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between", with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
    Comment: This work is an extended version of the conference paper presented at IEEE ICASSP'14. The first and the second author contributed equally to the paper. This revised version contains new numerical results on nonconvex quadratic problems.
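    As a concrete instance of the fully parallel (Jacobi) end of this spectrum, the sketch below applies a simultaneous proximal-gradient (soft-thresholding) update to all coordinates of a LASSO problem; the random problem data are illustrative assumptions, and the paper's selective block-choice rules and local approximation models are not reproduced here.

```python
# A minimal sketch of a fully parallel (Jacobi-style) update for LASSO:
# every coordinate takes a proximal-gradient step simultaneously, with the
# prox of the l1 term computed by soft-thresholding. Problem data are random.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const. of grad

x = np.zeros(100)
for k in range(500):
    grad = A.T @ (A @ x - b)               # gradient of 0.5*||Ax - b||^2
    z = x - step * grad
    # Soft-thresholding: the proximal operator of step*lam*||.||_1.
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.count_nonzero(x), 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
```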