16,050 research outputs found

    Convergence analysis of a variable metric forward-backward splitting algorithm with applications

    The forward-backward splitting algorithm is a popular operator-splitting method for solving monotone inclusions involving the sum of a maximally monotone operator and a cocoercive operator. In this paper, we present a new convergence analysis of a variable metric forward-backward splitting algorithm with extended relaxation parameters in real Hilbert spaces. We prove that the algorithm converges weakly under mild conditions on the relaxation parameters. As a special case, we recover the forward-backward splitting algorithm with variable step sizes. As an application, we obtain a variable metric forward-backward splitting algorithm for minimizing the sum of two convex functions, one of which is differentiable with a Lipschitz continuous gradient. Furthermore, we discuss applications of this algorithm to the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem. Numerical results on the LASSO problem in statistical learning demonstrate the effectiveness of the proposed iterative algorithm. Comment: 27 pages, 2 figures
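    For intuition, the fixed-metric, fixed-step instance of the forward-backward iteration applied to the LASSO is the classical ISTA scheme — a minimal sketch, not the paper's variable metric algorithm; the step rule 1/‖A‖² and the helper names are standard choices, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the "backward" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_lasso(A, b, lam, step=None, n_iter=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    # The gradient of the smooth term is Lipschitz with constant ||A||_2^2,
    # so a fixed step of 1/L guarantees convergence.
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x
```

    With a small regularization weight and noiseless data, the iterates recover a sparse signal up to the usual ℓ1 shrinkage bias.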

    A telescoping Bregmanian proximal gradient method without the global Lipschitz continuity assumption

    The problem of minimizing the sum of two convex functions has various theoretical and real-world applications. One of the popular methods for solving this problem is the proximal gradient method (proximal forward-backward algorithm). A very common assumption in the use of this method is that the gradient of the smooth term is globally Lipschitz continuous. However, this assumption is not always satisfied in practice, which limits the method's applicability. In this paper, we discuss, in a wide class of finite- and infinite-dimensional spaces, a new variant of the proximal gradient method which does not impose the above-mentioned global Lipschitz continuity assumption. A key feature of the method is the dependence of the iterative steps on a certain telescopic decomposition of the constraint set into subsets. Moreover, we use a Bregman divergence in the proximal forward-backward operation. Under certain practical conditions, a non-asymptotic rate of convergence (that is, in the function values) is established, as well as the weak convergence of the whole sequence to a minimizer. We also obtain a few auxiliary results of independent interest. Comment: Journal of Optimization Theory and Applications (JOTA): accepted for publication; very minor modifications; this version contains full proofs and an alphabetically ordered list of references (in contrast with the journal version)
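    A widely used instance of a Bregman proximal-gradient step is entropic mirror descent on the probability simplex, where the negative-entropy kernel gives the update a closed form. This is a generic textbook sketch of the Bregman forward-backward operation, not the paper's telescoping variant:

```python
import numpy as np

def kl_bregman_prox_grad(grad_f, x0, step, n_iter=200):
    # Bregman proximal-gradient step with the negative-entropy kernel
    # h(x) = sum_i x_i log x_i on the simplex: the forward-backward update
    # reduces to a multiplicative-weights step followed by normalization.
    x = x0.copy()
    for _ in range(n_iter):
        x = x * np.exp(-step * grad_f(x))  # mirror (exponentiated gradient) step
        x /= x.sum()                       # Bregman projection onto the simplex
    return x
```

    For a smooth objective whose minimizer lies inside the simplex, the iteration converges to that minimizer without ever needing a Euclidean projection.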

    Stochastic model-based minimization of weakly convex functions

    We consider a family of algorithms that successively sample and minimize simple stochastic models of the objective function. We show that under reasonable conditions on approximation quality and regularity of the models, any such algorithm drives a natural stationarity measure to zero at the rate O(k^{-1/4}). As a consequence, we obtain the first complexity guarantees for the stochastic proximal point, proximal subgradient, and regularized Gauss-Newton methods for minimizing compositions of convex functions with smooth maps. The guiding principle, underlying the complexity guarantees, is that all algorithms under consideration can be interpreted as approximate descent methods on an implicit smoothing of the problem, given by the Moreau envelope. Specializing to classical circumstances, we obtain the long-sought convergence rate of the stochastic projected gradient method, without batching, for minimizing a smooth function on a closed convex set. Comment: 33 pages, 4 figures
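    The stochastic projected gradient method mentioned at the end — one sampled gradient per iteration, no batching, a projection after every step — can be sketched for a box-constrained least-squares problem. The setup (fixed random seed, box constraint, step schedule) is our illustrative choice, not the paper's:

```python
import numpy as np

def stochastic_projected_gradient(A, b, lower, upper, steps):
    # Minimize (1/2m) * ||Ax - b||^2 over the box [lower, upper], using a
    # single sampled row per iteration and projecting after each step.
    m, n = A.shape
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    for step in steps:
        i = rng.integers(m)                      # sample one data point
        g = (A[i] @ x - b[i]) * A[i]             # stochastic gradient
        x = np.clip(x - step * g, lower, upper)  # projection onto the box
    return x
```

    In the noiseless (interpolation) regime the gradient noise vanishes at the solution, so even a constant step drives the iterate to the minimizer.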

    A Scalarization Proximal Point Method for Quasiconvex Multiobjective Minimization

    In this paper we propose a scalarization proximal point method to solve multiobjective unconstrained minimization problems with locally Lipschitz and quasiconvex vector functions. We prove, under natural assumptions, that the sequence generated by the method is well defined and converges globally to a Pareto-Clarke critical point. Our method may be seen as an extension, to the nonconvex case, of the inexact proximal method for multiobjective convex minimization problems studied by Bonnel et al. (SIAM Journal on Optimization 15, 4, 953-970, 2005). Comment: Several applications in diverse science and engineering areas motivate working with nonconvex multiobjective functions and proximal point methods. In particular, the class of quasiconvex minimization problems has been receiving attention from many researchers due to the broad range of applications in location theory, control theory, and especially in economic theory
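    The scalar building block underlying such methods is the classical proximal point iteration. A one-dimensional sketch, using the closed-form proximal operator of f(x) = |x| (soft-thresholding) — an illustration of the iteration itself, not of the multiobjective scalarization:

```python
import numpy as np

def proximal_point(prox, x0, lam, n_iter):
    # Proximal point iteration: x_{k+1} = argmin_y f(y) + (1/(2*lam)) * (y - x_k)^2,
    # expressed through the proximal operator of f.
    x = x0
    for _ in range(n_iter):
        x = prox(x, lam)
    return x

def prox_abs(x, lam):
    # Closed-form prox of f(x) = |x|: soft-thresholding.
    return np.sign(x) * max(abs(x) - lam, 0.0)
```

    Starting from x0 = 5 with lam = 1, the iterates step down by one unit per iteration until they reach the minimizer 0 and stay there.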

    Proximal linearized iteratively reweighted least squares for a class of nonconvex and nonsmooth problems

    For solving a wide class of nonconvex and nonsmooth problems, we propose a proximal linearized iteratively reweighted least squares (PL-IRLS) algorithm. We first approximate the original problem by smoothing methods, and then rewrite the approximated problem as an auxiliary problem by introducing new variables. PL-IRLS is then built on solving the auxiliary problem by utilizing the proximal linearization technique and the iteratively reweighted least squares (IRLS) method, and has remarkable computational advantages. We show that PL-IRLS can be extended to solve more general nonconvex and nonsmooth problems by adjusting the generalized parameters, and also to solve nonconvex and nonsmooth problems with two or more blocks of variables. Theoretically, with the help of the Kurdyka-Łojasiewicz property, we prove that each bounded sequence generated by PL-IRLS globally converges to a critical point of the approximated problem. To the best of our knowledge, this is the first global convergence result obtained by applying the IRLS idea to nonconvex and nonsmooth problems. Finally, we apply PL-IRLS to solve three representative nonconvex and nonsmooth problems in sparse signal recovery and low-rank matrix recovery, obtaining new globally convergent algorithms. Comment: 23 pages
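    The classical IRLS idea the method builds on — replace the nonsmooth objective with a weighted least-squares surrogate and re-solve — can be sketched for ℓp-norm minimization under linear constraints. This is the textbook IRLS scheme, not PL-IRLS itself; the smoothing constant `eps` is an assumed fixed parameter (practical variants often decrease it):

```python
import numpy as np

def irls_sparse(A, b, p=1.0, eps=1e-9, n_iter=60):
    # IRLS for min sum_i |x_i|^p subject to Ax = b: each iteration solves the
    # weighted least-norm problem min sum_i x_i^2 / w_i s.t. Ax = b, with
    # weights w_i = (x_i^2 + eps)^(1 - p/2) taken from the current iterate.
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-norm starting point
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (1 - p / 2)
        AW = A * w                            # A @ diag(w)
        # Closed-form solution: x = W A^T (A W A^T)^{-1} b
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x
```

    On the toy constraint x1 + 2*x2 = 3, the ℓ1-minimal solution is (0, 1.5): the weights shrink the small coordinate toward zero while the constraint stays satisfied exactly at every iteration.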

    Efficiency of minimizing compositions of convex functions and smooth maps

    We consider the global efficiency of algorithms for minimizing the sum of a convex function and a composition of a Lipschitz convex function with a smooth map. The basic algorithm we rely on is the prox-linear method, which in each iteration solves a regularized subproblem formed by linearizing the smooth map. When the subproblems are solved exactly, the method has efficiency O(ε^{-2}), akin to gradient descent for smooth minimization. We show that when the subproblems can only be solved by first-order methods, a simple combination of smoothing, the prox-linear method, and a fast-gradient scheme yields an algorithm with complexity Õ(ε^{-3}). The technique readily extends to minimizing an average of m composite functions, with complexity Õ(m/ε^2 + √m/ε^3) in expectation. We round off the paper with an inertial prox-linear method that automatically accelerates in the presence of convexity.
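    A one-dimensional sketch of the prox-linear step for h(c(x)) with h = |·| and no extra regularizer: linearize c at the current point and solve the regularized subproblem min_d |c(x) + c'(x) d| + d²/(2t), which here has an elementary closed form (our own illustration, not the paper's general subproblem solver):

```python
def prox_linear(c, dc, x, t=0.1, n_iter=20):
    # Prox-linear method for min |c(x)|: each step minimizes the model
    # |c(x) + c'(x) d| + (1/(2t)) d^2 over the step d, in closed form.
    for _ in range(n_iter):
        a, b = c(x), dc(x)
        if b == 0:
            break
        if abs(a) <= t * b * b:
            d = -a / b                               # zero the linearization (Newton-like step)
        else:
            d = -t * b * (1.0 if a > 0 else -1.0)    # prox term limits the step length
        x += d
    return x
```

    Minimizing |x² - 4| from x0 = 3, the early iterations are damped by the proximal term, after which the method takes Newton-like steps and converges rapidly to the root x = 2.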

    Numerical Optimization of Eigenvalues of Hermitian Matrix Functions

    This work concerns the global minimization of a prescribed eigenvalue, or a weighted sum of prescribed eigenvalues, of a Hermitian matrix-valued function that depends analytically on its parameters in a box. We describe how the analytical properties of eigenvalue functions can be exploited to derive piecewise quadratic functions that underestimate them. These piecewise quadratic underestimators lead us to a global minimization algorithm, originally due to Breiman and Cutler. We prove the global convergence of the algorithm and show that it can be used effectively for the minimization of extreme eigenvalues, e.g., the largest eigenvalue or the sum of a specified number of largest eigenvalues. This is particularly facilitated by the analytical formulas for the first derivatives of eigenvalues, as well as analytical lower bounds on the second derivatives that can be deduced for extreme eigenvalue functions. The applications that we have in mind also include the H∞-norm of a linear dynamical system, the numerical radius, the distance to uncontrollability, and various other nonconvex eigenvalue optimization problems for which, generically, the eigenvalue function involved is simple at all points. Comment: 25 pages, 3 figures

    Composite convex minimization involving self-concordant-like cost functions

    The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the notion of self-concordance. While a wide variety of important applications feature the self-concordant-like property, this concept has heretofore remained unexploited in convex optimization. To this end, we develop a variable metric framework for minimizing the sum of a "simple" convex function and a self-concordant-like function. We introduce a new analytic step-size selection procedure and prove that the basic gradient algorithm has improved convergence guarantees compared to "fast" algorithms that rely on the Lipschitz gradient property. Our numerical tests with real data sets show that the practice indeed follows the theory. Comment: 19 pages, 5 figures

    Sparse Output Feedback Synthesis via Proximal Alternating Linearization Method

    We consider the co-design problem of sparse output feedback and a row/column-sparse output matrix. A row-sparse (resp. column-sparse) output matrix implies a small number of outputs (resp. sensor measurements). We impose a row/column-cardinality constraint on the output matrix and a cardinality constraint on the output feedback gain. The resulting nonconvex, nonsmooth optimal control problem is solved using the proximal alternating linearization method (PALM). One advantage of PALM is that the proximal operators for sparsity constraints admit closed-form expressions and are easy to implement. Furthermore, the bilinear matrix function introduced by the multiplication of the feedback gain and the output matrix lends itself well to PALM. By establishing the Lipschitz conditions of the bilinear function, we show that PALM is globally convergent and that the objective value decreases monotonically throughout the algorithm. Numerical experiments verify the convergence results and demonstrate the effectiveness of our approach on an unstable system with 60,000 design variables.
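    The closed-form proximal operator for a cardinality constraint that the abstract alludes to is plain hard thresholding: project onto the set {x : ||x||_0 ≤ k} by keeping the k largest-magnitude entries. A standard sketch (ties are broken arbitrarily by `argpartition`):

```python
import numpy as np

def prox_cardinality(v, k):
    # Euclidean projection onto {x : ||x||_0 <= k}: zero out everything
    # except the k entries of largest absolute value (hard thresholding).
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out
```

    Inside PALM-type schemes this operator is applied blockwise after each linearized gradient step, which is what makes the sparsity constraints cheap to handle.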

    The proximal point method revisited

    In this short survey, I revisit the role of the proximal point method in large-scale optimization. I focus on three recent examples: a proximally guided subgradient method for weakly convex stochastic approximation, the prox-linear algorithm for minimizing compositions of convex functions and smooth maps, and Catalyst generic acceleration for regularized Empirical Risk Minimization. Comment: 11 pages, submitted to SIAG/OPT Views and News