
    An Asymmetric Proximal Decomposition Method for Convex Programming with Linearly Coupling Constraints

    The problems studied are separable variational inequalities with linearly coupling constraints. Some existing decomposition methods are very problem-specific, and their computational cost is quite high. Combining the ideas of the proximal point algorithm (PPA) and the augmented Lagrangian method (ALM), we propose an asymmetric proximal decomposition method (AsPDM) to solve a wide variety of separable problems. By adding an auxiliary quadratic term to the general Lagrangian function, our method can take advantage of the separable structure. We also present an inexact version of AsPDM to reduce the computational load of each iteration. In the computation process, the inexact version only uses function values. Moreover, the inexact criterion and the step size can be computed in parallel. The convergence of the proposed method is proved, and numerical experiments are presented to show the advantage of AsPDM.
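    The role of the auxiliary quadratic term can be illustrated on a toy problem (this is a generic sketch of the idea, not the paper's AsPDM): adding a proximal term to the augmented Lagrangian lets both blocks of min f1(x1) + f2(x2) s.t. x1 + x2 = b be updated in parallel from the previous iterate. All problem data below are made up.

```python
import numpy as np

# Toy separable problem: min 0.5||x1-a1||^2 + 0.5||x2-a2||^2  s.t.  x1 + x2 = b
# (made-up data; a Jacobi-type proximal augmented Lagrangian sketch)
a1 = np.array([1.0, -2.0, 0.5])
a2 = np.array([0.0, 3.0, 1.0])
b = np.array([2.0, 2.0, 2.0])

rho, tau = 1.0, 1.0           # penalty weight and auxiliary proximal weight
x1, x2 = np.zeros(3), np.zeros(3)
lam = np.zeros(3)             # multiplier for the coupling constraint

for _ in range(100):
    # both block updates use only the *previous* iterate -> parallelizable
    x1_new = (a1 - lam + rho * (b - x2) + tau * x1) / (1 + rho + tau)
    x2_new = (a2 - lam + rho * (b - x1) + tau * x2) / (1 + rho + tau)
    x1, x2 = x1_new, x2_new
    lam += rho * (x1 + x2 - b)

# KKT solution of the toy problem: lam* = (a1 + a2 - b)/2, xi* = ai - lam*
lam_star = (a1 + a2 - b) / 2
```

Without the tau-term each block update would still depend on the other block through the quadratic penalty; the proximal term keeps the parallel (Jacobi) sweep stable.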

    A parallelizable augmented Lagrangian method applied to large-scale non-convex-constrained optimization problems

    We contribute improvements to a Lagrangian dual solution approach applied to large-scale optimization problems whose objective functions are convex, continuously differentiable and possibly nonlinear, while the non-relaxed constraint set is compact but not necessarily convex. Such problems arise, for example, in the split-variable deterministic reformulation of stochastic mixed-integer optimization problems. We adapt the augmented Lagrangian method framework to address the presence of nonconvexity in the non-relaxed constraint set and to enable efficient parallelization. The development of our approach is most naturally compared with the development of proximal bundle methods, especially with their use of serious step conditions. However, deviations from these developments allow for an improvement in the efficiency with which parallelization can be utilized. Pivotal in our modification to the augmented Lagrangian method is an integration of the simplicial decomposition method and the nonlinear block Gauss-Seidel method. An adaptation of a serious step condition associated with proximal bundle methods allows the approximation tolerance to be adjusted automatically. Under mild conditions, optimal dual convergence is proven, and we report computational results on test instances from the stochastic optimization literature. We demonstrate improvement in parallel speedup over a baseline parallel approach.
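    One building block named above, the nonlinear block Gauss-Seidel method run inside an augmented Lagrangian loop, can be sketched on a hypothetical two-block quadratic. This is a generic illustration with invented data, not the paper's full algorithm (no simplicial decomposition or serious-step test).

```python
import numpy as np

# Toy problem: min 0.5||x1-a1||^2 + 0.5||x2-a2||^2  s.t.  x1 + x2 = b (made-up data)
a1, a2 = np.array([2.0, -1.0]), np.array([0.0, 4.0])
b = np.array([1.0, 1.0])
rho = 1.0
x1, x2 = np.zeros(2), np.zeros(2)
lam = np.zeros(2)

for _ in range(60):                  # outer ALM (multiplier) iterations
    for _ in range(20):              # inner block Gauss-Seidel sweeps on L_rho
        # minimize the augmented Lagrangian in x1 with x2 fixed, then in x2
        x1 = (a1 - lam + rho * (b - x2)) / (1 + rho)
        x2 = (a2 - lam + rho * (b - x1)) / (1 + rho)
    lam += rho * (x1 + x2 - b)       # multiplier (dual ascent) step

lam_star = (a1 + a2 - b) / 2         # KKT multiplier of the toy problem
```

Each Gauss-Seidel sweep immediately uses the freshly updated block, in contrast to the parallel Jacobi variant; for this strongly convex quadratic a few sweeps per outer iteration already solve the inner subproblem to high accuracy.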

    Proximal Decomposition on the Graph of a Maximal Monotone Operator

    We present an algorithm to solve: Find (x, y) ∈ A × A⊥ such that y ∈ Tx, where A is a subspace and T is a maximal monotone operator. The algorithm is based on the proximal decomposition on the graph of a monotone operator, and we show how to recover Spingarn's decomposition method. We give a proof of convergence that does not use the concept of partial inverse and show how to choose a scaling factor to accelerate the convergence in the strongly monotone case. Numerical results performed on quadratic problems confirm the robust behaviour of the algorithm.
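    The problem and the basic iteration can be sketched numerically. Below is a minimal partial-inverse-style proximal decomposition for an affine strongly monotone operator Tx = Qx + c and a coordinate subspace A; the operator, subspace, and iteration count are invented for illustration. Each step applies the resolvent (I + T)^{-1} to x + y and then projects the result back onto A × A⊥.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # SPD, so T x = Q x + c is strongly monotone
c = rng.standard_normal(n)
T = lambda x: Q @ x + c

# A = {x : x[2] = x[3] = 0}; P_A and P_Aperp are the orthogonal projections
P_A = np.diag([1.0, 1.0, 0.0, 0.0])
P_Aperp = np.eye(n) - P_A

x = np.zeros(n)                      # iterate kept in A
y = np.zeros(n)                      # iterate kept in A-perp
for _ in range(1000):
    xp = np.linalg.solve(np.eye(n) + Q, x + y - c)   # xp = (I + T)^{-1}(x + y)
    yp = x + y - xp                                   # so that yp = T(xp) exactly
    x, y = P_A @ xp, P_Aperp @ yp                     # project back onto A x A-perp

# at the limit: x in A, y in A-perp, and y = T x
```

The projection step is what makes the method a decomposition: the resolvent is applied to the full operator, while feasibility with respect to A × A⊥ is restored separately.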

    Decomposition and duality based approaches to stochastic integer programming

    Stochastic Integer Programming is a variant of Linear Programming which incorporates integer and stochastic properties (i.e. some variables are discrete, and some properties of the problem are randomly determined after the first-stage decision). A Stochastic Integer Program may be rewritten as an equivalent Integer Program with a characteristic structure, but is often too large to solve directly. In this thesis we develop new algorithms which exploit convex duality and scenario-wise decomposition of the equivalent Integer Program to find tighter dual bounds and to reach optimal solutions faster. A major attraction of this approach is that these algorithms are amenable to parallel computation.
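    The flavor of scenario-wise decomposition can be shown on a hypothetical two-scenario toy instance: copy the first-stage variable per scenario, relax the nonanticipativity constraint x1 = x2 with a multiplier, and solve the scenarios independently; by weak duality every multiplier yields a valid lower bound. All costs below are invented.

```python
# Scenario costs as a function of the binary first-stage decision x (made-up numbers)
f = {1: {0: 10.0, 1: 7.0},    # scenario 1
     2: {0: 4.0,  1: 6.0}}    # scenario 2

# Extensive form: a single x shared by both scenarios
primal = min(f[1][x] + f[2][x] for x in (0, 1))

# Split-variable form: copies x1, x2 with nonanticipativity x1 == x2
# relaxed by a multiplier lam; the scenario subproblems then decouple.
def dual_bound(lam):
    sub1 = min(f[1][x] + lam * x for x in (0, 1))
    sub2 = min(f[2][x] - lam * x for x in (0, 1))
    return sub1 + sub2            # a lower bound on primal for any lam

best = max(dual_bound(k / 2) for k in range(-8, 9))
```

Each subproblem can be handed to a separate worker, which is exactly why this dual approach parallelizes well; maximizing over lam tightens the bound, and for integer problems a duality gap may remain.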

    On the Spingarn's partial inverse method: inexact versions, convergence rates and applications to operator splitting and optimization

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro de Ciências Físicas e Matemáticas, Programa de Pós-Graduação em Matemática Pura e Aplicada, Florianópolis, 2018.
    In this work, we propose and study the iteration-complexity of an inexact version of Spingarn's partial inverse method. Its complexity analysis is performed by viewing it in the framework of the hybrid proximal extragradient (HPE) method, for which pointwise and ergodic iteration-complexity results have been established recently by Monteiro and Svaiter. As applications, we propose and analyze the iteration-complexity of an inexact operator splitting algorithm -- which generalizes the original Spingarn splitting method -- and of a parallel forward-backward algorithm for multi-term composite convex optimization. We also show that the scaled proximal decomposition on the graph of a maximal monotone operator (SPDG) algorithm introduced and analyzed by Mahey, Oualibouch and Tao (1995) can be analyzed within the original Spingarn partial inverse framework. We show that under the assumptions considered by Mahey, Oualibouch and Tao, the Spingarn partial inverse of the underlying maximal monotone operator is strongly monotone, which allows one to employ recent results on the convergence and iteration-complexity of proximal point type methods for strongly monotone operators. By doing this, we additionally obtain a potentially faster convergence for the SPDG algorithm and a more accurate upper bound on the number of iterations needed to achieve prescribed tolerances, especially for ill-conditioned problems.