
    A POSTERIORI BOUNDS OF APPROXIMATE SOLUTION TO VARIATIONAL AND QUASI-VARIATIONAL INEQUALITIES

    In this paper we present some a posteriori bounds for approximate solutions to variational and quasi-variational inequalities. These error measures can be used to construct iterative and continuous procedures for solving variational (quasi-variational) inequalities and to formulate corresponding stopping rules. We also present some linearization-based methods for solving quasi-variational inequalities.
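    A standard computable error measure of the kind this abstract refers to is the natural-map residual of a variational inequality, whose norm vanishes exactly at solutions and can drive a stopping rule. The sketch below, with hypothetical data (`F`, the box constraint, `gamma`), illustrates the idea for a toy VI; it is not the paper's specific bound.

    ```python
    import numpy as np

    def project_box(x, lo, hi):
        """Euclidean projection onto the box [lo, hi]."""
        return np.clip(x, lo, hi)

    def natural_residual(x, F, project, gamma=1.0):
        """Natural-map residual r(x) = ||x - P_C(x - gamma * F(x))||.

        For a variational inequality VI(F, C) this residual is zero
        exactly at solutions, so its norm serves as a computable
        a posteriori error measure and stopping-rule quantity.
        """
        return np.linalg.norm(x - project(x - gamma * F(x)))

    # Toy VI: F(x) = x - b over the box [0, 1]^2 (hypothetical data).
    # Its solution is the projection of b onto the box.
    b = np.array([0.3, 1.5])
    F = lambda x: x - b
    proj = lambda y: project_box(y, 0.0, 1.0)

    x_sol = proj(b)
    print(natural_residual(x_sol, F, proj))  # ~0 at the solution
    ```

    In an iterative method one would monitor this residual at each iterate and stop once it falls below a tolerance.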

    Bounded perturbation resilience of projected scaled gradient methods

    We investigate projected scaled gradient (PSG) methods for convex minimization problems. These methods perform a descent step along a diagonally scaled gradient direction followed by a feasibility regaining step via orthogonal projection onto the constraint set. This constitutes a generalized algorithmic structure that encompasses as special cases the gradient projection method, the projected Newton method, the projected Landweber-type methods, and the generalized Expectation-Maximization (EM)-type methods. We prove the convergence of the PSG methods in the presence of bounded perturbations. This resilience to bounded perturbations is relevant to the ability to apply the recently developed superiorization methodology to PSG methods, in particular to the EM algorithm. Comment: Computational Optimization and Applications, accepted for publication.
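    The iteration the abstract describes, a scaled gradient step followed by projection, with an optional bounded perturbation term, can be sketched generically. The function signature, step size, and the diagonal scaling `D` below are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def projected_scaled_gradient(grad, project, x0, D, step=0.1,
                                  perturb=None, iters=200):
        """Generic PSG iteration x_{k+1} = P_C(x_k - step * D @ grad(x_k) + e_k).

        D is a fixed diagonal scaling here; an optional perturbation
        sequence e_k (bounded, as in the superiorization setting) may
        be supplied. Names and signature are illustrative.
        """
        x = np.asarray(x0, dtype=float)
        for k in range(iters):
            e = perturb(k, x) if perturb is not None else 0.0
            x = project(x - step * D @ grad(x) + e)
        return x

    # Toy problem: minimize f(x) = 0.5 * ||x - c||^2 over the
    # nonnegative orthant (hypothetical data).
    c = np.array([2.0, -1.0])
    grad = lambda x: x - c
    project = lambda y: np.maximum(y, 0.0)
    D = np.diag([1.0, 0.5])  # assumed diagonal scaling

    x = projected_scaled_gradient(grad, project, np.zeros(2), D)
    print(x)  # ~ [2, 0], the projection of c onto the orthant
    ```

    With `D` set to the identity this reduces to the classical gradient projection method, one of the special cases the abstract lists.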

    On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems

    In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that under the assumptions of strong convexity and Lipschitz continuity of the gradient of the primal objective function, a global error bound type property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for both dual and primal suboptimality and for primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization, or optimal power flow, can be posed as linearly constrained separable convex problems for which dual gradient type methods from the literature have a sublinear convergence rate. In the present paper we prove for the first time that in fact linear convergence can be achieved for such algorithms when they are used for solving these applications. Numerical simulations are also provided to confirm our theory. Comment: 14 pages, 4 figures, submitted to Automatica Journal, February 2014. arXiv admin note: substantial text overlap with arXiv:1401.4398. We revised the paper, adding more simulations and checking for typos.
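    The basic dual gradient iteration behind such schemes can be sketched on a toy centralized problem: minimize the Lagrangian over the primal variable, then take a gradient step on the dual using the primal residual. The strongly convex objective below matches the assumptions under which the abstract claims linear convergence; the problem data and step size are hypothetical, and the paper's distributed weighted-step scheme is not reproduced here.

    ```python
    import numpy as np

    def dual_gradient(A, b, c, step=0.1, iters=500):
        """Dual gradient ascent for
            min_x 0.5 * ||x - c||^2   s.t.   A x = b.

        The Lagrangian minimizer is x(lam) = c - A.T @ lam, and the
        dual gradient is the primal residual A x(lam) - b. Strong
        convexity with Lipschitz gradient is the setting in which
        linear convergence is derived; this is a toy illustration.
        """
        lam = np.zeros(A.shape[0])
        for _ in range(iters):
            x = c - A.T @ lam                 # minimize Lagrangian in x
            lam = lam + step * (A @ x - b)    # ascend on the dual
        return x, lam

    # Toy data (hypothetical): one coupling constraint x1 + x2 = 1.
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([0.8, 0.6])

    x, lam = dual_gradient(A, b, c)
    print(x)  # ~ [0.6, 0.4]: projection of c onto {x1 + x2 = 1}
    ```

    In the separable, distributed setting each agent would minimize its own term of the Lagrangian locally, and only the multiplier updates require communication along the coupling constraints.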

    Global Error Bound Estimation for the Generalized Nonlinear Complementarity Problem over a Closed Convex Cone

    The global error bound estimation for the generalized nonlinear complementarity problem over a closed convex cone (GNCP) is considered. To obtain a global error bound for the GNCP, we first develop an equivalent reformulation of the problem. Based on this, a global error bound for the GNCP is established. The results obtained in this paper can be taken as an extension of previously known results.
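    For the simplest closed convex cone, the nonnegative orthant, the complementarity problem admits a well-known computable residual, the componentwise minimum of `x` and `F(x)`, which is the kind of quantity a global error bound relates to the distance from the solution set. The sketch below uses hypothetical linear data and only illustrates the residual; the paper's GNCP setting and reformulation are more general.

    ```python
    import numpy as np

    def ncp_residual(x, F):
        """Natural residual ||min(x, F(x))|| for the NCP over the
        nonnegative orthant: find x >= 0 with F(x) >= 0 and
        x_i * F(x)_i = 0 for all i.

        The residual is zero exactly at solutions, so a global error
        bound of the kind the abstract describes would bound the
        distance to the solution set by a function of this quantity.
        """
        return np.linalg.norm(np.minimum(x, F(x)))

    # Toy linear complementarity data (hypothetical): F(x) = M x + q.
    M = np.array([[2.0, 0.0], [0.0, 1.0]])
    q = np.array([-2.0, 1.0])
    F = lambda x: M @ x + q

    x_star = np.array([1.0, 0.0])   # solves this NCP instance
    print(ncp_residual(x_star, F))  # 0.0
    ```

    At `x_star` we have `x_star >= 0`, `F(x_star) = [0, 1] >= 0`, and componentwise complementarity, so the residual vanishes.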

    On the rate of convergence of a partially asynchronous gradient projection algorithm

    Cover title. Includes bibliographical references (leaves 20-21). Research supported by the U.S. Army Research Office (DAAL03-86-K-0171) and the National Science Foundation (NSF-DDM-8903385). By Paul Tseng.

    An Improvement of Global Error Bound for the Generalized Nonlinear Complementarity Problem over a Polyhedral Cone

    We consider the global error bound for the generalized nonlinear complementarity problem over a polyhedral cone (GNCP). Using a new technique, we establish a global error bound for the GNCP that is easier to compute and holds under weaker conditions, improving previously known results for the GNCP.