
    An Extragradient-Based Alternating Direction Method for Convex Minimization

    In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. The classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing and other fields have the structure that while one of the two functions has an easy proximal mapping, the other is smooth and convex but does not have an easy proximal mapping. Therefore, the classical alternating direction methods cannot be applied. To deal with this difficulty, we propose in this paper an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an ε-optimal solution within O(1/ε) iterations. We apply the proposed method to solve a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well when solving the test problems. We also test the performance of the proposed method through solving the lasso problem arising from statistics and compare the result with several existing efficient solvers for this problem; the results are very encouraging.
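    The abstract describes an alternating direction scheme in which the non-prox-friendly block is handled by an extragradient (predict-then-correct) gradient step on the augmented Lagrangian. The sketch below illustrates that idea for the simplified model min_{x,z} f(x) + g(z) s.t. Ax + z = c; the step sizes, update order, and the helper names grad_f and prox_g are illustrative assumptions, not the paper's exact algorithm or parameter rules.

```python
import numpy as np

def eg_admm(grad_f, prox_g, A, c, x0, z0, rho=1.0, tau=0.1, n_iters=500):
    """Minimal sketch of an extragradient-based alternating direction iteration
    for  min_{x,z} f(x) + g(z)  s.t.  A x + z = c,
    where f is smooth with Lipschitz gradient (no easy prox) and g has an easy
    proximal mapping prox_g(v, t) = argmin_u g(u) + (1/(2t))||u - v||^2.
    Illustrative sketch only; step sizes and ordering are assumptions."""
    x, z = x0.copy(), z0.copy()
    y = np.zeros_like(c)  # Lagrange multiplier for the linking constraint

    def grad_aug_x(x, z, y):
        # gradient in x of the augmented Lagrangian L_rho(x, z, y)
        return grad_f(x) + A.T @ (y + rho * (A @ x + z - c))

    for _ in range(n_iters):
        # z-step: exact minimization via the easy proximal mapping of g
        z = prox_g(c - A @ x - y / rho, 1.0 / rho)
        # x-step: extragradient step, since f has no easy proximal mapping
        x_pred = x - tau * grad_aug_x(x, z, y)       # prediction
        x = x - tau * grad_aug_x(x_pred, z, y)       # correction at predicted point
        # dual update on the linking-constraint residual
        y = y + rho * (A @ x + z - c)
    return x, z
```

    For instance, the lasso test problem mentioned in the abstract fits this template with f(x) = ½‖Dx − b‖², g = λ‖·‖₁ (so prox_g is soft-thresholding), A = −I and c = 0, i.e. the linking constraint simply enforces z = x.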

    Stochastic Approximation for Estimating the Price of Stability in Stochastic Nash Games

    The goal in this paper is to approximate the Price of Stability (PoS) in stochastic Nash games using stochastic approximation (SA) schemes. PoS is among the most popular metrics in game theory and provides an avenue for estimating the efficiency of Nash games. In particular, knowing the value of PoS can help with designing efficient networked systems, including transportation networks and power market mechanisms. Motivated by the lack of efficient methods for computing the PoS, we first consider stochastic optimization problems with a nonsmooth and merely convex objective function and a merely monotone stochastic variational inequality (SVI) constraint. This problem appears in the numerator of the PoS ratio. We develop a randomized block-coordinate stochastic extra-(sub)gradient method where we employ a novel iterative penalization scheme to account for the mapping of the SVI in each of the two gradient updates of the algorithm. We obtain an iteration complexity of the order ε^{-4}, which appears to be the best known result for this class of constrained stochastic optimization problems, where ε denotes an arbitrary bound on suitably defined infeasibility and suboptimality metrics. Second, we develop an SA-based scheme for approximating the PoS and derive lower and upper bounds on the approximation error. To validate the theoretical findings, we provide preliminary simulation results on a networked stochastic Nash-Cournot competition.
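    The core iteration described here combines a stochastic subgradient of the objective with a penalized stochastic evaluation of the SVI mapping in both stages of an extragradient step. The sketch below illustrates that structure for min f(x) s.t. x solves the monotone VI(X, F); the randomized block-coordinate updates are omitted, and the callables subgrad_f, map_F, project, sample_xi as well as the step-size and penalty schedules are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def penalized_stochastic_extragradient(subgrad_f, map_F, project, sample_xi,
                                       x0, n_iters=10_000, gamma0=1.0, rho0=1.0):
    """Sketch of a stochastic extra-(sub)gradient iteration with an increasing
    penalty on the VI mapping, for  min f(x)  s.t.  x solves VI(X, F).
    subgrad_f(x, xi): stochastic subgradient of f;  map_F(x, xi): stochastic
    evaluation of the monotone mapping F;  project: Euclidean projection onto
    the easy set X;  sample_xi(): draws one random sample.
    Schedules below are illustrative assumptions, not the paper's rules."""
    x = x0.copy()
    x_avg, w_sum = np.zeros_like(x0), 0.0
    for k in range(n_iters):
        gamma = gamma0 / np.sqrt(k + 1)   # diminishing step size
        rho = rho0 * (k + 1) ** 0.25      # slowly increasing penalty parameter
        # extrapolation step: penalized stochastic direction evaluated at x
        xi = sample_xi()
        x_half = project(x - gamma * (subgrad_f(x, xi) + rho * map_F(x, xi)))
        # main update: same penalized direction re-evaluated at the extrapolated point
        xi = sample_xi()
        x = project(x - gamma * (subgrad_f(x_half, xi) + rho * map_F(x_half, xi)))
        # step-size-weighted averaging; the averaged iterate is returned as the candidate solution
        x_avg = (w_sum * x_avg + gamma * x_half) / (w_sum + gamma)
        w_sum += gamma
    return x_avg
```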