
    Nonlinear Programming Techniques Applied to Stochastic Programs with Recourse

    Stochastic convex programs with recourse can equivalently be formulated as nonlinear convex programming problems. These possess some rather marked characteristics. Firstly, the proportion of linear to nonlinear variables is often large and leads to a natural partition of the constraints and objective. Secondly, the objective function corresponding to the nonlinear variables can vary over a wide range of possibilities; under appropriate assumptions about the underlying stochastic program it could be, for example, a smooth function, a separable polyhedral function, or a nonsmooth function whose values and gradients are very expensive to compute. Thirdly, the problems are often large-scale and linearly constrained, with special structure in the constraints. This paper is a comprehensive study of solution methods for stochastic programs with recourse viewed from the above standpoint. We describe a number of promising algorithmic approaches derived from methods of nonlinear programming. The discussion is fairly general, but the solution of two classes of stochastic programs with recourse is of particular interest. The first corresponds to stochastic linear programs with simple recourse and stochastic right-hand-side elements with a given discrete probability distribution. The second corresponds to stochastic linear programs with complete recourse and stochastic right-hand-side vectors defined by a limited number of scenarios, each with given probability. A repeated theme is the use of the MINOS code of Murtagh and Saunders as a basis for developing suitable implementations.
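    For the first problem class, the deterministic equivalent is itself a large, structured linear program. Below is a minimal sketch that builds this deterministic equivalent for a toy two-scenario instance; all data (c, T, the scenario right-hand sides h_k, the probabilities p_k, and the shortage/surplus penalties) are assumptions for illustration, and scipy's linprog stands in for a structure-exploiting solver such as MINOS.

```python
# Toy deterministic equivalent of a stochastic LP with simple recourse:
#   min  c^T x + sum_k p_k (q_plus^T u_k + q_minus^T v_k)
#   s.t. T x + u_k - v_k = h_k  for each scenario k,  x, u_k, v_k >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                 # first-stage costs (assumed)
T = np.array([[1.0, 1.0]])               # technology matrix, one stochastic row
h = [np.array([1.0]), np.array([3.0])]   # scenario right-hand sides
p = [0.4, 0.6]                           # scenario probabilities
q_plus = np.array([5.0])                 # per-unit shortage penalty
q_minus = np.array([2.0])                # per-unit surplus penalty

n, m, K = len(c), T.shape[0], len(h)
# Decision vector: [x, u_1, v_1, ..., u_K, v_K], with u_k, v_k >= 0 the
# shortage/surplus recourse variables of scenario k.
cost = np.concatenate([c] + [pk * np.concatenate([q_plus, q_minus]) for pk in p])
A_eq = np.zeros((K * m, n + 2 * K * m))
b_eq = np.concatenate(h)
for k in range(K):
    rows = slice(k * m, (k + 1) * m)
    A_eq[rows, :n] = T                                       # T x ...
    A_eq[rows, n + 2*k*m : n + (2*k+1)*m] = np.eye(m)        # ... + u_k ...
    A_eq[rows, n + (2*k+1)*m : n + (2*k+2)*m] = -np.eye(m)   # ... - v_k = h_k

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(cost))
print("first-stage x:", res.x[:n], " expected cost:", res.fun)
```

    For discrete distributions the scenario blocks grow linearly with K, which is exactly the structure the nonlinear and decomposition approaches discussed in the paper aim to exploit rather than solve monolithically.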

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
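    To make the block-iterative idea concrete, the following is a minimal sketch (not the paper's exact SCA scheme) of a distributed loop in which every agent updates only one coordinate block per iteration and then averages both its copy of the variable and its gradient tracker with its neighbors; the ring network, Metropolis weights, local least-squares costs, and step size are all toy assumptions.

```python
# Block-wise distributed gradient tracking on a ring of N agents, each
# holding a local least-squares cost f_i(x) = 0.5 * ||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)
N, d, B = 5, 8, 4                        # agents, dimension, coordinate blocks
blocks = np.array_split(np.arange(d), B)
A = [rng.standard_normal((10, d)) for _ in range(N)]
b = [rng.standard_normal(10) for _ in range(N)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])   # local gradient

# Doubly stochastic Metropolis weights for a ring graph (each node has
# two neighbors, so every nonzero weight is 1/3).
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

X = np.zeros((N, d))                     # local copies of the decision vector
G = np.array([grad(i, X[i]) for i in range(N)])
Y = G.copy()                             # trackers, y_i^0 = grad f_i(x_i^0)
step = 0.02
for k in range(2000):
    blk = blocks[k % B]                  # round-robin block selection
    X = W @ X                            # consensus averaging of the copies
    X[:, blk] -= step * Y[:, blk]        # descend only on the active block
    G_new = np.array([grad(i, X[i]) for i in range(N)])
    Y = W @ Y + G_new - G                # gradient-tracking update
    G = G_new

x_bar = X.mean(axis=0)
print("consensus error:", np.abs(X - x_bar).max())
print("stationarity   :", np.linalg.norm(sum(grad(i, x_bar) for i in range(N))))
```

    Communicating only the active block per iteration is what shrinks the per-round message size; the tracking update preserves the invariant that the trackers sum to the sum of current local gradients, so each agent's block step follows an estimate of the network-wide gradient.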

    Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks

    We study nonconvex distributed optimization in multiagent networks where the communication between nodes is modeled as a time-varying sequence of arbitrary digraphs. We introduce a novel broadcast-based distributed algorithmic framework for the (constrained) minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate the gradients of the agents' cost functions; and ii) a novel broadcast protocol to disseminate information and distribute the computation among the agents. Asymptotic convergence to stationary solutions is established. A key feature of the proposed algorithm is that it requires neither the double-stochasticity of the consensus matrices (only column stochasticity) nor knowledge of the graph sequence to implement. To the best of our knowledge, the proposed framework is the first broadcast-based distributed algorithm for convex and nonconvex constrained optimization over arbitrary, time-varying digraphs. Numerical results show that our algorithm outperforms current schemes on both convex and nonconvex problems.
    Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th annual Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, US.
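    Dispensing with doubly stochastic matrices typically rests on a push-sum-style correction: each node propagates an auxiliary weight alongside its value, and the ratio of the two recovers the network-wide average even under merely column-stochastic mixing. The sketch below illustrates just this averaging mechanism on a directed cycle; the mixing matrix and initial values are assumptions for illustration, not the paper's protocol.

```python
# Push-sum averaging over a column-stochastic digraph: the ratio x/w
# converges to the average of the initial values.
import numpy as np

rng = np.random.default_rng(1)
N = 6
vals = rng.standard_normal(N)          # initial values to be averaged

# Column-stochastic mixing matrix for a directed cycle with self-loops:
# node i keeps half its mass and sends half to node (i+1) mod N.
C = 0.5 * np.eye(N)
for i in range(N):
    C[(i + 1) % N, i] = 0.5

x = vals.copy()                        # running numerators
w = np.ones(N)                         # auxiliary push-sum weights
for _ in range(100):
    x = C @ x
    w = C @ w
print("push-sum ratios:", x / w)
print("true average:   ", vals.mean())
```

    Because each node only needs the columns of C it sends along, a sender can broadcast without knowing who receives, which is what makes column stochasticity (rather than double stochasticity) the natural requirement in a broadcast setting.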

    PRISMA: PRoximal Iterative SMoothing Algorithm

    Motivated by learning problems including max-norm regularized matrix completion and clustering, robust PCA, and sparse inverse covariance selection, we propose a novel optimization algorithm for minimizing a convex objective that decomposes into three parts: a smooth part, a simple nonsmooth Lipschitz part, and a simple nonsmooth non-Lipschitz part. We use a time-variant smoothing strategy that allows us to obtain a guarantee that depends neither on knowing the total number of iterations in advance nor on a bound on the domain.
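    A minimal sketch of the three-part decomposition with a time-variant smoothing parameter, using toy data and a plain forward-backward step rather than the paper's exact iteration: the smooth part is a ridge term, the Lipschitz nonsmooth part ||Ax - b||_1 is replaced by its Moreau (Huber) envelope with parameter beta_k decreasing to zero, and the simple nonsmooth part tau*||x||_1 is handled through its proximity operator (soft-thresholding).

```python
# Proximal gradient with time-variant smoothing on
#   F(x) = 0.5*lam*||x||^2 + ||A x - b||_1 + tau*||x||_1   (toy instance)
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, tau = 0.1, 0.05
L_A = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A

def soft_threshold(z, t):                # prox of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(10)
for k in range(1, 3001):
    beta = 1.0 / np.sqrt(k)              # time-variant smoothing parameter
    r = A @ x - b
    # gradient of ridge term plus the Huber-smoothed l1 residual term:
    # the Moreau envelope of |t| with parameter beta has slope clip(t/beta, -1, 1)
    g = lam * x + A.T @ np.clip(r / beta, -1.0, 1.0)
    L = lam + L_A / beta                 # Lipschitz constant of that gradient
    x = soft_threshold(x - g / L, tau / L)   # forward-backward step
obj = 0.5 * lam * x @ x + np.abs(A @ x - b).sum() + tau * np.abs(x).sum()
print("objective:", obj, " nonzeros:", int(np.count_nonzero(x)))
```

    Shrinking beta_k inside the loop is what removes the need to fix a horizon up front: no single smoothing level (and hence no total iteration count or domain bound) has to be chosen before the run starts.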