2,168 research outputs found

    A parallel computation approach for solving multistage stochastic network problems

    The original publication is available at www.springerlink.com. This paper presents a parallel computation approach for the efficient solution of very large multistage linear and nonlinear network problems with random parameters. These problems result from particular instances of models for the robust optimization of network problems with uncertainty in the values of the right-hand side and the objective function coefficients. The methodology considered here models the uncertainty using scenarios to characterize the random parameters. A scenario tree is generated and, through the use of full-recourse techniques, an implementable solution is obtained for each group of scenarios at each stage along the planning horizon. As a consequence of the size of the resulting problems, and the special structure of their constraints, these models are particularly well suited to the application of decomposition techniques and to the solution of the corresponding subproblems in a parallel computation environment. An augmented Lagrangian decomposition algorithm has been implemented on a distributed computation environment, and a static load balancing approach has been chosen for the parallelization scheme, given the subproblem structure of the model. Large problems (9,000 scenarios and 14 stages, with a deterministic equivalent nonlinear model having 166,000 constraints and 230,000 variables) are solved in 45 minutes on a cluster of four small (11 Mflops) workstations. An extensive set of computational experiments is reported; the numerical results and running times obtained for our test set, composed of large-scale real-life problems, confirm the efficiency of this procedure.
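
    The scheme described in this abstract relies on a split-variable view of the scenario tree; the following display is only a generic sketch of that construction, with notation assumed here rather than taken from the paper. Each scenario s gets its own copy x_s of the decisions, nonanticipativity constraints force scenarios that share history to agree, and an augmented Lagrangian relaxes exactly these coupling constraints:

    \[
    \min_{x_1,\dots,x_S}\ \sum_{s=1}^{S} p_s f_s(x_s)
    \quad \text{s.t.}\quad x_s \in X_s \ (\text{network constraints}),\qquad N x = 0 \ (\text{nonanticipativity}),
    \]
    \[
    L_\rho(x,\mu) \;=\; \sum_{s=1}^{S} p_s f_s(x_s) \;+\; \mu^\top N x \;+\; \tfrac{\rho}{2}\,\lVert N x \rVert^2 .
    \]

    Only the quadratic penalty couples the scenario copies; handling it with block-wise (Jacobi or Gauss-Seidel type) updates leaves, at each inner step, one independent network subproblem per scenario group, which is what makes a statically load-balanced parallel implementation natural.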

    Reformulation and decomposition of integer programs

    In this survey we examine ways to reformulate integer and mixed integer programs. Typically, but not exclusively, one reformulates so as to obtain stronger linear programming relaxations, and hence better bounds for use in a branch-and-bound based algorithm. First we cover in detail reformulations based on decomposition, such as Lagrangean relaxation, Dantzig-Wolfe column generation and the resulting branch-and-price algorithms. This is followed by an examination of Benders' type algorithms based on projection. Finally we discuss in detail extended formulations involving additional variables that are based on problem structure. These can often be used to provide strengthened a priori formulations. Reformulations obtained by adding cutting planes in the original variables are not treated here.
    Keywords: integer program, Lagrangean relaxation, column generation, branch-and-price, extended formulation, Benders' algorithm
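
    As a quick illustration of the Dantzig-Wolfe reformulation surveyed here (the notation below is chosen for this sketch, and X is assumed bounded for simplicity): starting from \min\{c^\top x : Ax \ge b,\ x \in X\}, where X is the substructure kept in the subproblem, every point of \mathrm{conv}(X) is written as a convex combination of its extreme points x^1,\dots,x^Q, giving the master problem

    \[
    \min_{\lambda \ge 0}\ \sum_{q=1}^{Q} (c^\top x^q)\,\lambda_q
    \quad \text{s.t.}\quad \sum_{q=1}^{Q} (A x^q)\,\lambda_q \ge b, \qquad \sum_{q=1}^{Q} \lambda_q = 1 .
    \]

    Columns are generated on demand: given master duals (\pi,\sigma), the pricing subproblem \min_{x \in X}\ (c - A^\top\pi)^\top x - \sigma either returns a column with negative reduced cost or certifies that the current restricted master is optimal; branch-and-price embeds this loop within branch-and-bound.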

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Regularized Decomposition of Stochastic Programs: Algorithmic Techniques and Numerical Results

    A finitely convergent non-simplex method for large-scale structured linear programming problems arising in stochastic programming is presented. The method combines the ideas of the Dantzig-Wolfe decomposition principle and modern nonsmooth optimization methods. Algorithmic techniques taking advantage of the properties of stochastic programs are described, and numerical results for large real-world problems are reported.
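
    A generic sketch of the regularized master problem behind this kind of approach (notation assumed here, not quoted from the paper): at iteration k the second-stage costs are approximated by bundles of cuts, and a quadratic term keeps the iterate near the incumbent \bar{x}^k,

    \[
    \min_{x,\ \theta_1,\dots,\theta_S}\ c^\top x + \sum_{s=1}^{S} \theta_s + \frac{1}{2\sigma}\,\lVert x - \bar{x}^k \rVert^2
    \quad \text{s.t.}\quad x \in X, \qquad \theta_s \ \ge\ \alpha_s^{j} + (\beta_s^{j})^\top x \quad \forall\, j \in J_s^k .
    \]

    The centre \bar{x}^k is moved only after sufficient descent (a serious step); this proximal stabilization of the cutting-plane iterates is what links the Dantzig-Wolfe decomposition principle to the nonsmooth, bundle-type methods mentioned in the abstract.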

    Efficient solution of two-stage stochastic linear programs using interior point methods

    Solving deterministic equivalent formulations of two-stage stochastic linear programs using interior point methods may be computationally difficult due to the need to factorize quite dense search direction matrices (e.g., AA^T). Several methods for improving the algorithmic efficiency of interior point algorithms by reducing the density of these matrices have been proposed in the literature. Reformulating the program decreases the effort required to find a search direction, but at the expense of increased problem size. Using transpose product formulations (e.g., A^T A) works well but is highly problem dependent. Schur complements may require solutions with potentially near-singular matrices. Explicit factorizations of the search direction matrices eliminate these problems while only requiring the solution to several small, independent linear systems. These systems may be distributed across multiple processors. Computational experience with these methods suggests that substantial performance improvements are possible with each method and that, generally, explicit factorizations require the least computational effort.
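
    The density issue mentioned above comes from the dual block-angular shape of the two-stage deterministic equivalent; with a standard (assumed) notation for the first-stage block A_0, technology matrices T_s, recourse matrices W_s, and the diagonal scaling \Theta of the interior point method, the structure is

    \[
    A = \begin{pmatrix} A_0 & & & \\ T_1 & W_1 & & \\ \vdots & & \ddots & \\ T_S & & & W_S \end{pmatrix},
    \qquad
    A\,\Theta\,A^\top = \begin{pmatrix}
    A_0\Theta_0 A_0^\top & A_0\Theta_0 T_1^\top & \cdots & A_0\Theta_0 T_S^\top \\
    T_1\Theta_0 A_0^\top & T_1\Theta_0 T_1^\top + W_1\Theta_1 W_1^\top & & \vdots \\
    \vdots & & \ddots & \\
    T_S\Theta_0 A_0^\top & \cdots & & T_S\Theta_0 T_S^\top + W_S\Theta_S W_S^\top
    \end{pmatrix}.
    \]

    The cross terms T_i\Theta_0 T_j^\top fill in every off-diagonal block, so factorizing A\Theta A^\top directly can be dense even when A is very sparse; the reformulation, transpose-product, Schur complement, and explicit-factorization strategies compared in the paper are different ways of confining the work to the small per-scenario blocks W_s\Theta_s W_s^\top plus a dense system of first-stage dimension, and those per-scenario solves are independent and easy to distribute.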

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring into play the primal and the dual problems is, however, a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization, with an emphasis on sparsity issues. In this paper, we aim at presenting the principles of primal-dual approaches while giving an overview of numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
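
    A representative member of this family (given here only as an illustration, with assumed notation; the paper covers a much broader range of schemes) is the primal-dual proximal splitting for problems of the form \min_x f(x) + g(Kx), rewritten as the saddle-point problem

    \[
    \min_{x}\ \max_{y}\ \ \langle K x,\, y\rangle + f(x) - g^{*}(y),
    \]
    \[
    y^{k+1} = \operatorname{prox}_{\sigma g^{*}}\!\big(y^{k} + \sigma K \bar{x}^{k}\big), \qquad
    x^{k+1} = \operatorname{prox}_{\tau f}\!\big(x^{k} - \tau K^{\top} y^{k+1}\big), \qquad
    \bar{x}^{k+1} = x^{k+1} + \theta\,\big(x^{k+1} - x^{k}\big),
    \]

    which converges for \theta = 1 and \sigma\tau\lVert K\rVert^{2} < 1. Each iteration only needs the two proximal operators and multiplications by K and K^\top, which is why such schemes scale to the large signal/image processing and machine learning problems discussed in the paper.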