
    Generalized Bundle Methods

    We study a class of generalized bundle methods in which the stabilizing term can be any closed convex function satisfying certain properties. This setting covers several algorithms from the literature that have so far been regarded as distinct. Under different hypotheses on the stabilizing term and/or the function to be minimized, we prove finite termination, asymptotic convergence, and finite convergence to an optimal point, with or without limits on the number of serious steps and/or requiring the proximal parameter to go to infinity. The convergence proofs leave a high degree of freedom in the crucial implementation features of the algorithm, i.e., the management of the bundle of subgradients (β-strategy) and of the proximal parameter (t-strategy). We extensively exploit a dual view of bundle methods, which are shown to be a dual ascent approach to one nonlinear problem in an appropriate dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This allows us to precisely characterize the changes in the subproblems during the serious steps, since the dual problem is not tied to the local concept of ε-subdifferential. For some of the proofs, a generalization of inf-compactness, called *-compactness, is required; this concept is related to that of asymptotically well-behaved functions.
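    As a deliberately tiny illustration of this setting, the sketch below implements a 1-D proximal bundle method with the classical quadratic stabilizer (1/2t)(x − x̄)². The test function, the parameters, and the exact-enumeration master solve are illustrative choices for this sketch, not constructs from the paper.

```python
def f(x):        # illustrative convex test function: |x - 1| + 0.2 x^2
    return abs(x - 1.0) + 0.2 * x * x

def subgrad(x):  # one subgradient of f at x
    return (1.0 if x >= 1.0 else -1.0) + 0.4 * x

def solve_master(bundle, center, t):
    # Stabilized master problem: minimize  m(x) + (1/(2t)) (x - center)^2,
    # where m(x) = max_i [ f(x_i) + g_i (x - x_i) ] is the cutting-plane model.
    # In 1-D the minimizer is either the vertex of one quadratic piece or a
    # breakpoint of m, so enumerating those candidates solves it exactly.
    def phi(x):
        return max(fx + g * (x - xi) for xi, fx, g in bundle) + \
               (x - center) ** 2 / (2.0 * t)
    cands = [center - t * g for _, _, g in bundle]      # vertices of the pieces
    for i in range(len(bundle)):
        for j in range(i + 1, len(bundle)):             # intersections of cuts
            xi, fi, gi = bundle[i]
            xj, fj, gj = bundle[j]
            if abs(gi - gj) > 1e-12:
                cands.append((fj - gj * xj - fi + gi * xi) / (gi - gj))
    x_best = min(cands, key=phi)
    model_val = max(fx + g * (x_best - xi) for xi, fx, g in bundle)
    return x_best, model_val

def proximal_bundle(x0, t=1.0, tol=1e-8, max_iter=100):
    center, bundle = x0, [(x0, f(x0), subgrad(x0))]
    for _ in range(max_iter):
        x_new, model_val = solve_master(bundle, center, t)
        delta = f(center) - model_val            # predicted decrease
        if delta <= tol:
            break
        if f(x_new) <= f(center) - 0.1 * delta:  # descent test
            center = x_new                       # serious step
        bundle.append((x_new, f(x_new), subgrad(x_new)))  # otherwise: null step
    return center

x_star = proximal_bundle(5.0)    # minimizer of f is x = 1
```

    Keeping every cut and a fixed t corresponds to the simplest possible β- and t-strategies; the freedom the paper's convergence proofs leave lies precisely in doing better than this.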

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows one to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: form of the stabilization, form of the model, and approximate evaluation of the function.

    A warm-start approach for large-scale stochastic linear programs


    Multistage quadratic stochastic programming

    Multistage stochastic programming is an important tool in medium- to long-term planning where there are uncertainties in the data. In this thesis, we consider a special case of multistage stochastic programming in which each subprogram is a convex quadratic program. The results are also applicable if the quadratic objectives are replaced by convex piecewise quadratic functions. Convex piecewise quadratic functions have important applications in financial planning problems, as they can be used as very flexible risk measures. The resulting stochastic programs can serve as multi-period portfolio planning problems tailored to the needs of individual investors. Using techniques from convex analysis and sensitivity analysis, we show that each subproblem of a multistage quadratic stochastic program is a polyhedral piecewise quadratic program with a convex Lipschitz objective. The objective of any subproblem is differentiable with Lipschitz gradient if all its descendant problems have unique dual variables, which can be guaranteed if the linear independence constraint qualification is satisfied. Expressions for arbitrary elements of the subdifferential and generalized Hessian at a point can be calculated for quadratic pieces that are active at the point. Generalized Newton methods with linesearch are proposed for solving multistage quadratic stochastic programs. The algorithms converge globally. If the piecewise quadratic objective is differentiable and strictly convex at the solution, then convergence is also finite. A generalized Newton algorithm is implemented in Matlab. Numerical experiments have been carried out to demonstrate its effectiveness. The algorithm is tested on random data with 3, 4 and 5 stages with a maximum of 315 scenarios. The algorithm has also been successfully applied to two sets of test data from a capacity expansion problem and a portfolio management problem.
Various strategies have been implemented to improve the efficiency of the proposed algorithm. We experimented with trust region methods with different parameters, with using an advanced solution from a smaller version of the original problem, and with sorting the stochastic right-hand sides to encourage faster convergence. The numerical results show that the proposed generalized Newton method is a highly accurate and effective method for multistage quadratic stochastic programs. For problems with the same number of stages, solution times increase linearly with the number of scenarios.
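The flavour of a generalized Newton iteration can be sketched on a 1-D convex piecewise quadratic (the function, constants and step rule below are illustrative stand-ins, not the thesis's algorithm): the search direction uses the curvature of whichever quadratic piece is active at the current point, safeguarded by an Armijo linesearch.

```python
def f(x):
    # convex piecewise quadratic: two pieces glued at 0 with matching
    # values and gradients, so f is C^1 but not twice differentiable at 0
    return 0.5 * x * x if x >= 0 else x * x

def grad(x):
    return x if x >= 0 else 2.0 * x

def gen_hessian(x):
    # an element of the generalized Hessian: curvature of an active piece
    return 1.0 if x >= 0 else 2.0

def generalized_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) <= tol:
            return x
        d = -g / gen_hessian(x)           # Newton direction from active piece
        t = 1.0
        while f(x + t * d) > f(x) + 0.25 * t * g * d:   # Armijo linesearch
            t *= 0.5
        x += t * d
    return x
```

Because this example is differentiable and strictly convex at the solution x = 0, the iteration terminates finitely (here in a single step from either side), mirroring the finite-convergence claim in the abstract.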

    Advances in Interior Point Methods for Large-Scale Linear Programming

    This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of the symmetric neighbourhood as the driving tool for the analysis of the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity pairs, besides the lower bound already present in the common N_{-∞} neighbourhood. This allows the algorithm to keep the spread among complementarity pairs under control and to reduce it with the barrier parameter ÎŒ. We show that a long-step feasible algorithm based on this neighbourhood is globally convergent and converges in O(nL) iterations. The use of the symmetric neighbourhood and the recent theoretical understanding of the behaviour of Mehrotra's corrector direction motivate the introduction of a weighting mechanism that can be applied to any corrector direction, whether originating from Mehrotra's predictor–corrector algorithm or as part of the multiple centrality correctors technique. This modification in the way a correction is applied aims to ensure that any computed search direction can positively contribute to a successful iteration by increasing the overall stepsize, thus avoiding the case in which a corrector is rejected. The usefulness of the weighting strategy is documented through complete numerical experiments on various sets of publicly available test problems. The implementation within the HOPDM interior point code shows remarkable time savings for large-scale linear programming problems. The second technique develops an efficient way of constructing a starting point for structured large-scale stochastic linear programs. We generate a computationally viable warm-start point by solving to low accuracy a stochastic problem of much smaller dimension.
The reduced problem is the deterministic equivalent program corresponding to an event tree composed of a restricted number of scenarios. The solution to the reduced problem is then expanded to the size of the problem instance and used to initialise the interior point algorithm. We present theoretical conditions that the warm-start iterate has to satisfy in order to be successful. We implemented this technique in both the HOPDM and the OOPS frameworks, and its performance is verified through a series of tests on problem instances coming from various stochastic programming sources.
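A membership test for the symmetric neighbourhood described above can be sketched as follows; the function name and the choice gamma = 0.1 are illustrative, not taken from the thesis.

```python
def in_symmetric_neighbourhood(x, s, gamma=0.1):
    """Sketch of the symmetric-neighbourhood test for a primal-dual pair.

    Besides the usual lower bound x_i * s_i >= gamma * mu, the symmetric
    neighbourhood also imposes the upper bound x_i * s_i <= mu / gamma,
    which keeps the spread of the complementarity pairs under control.
    """
    n = len(x)
    mu = sum(xi * si for xi, si in zip(x, s)) / n   # barrier parameter
    return all(gamma * mu <= xi * si <= mu / gamma for xi, si in zip(x, s))
```

A perfectly centred point (all products equal to ÎŒ) always passes, while a point with one complementarity product far out of line with the others fails the two-sided test even though it would satisfy the lower bound alone.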

    A New Unblocking Technique to Warmstart Interior Point Methods Based on Sensitivity Analysis

    One of the main drawbacks associated with Interior Point Methods (IPM) is the perceived lack of an efficient warmstarting scheme which would enable the use of information from a previous solution of a similar problem. Recently there has been renewed interest in the subject. A common problem with warmstarting for IPM is that an advanced starting point which is close to the boundary of the feasible region, as is typical, might lead to blocking of the search direction. Several techniques have been proposed to address this issue. Most of these aim to lead the iterate back into the interior of the feasible region; we classify them as either "modification steps" or "unblocking steps", depending on whether the modification takes place before solving the modified problem, to prevent future problems, or during the solution, if and when problems become apparent. A new "unblocking" strategy is suggested which attempts to directly address the issue of blocking by performing sensitivity analysis on the Newton step, with the aim of increasing the size of the step that can be taken. This analysis is used in a new technique to warmstart interior point methods.
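    The blocking behaviour itself comes from the standard ratio test that truncates the Newton step at the positivity boundary. A sketch (the function name and the step-damping factor are illustrative choices):

```python
def max_step(x, dx, eta=0.99995):
    """Largest step alpha in (0, 1] keeping x + alpha*dx > 0, plus the
    index of the blocking component.

    In an interior point method the step along the Newton direction is
    truncated by the first component that would hit zero; a warmstart
    iterate sitting close to the boundary often produces such a blocking
    component with a tiny ratio -x_i / dx_i, stalling progress.
    """
    alpha, blocker = 1.0, None
    for i, (xi, di) in enumerate(zip(x, dx)):
        if di < 0 and -xi / di < alpha:
            alpha, blocker = -xi / di, i
    if blocker is None:
        return 1.0, None
    return eta * alpha, blocker    # back off slightly from the boundary

# a component near the boundary with a negative direction entry blocks the step
alpha, blocking_index = max_step([1.0, 1e-6, 0.5], [-0.5, -1e-2, 0.1])
```

Identifying which component blocks, and how sensitive the step length is to it, is exactly the kind of information a sensitivity-based unblocking strategy can act on.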

    Decomposition Methods in Column Generation and Data-Driven Stochastic Optimization

    In this thesis, we focus on tackling large-scale problems arising in two-stage stochastic optimization and the related Dantzig-Wolfe decomposition. We start with a deterministic setting, where we consider linear programs with a block structure, but data cannot be stored centrally due to privacy concerns or decentralized storage of large datasets. The larger portion of the thesis is dedicated to the stochastic setting, where we study two-stage distributionally robust optimization under the Wasserstein ambiguity set to tackle problems with limited data. In Chapter 2, joint work with Shabbir Ahmed, we propose a fully distributed Dantzig-Wolfe decomposition (DWD) algorithm using the Alternating Direction Method of Multipliers (ADMM). DWD is a classical algorithm used to solve large-scale linear programs whose constraint matrix is a set of independent blocks coupled with a set of linking rows, but it requires solving a master problem centrally, which can be undesirable or infeasible in certain cases due to privacy concerns or decentralized storage of data. To this end, we develop a consensus-based Dantzig-Wolfe decomposition algorithm where the master problem is solved in a distributed fashion. We detail the computational and algorithmic challenges of our method, provide bounds on the optimality gap and feasibility violation, and perform extensive computational experiments on instances of the cutting stock problem and synthetic instances using a Message Passing Interface (MPI) implementation, where we obtain high-quality solutions in reasonable time. In Chapters 3 and 4, we turn our focus to stochastic optimization, specifically applications where data is scarce and the underlying probability distribution is difficult to estimate. Chapter 3 is joint work with Anirudh Subramanyam and Kibaek Kim. Here, we consider two-stage conic DRO under the Wasserstein ambiguity set with zero-one uncertainties.
We are motivated by problems arising in network optimization, where binary random variables represent failures of network components. We are interested in applications where such failures are rare and have a high impact, making it difficult to estimate failure probabilities. By using ideas from bilinear programming and penalty methods, we provide tractable approximations of our two-stage DRO model, which can be iteratively improved using lift-and-project techniques. We illustrate the computational and out-of-sample performance of our method on the optimal power flow problem with random transmission line failures and a multi-commodity network design problem with random node failures. In Chapter 4, joint work with Alejandro Toriello and George Nemhauser, we study a two-stage model which arises in natural disaster management applications, where the first stage is a facility location problem, deciding where to open facilities and pre-allocate resources, and the second stage is a fixed-charge transportation problem, routing resources to affected areas after a disaster. We solve a two-stage DRO model under the Wasserstein set to deal with the lack of available data. The presence of binary variables in the second stage significantly complicates the problem. We develop an efficient column-and-constraint generation algorithm by leveraging the structure of our support set and second-stage value function, and show that our results extend to the case where the second stage is a fixed-charge network flow problem. We provide a detailed discussion of our implementation, and end the chapter with computational experiments on synthetic instances and a case study of hurricane threats on the coastal states of the United States. We end the thesis with concluding remarks and potential directions for future research.
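The consensus idea behind solving a master problem in a distributed fashion can be sketched on a toy separable problem; the quadratic block objectives, the penalty parameter rho, and the iteration count below are illustrative stand-ins, not the thesis's model.

```python
def consensus_admm(a, rho=1.0, iters=200):
    """Minimise sum_i 0.5*(x_i - a_i)^2 subject to x_1 = ... = x_n (= z).

    Each block update uses only its own data a_i, so it could run on a
    separate machine; only x_i + u_i is communicated for the averaging
    step, which plays the role of the central master problem.
    """
    n = len(a)
    z = 0.0
    u = [0.0] * n
    for _ in range(iters):
        # local block updates: argmin_x 0.5*(x-a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(ai + rho * (z - ui)) / (1.0 + rho) for ai, ui in zip(a, u)]
        z = sum(xi + ui for xi, ui in zip(x, u)) / n   # consensus (averaging)
        u = [ui + xi - z for ui, xi in zip(u, x)]      # dual (disagreement) update
    return z

z = consensus_admm([1.0, 2.0, 6.0])   # consensus solution is the mean, 3.0
```

The dual variables u_i accumulate each block's disagreement with the consensus value, which is what eventually forces all blocks to agree.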

    Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization

    Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice of finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, where a special focus is put on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well studied in linear programming, integer semidefinite programs (ISDPs) have been considered only recently. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the ChvĂĄtal–Gomory closure, total dual integrality and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among which the quadratic cycle cover problem, the quadratic traveling salesman problem and the graph partition problem. Our approaches lead to novel, strong and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
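    In standard notation (with ⟚·,·⟩ the trace inner product on symmetric matrices), a generic integer semidefinite program of the kind studied here can be written as

```latex
\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
\quad \text{s.t.} \quad
\langle A_j, X \rangle = b_j, \;\; j = 1, \dots, m,
\qquad X \succeq 0, \qquad X \in \mathbb{Z}^{n \times n},
```

    i.e. a semidefinite program whose matrix variable is additionally required to have integer entries; dropping the integrality constraint recovers the SDP relaxation.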

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program in survey style as well as with all details, and information on the social program, the venue, special meetings, and more.

    Advances in interior point methods and column generation

    In this thesis we study how to efficiently combine the column generation technique (CG) and interior point methods (IPMs) for solving the relaxation of a selection of integer programming problems. In order to obtain an efficient method, a change in the column generation technique and a new reoptimization strategy for a primal-dual interior point method are proposed. It is well known that the standard column generation technique suffers from unstable behaviour due to the use of optimal dual solutions that are extreme points of the restricted master problem (RMP). This unstable behaviour slows down column generation, so variations of the standard technique which rely on interior points of the dual feasible set of the RMP have been proposed in the literature. Among these techniques is the primal-dual column generation method (PDCGM), which relies on sub-optimal and well-centred dual solutions. This technique dynamically adjusts the column generation tolerance as the method approaches optimality. Also, it relies on the notion of the symmetric neighbourhood of the central path, so that sub-optimal and well-centred solutions are obtained. We provide a thorough theoretical analysis that guarantees the convergence of the primal-dual approach even though sub-optimal solutions are used in the course of the algorithm. Additionally, we present a comprehensive computational study of the solution of linear relaxed formulations obtained after applying the Dantzig-Wolfe decomposition principle to the cutting stock problem (CSP), the vehicle routing problem with time windows (VRPTW), and the capacitated lot sizing problem with setup times (CLSPST). We compare the performance of the PDCGM with the standard column generation method (SCGM) and the analytic centre cutting-plane method (ACCPM). Overall, the PDCGM achieves the best performance when compared to the SCGM and the ACCPM when solving challenging instances from a column generation perspective.
One important characteristic of this column generation strategy is that no specific tuning is necessary, and the algorithm poses the same level of difficulty as the standard column generation method. The natural stabilization available in the PDCGM due to the use of sub-optimal well-centred interior point solutions is a very attractive feature of this method. Moreover, the larger the instance, the better is the relative performance of the PDCGM in terms of column generation iterations and CPU time. The second part of this thesis is concerned with the development of a new warmstarting strategy for the PDCGM. It is well known that taking advantage of the previously solved RMP could lead to important savings in solving the modified RMP. However, this is still an open question for applications arising in an integer optimization context and the PDCGM. Although the current warmstarting strategy in the PDCGM works well in practice, it neither guarantees full restoration of feasibility nor considers the quality of the warmstarted iterate after new columns are added. The main motivation for the design of the new warmstarting strategy presented in this thesis is to close this theoretical gap. Under suitable assumptions, the warmstarting procedure proposed in this thesis restores primal and dual feasibility after the addition of new columns in one step. The direction is determined so that the modification of small components at a particular solution is not large. Additionally, the strategy enables control over the new duality gap by considering an expanded symmetric neighbourhood of the central path. As observed from our computational experiments solving the CSP and the VRPTW, one can conclude that the warmstarting strategies for the PDCGM are useful when dense columns are added to the RMP (CSP), since they consistently reduce the CPU time and also the number of iterations required to solve the RMPs on average.
On the other hand, when sparse columns are added (VRPTW), the coldstart used by the interior point solver HOPDM becomes very efficient, so warmstarting does not make the task of solving the RMPs any easier.
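The dynamic adjustment of the column generation (oracle) tolerance can be sketched as follows; the proportionality rule, the scale factor, and the cap are illustrative guesses at the general idea, not the PDCGM's exact formula.

```python
def subproblem_tolerance(rel_gap, scale=10.0, eps_max=1.0):
    # Loose tolerance far from optimality, tightening in proportion to the
    # current relative duality gap of the restricted master problem.
    return min(eps_max, rel_gap / scale)

# As the outer column generation loop closes the gap, the accuracy demanded
# of each restricted-master solve increases accordingly.
tols = [subproblem_tolerance(g) for g in (0.5, 0.1, 0.01)]
```

Early outer iterations thus work with cheap, sub-optimal but well-centred dual solutions, and only the final iterations pay for high-accuracy solves.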