
    On High-Performance Benders-Decomposition-Based Exact Methods with Application to Mixed-Integer and Stochastic Problems

    Stochastic integer programming (SIP) combines the difficulty of uncertainty and non-convexity, and constitutes a class of extremely challenging problems to solve. Efficiently solving SIP problems is of high importance due to their vast applicability, so the primary focus of this dissertation is on solution methods for SIPs. We consider two-stage SIPs and present several enhanced decomposition algorithms for solving them. Our main goal is to develop new decomposition schemes and several acceleration techniques that strengthen the classical decomposition methods and make it possible to solve various SIP problems to optimality. In the first essay of this dissertation, we present a state-of-the-art survey of the Benders decomposition algorithm. We provide a taxonomy of the algorithmic enhancements and acceleration strategies for this algorithm to synthesize the literature and to identify shortcomings, trends, and potential research directions. In addition, we discuss the use of Benders decomposition to develop efficient (meta-)heuristics, describe the limitations of the classical algorithm, and present extensions enabling its application to a broader range of problems. Next, we develop various techniques to overcome some of the main shortfalls of the Benders decomposition algorithm. We propose the use of cutting planes, partial decomposition, heuristics, stronger cuts, and warm-start strategies to alleviate the numerical challenges arising from instabilities, primal inefficiencies, weak optimality/feasibility cuts, and weak linear relaxations. We test the proposed strategies on benchmark instances of stochastic network design problems; numerical experiments illustrate the computational efficiency of the proposed techniques. In the third essay of this dissertation, we propose a new, high-performance decomposition approach, called the Benders dual decomposition method. The method is based on a specific reformulation of the Benders subproblems in which local copies of the master variables are introduced and then priced out into the objective function. We show that the proposed method significantly alleviates the primal and dual shortfalls of the Benders decomposition method and that it is closely related to the Lagrangian dual decomposition method. Computational results on various SIP problems show the superiority of this method over the classical decomposition methods as well as CPLEX 12.7. Finally, we study the parallelization of the Benders decomposition method to extend its computational reach to larger SIP instances. The available parallel variants of this method impose rigid synchronization between the master and slave processors, so they suffer from significant load imbalance when applied to SIP problems, mainly because the hard mixed-integer master problem can take hours to optimize. We thus propose an asynchronous parallel Benders method in a branch-and-cut framework. Relaxing the synchronization requirements introduces convergence and efficiency issues, which we address by introducing several acceleration techniques and search strategies. In particular, we propose the use of artificial subproblems, cut generation, cut aggregation, cut management, and cut propagation. The results indicate that our algorithm reaches higher speedup rates than conventional synchronized methods and is several orders of magnitude faster than CPLEX 12.7.
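    To make the decomposition pattern described above concrete, here is a minimal sketch of textbook Benders decomposition on a toy two-stage linear program. This is not the thesis's algorithm: the toy data, the solver choice (SciPy's HiGHS-backed linprog), and all names are ours, and feasibility cuts are omitted because the toy has complete recourse.

    import numpy as np
    from scipy.optimize import linprog

    # Toy two-stage LP:  min c.x + q.y  s.t.  W y >= h - T x,  0 <= x <= 10,  y >= 0.
    # Recourse is complete here (y can always be pushed up), so the sketch needs
    # optimality cuts only; a real implementation would also generate feasibility cuts.
    c, q = np.array([1.0]), np.array([2.0])
    T, W, h = np.array([[1.0]]), np.array([[1.0]]), np.array([5.0])

    cuts = []                              # dual vectors pi, one per cut: theta >= pi.(h - T x)
    x_bar, UB = np.array([0.0]), np.inf
    for _ in range(50):
        # Subproblem for fixed x_bar:  min q.y  s.t.  W y >= h - T x_bar,  y >= 0.
        sub = linprog(q, A_ub=-W, b_ub=-(h - T @ x_bar),
                      bounds=[(0, None)] * len(q), method="highs")
        pi = -sub.ineqlin.marginals        # optimal duals of the recourse constraints
        UB = min(UB, c @ x_bar + sub.fun)  # x_bar plus its recourse cost is feasible
        cuts.append(pi)
        # Master over (x, theta):  min c.x + theta  s.t. all cuts, 0 <= x <= 10.
        A = np.array([list(-(p @ T)) + [-1.0] for p in cuts])   # -pi.T x - theta <= -pi.h
        b = np.array([-(p @ h) for p in cuts])
        mst = linprog(np.concatenate([c, [1.0]]), A_ub=A, b_ub=b,
                      bounds=[(0, 10)] * len(c) + [(None, None)], method="highs")
        x_bar, LB = mst.x[:-1], mst.fun
        if UB - LB < 1e-8:                 # lower and upper bounds meet: optimal
            break

    print(x_bar, LB, UB)                   # the toy converges to x = 5 with cost 5

    The asynchronous variant studied in the dissertation departs from this loop precisely at the blocking master solve: subproblem workers keep producing cuts instead of waiting for a single, possibly hours-long master iteration.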

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows one to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g., by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: form of the stabilization, form of the model, and approximate evaluation of the function.
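    As a compact reference point (our notation, not specific to this survey), the core of a standard proximal bundle iteration is a stabilized master problem built on a cutting-plane model of f assembled from past iterates x_i and subgradients g_i in ∂f(x_i):

    \[
    \check f_k(x) \;=\; \max_{i \in \mathcal{B}_k} \big\{ f(x_i) + g_i^\top (x - x_i) \big\},
    \qquad
    x_{k+1} \;=\; \arg\min_{x} \; \check f_k(x) + \tfrac{1}{2 t_k} \| x - \hat x_k \|^2 ,
    \]

    where \hat x_k is the stability center: a serious step moves \hat x_k to x_{k+1} when the actual decrease achieves a fixed fraction of the decrease predicted by the model, while a null step only adds the new linearization to the bundle, refining the untrusted model near \hat x_k.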

    Decentralised Optimisation and Control in Electrical Power Systems

    Emerging smart-grid-enabling technologies will allow an unprecedented degree of observability and control at all levels in a power system. Combined with flexible demand devices (e.g. electric vehicles or various household appliances), increased distributed generation, and the potential development of small-scale distributed storage, they could allow procuring energy at minimum cost and environmental impact. That, however, presupposes real-time coordination of the demand of individual households and industries down at the distribution level with generation and renewables at the transmission level. In turn, this implies the need to solve energy management problems of a much larger scale than those we solve today, which raises significant computational and communication challenges. The need for an answer to these problems is reflected in today's power systems literature, where a significant number of papers cover subjects such as generation and/or demand management at the transmission and/or distribution levels, electric vehicle charging, voltage control device settings, etc. The methods used are centralized or decentralized, handle continuous and/or discrete controls, are approximate or exact, and incorporate a wide range of problem formulations. All these papers tackle aspects of the same problem, i.e. the close-to-real-time determination of operating set-points for all controllable devices available in a power system. Yet a consensus regarding the associated formulation and time-scale of application has not been reached. Given the large scale of the problem, decentralization is unavoidably part of the solution. In this work we explore existing and developing trends in energy management and place them into perspective through a complete framework that allows optimizing energy usage at all levels of a power system.

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Optimal Convergence Rates for the Proximal Bundle Method

    We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information, such as growth constants, that may be hard to obtain in practice. We therefore provide a parallelizable variant of the bundle method that can be applied without prior knowledge of function parameters while maintaining near-optimal rates; the practical impact of this scheme is limited, since we incur a (parallelizable) log factor in the complexity. These results improve on the scarce existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings.
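    For reference (our phrasing of a standard condition, not the paper's exact assumptions), a Hölder growth condition of order p >= 1 lower-bounds the objective gap by a power of the distance to the solution set X^*:

    \[
    f(x) - \min f \;\ge\; \mu \,\mathrm{dist}(x, X^*)^{p} \quad \text{for all } x \text{ with } \mathrm{dist}(x, X^*) \le r ,
    \]

    with p = 2 recovering the familiar quadratic growth; "adaptive" above means the method accelerates under such a condition without being told \mu or p.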

    Design and Analysis of Efficient Freight Transportation Networks in a Collaborative Logistics Environment

    Growing total freight volumes, shrinking volumes per freight unit, and tightening delivery deadlines have increased the burden on today's freight transportation systems. As freight demand trends evolve, freight distribution processes must evolve with them. Today's freight transportation processes contain many inefficiencies that could be streamlined, thereby mitigating concerns such as increased operational costs, road congestion, and environmental degradation. Collaborative logistics is an approach in which supply chain partners collaborate horizontally and/or vertically to create a centralized network that is more efficient and serves a common goal or objective. In this dissertation, we study intermodal transportation and cross-docking, two major pillars of efficient, cheap, and fast freight transportation, in a collaborative environment. We design an intermodal network from a centralized perspective in which all participants (intermodal operators, shippers, carriers, and customers) strive towards a synchronized and cost-efficient freight network. We also present a cross-dock scheduling problem for competitive shippers using a centralized cross-dock facility, develop fast heuristic and meta-heuristic approaches to solve large-scale real-world instances, and draw key insights from the perspectives of a cross-dock operator and inbound carriers.

    Decomposition Methods in Column Generation and Data-Driven Stochastic Optimization

    In this thesis, we focus on tackling large-scale problems arising in two-stage stochastic optimization and the related Dantzig-Wolfe decomposition. We start with a deterministic setting, where we consider linear programs with a block structure whose data cannot be stored centrally due to privacy concerns or the decentralized storage of large datasets. The larger portion of the thesis is dedicated to the stochastic setting, where we study two-stage distributionally robust optimization (DRO) under the Wasserstein ambiguity set to tackle problems with limited data. In Chapter 2, joint work with Shabbir Ahmed, we propose a fully distributed Dantzig-Wolfe decomposition (DWD) algorithm using the Alternating Direction Method of Multipliers (ADMM). DWD is a classical algorithm for solving large-scale linear programs whose constraint matrix consists of independent blocks coupled by a set of linking rows, but it requires solving a master problem centrally, which can be undesirable or infeasible in certain cases due to privacy concerns or decentralized storage of data. To this end, we develop a consensus-based Dantzig-Wolfe decomposition algorithm in which the master problem is solved in a distributed fashion. We detail the computational and algorithmic challenges of our method, provide bounds on the optimality gap and feasibility violation, and perform extensive computational experiments on instances of the cutting stock problem and synthetic instances using a Message Passing Interface (MPI) implementation, where we obtain high-quality solutions in reasonable time. In Chapters 3 and 4, we turn our focus to stochastic optimization, specifically applications where data is scarce and the underlying probability distribution is difficult to estimate. Chapter 3 is joint work with Anirudh Subramanyam and Kibaek Kim. Here, we consider two-stage conic DRO under the Wasserstein ambiguity set with zero-one uncertainties. We are motivated by problems arising in network optimization, where binary random variables represent failures of network components. We are interested in applications where such failures are rare and have a high impact, making failure probabilities difficult to estimate. Using ideas from bilinear programming and penalty methods, we provide tractable approximations of our two-stage DRO model, which can be iteratively improved using lift-and-project techniques. We illustrate the computational and out-of-sample performance of our method on an optimal power flow problem with random transmission line failures and a multi-commodity network design problem with random node failures. In Chapter 4, joint work with Alejandro Toriello and George Nemhauser, we study a two-stage model arising in natural disaster management, where the first stage is a facility location problem, deciding where to open facilities and pre-allocate resources, and the second stage is a fixed-charge transportation problem, routing resources to affected areas after a disaster. We solve a two-stage DRO model under the Wasserstein set to deal with the lack of available data. The presence of binary variables in the second stage significantly complicates the problem. We develop an efficient column-and-constraint generation algorithm by leveraging the structure of our support set and second-stage value function, and we show that our results extend to the case where the second stage is a fixed-charge network flow problem. We provide a detailed discussion of our implementation and end the chapter with computational experiments on synthetic instances and a case study of hurricane threats on the coastal states of the United States. We end the thesis with concluding remarks and potential directions for future research.
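    As context for Chapters 3 and 4 (a generic statement in our notation, not the thesis's exact models), two-stage DRO under a Wasserstein ambiguity set of radius \varepsilon around the empirical distribution \hat{\mathbb{P}}_N of the N observed samples reads

    \[
    \min_{x \in X} \; c^\top x \;+\; \sup_{\mathbb{P} \,:\, W(\mathbb{P},\, \hat{\mathbb{P}}_N) \le \varepsilon} \; \mathbb{E}_{\xi \sim \mathbb{P}} \big[ Q(x, \xi) \big],
    \]

    where Q(x, \xi) is the second-stage value function and W the Wasserstein distance; the radius \varepsilon controls how strongly the model hedges against distributions other than the empirical one, which is precisely what makes the approach attractive when data is scarce.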