80 research outputs found

    Branch-and-price and multicommodity flows

    Doctoral thesis in Production and Systems Engineering (Operational Research area). In this thesis, we address column generation based methods for linear and integer programming and apply them to three multicommodity flow problems. For (mixed) integer programming problems, the approach taken consists in reformulating an original model using the Dantzig-Wolfe decomposition principle and then combining column generation with branch-and-bound (branch-and-price) in order to obtain optimal solutions. The main issue when developing a branch-and-price algorithm is the branching scheme. The approach explored in this work is to branch on the variables of the original model, keeping the structure of the subproblems of the column generation method unchanged. The incorporation of cuts (branch-and-price-and-cut), again without changing the structure of the subproblem, is also explored. Based on that general methodology, we developed a set of C++ classes (ADDing - Automatic Dantzig-Wolfe Decomposition for INteger column Generation), which implements a branch-and-price algorithm. Its main distinctive feature is that it can be used as a “black box”: all the user is required to do is provide the original model. ADDing can also be customised for a specific problem if the user is willing to provide a subproblem solver and/or specific branching schemes. We developed column generation based algorithms for three multicommodity flow problems. In this type of problem, a set of commodities must be routed through a capacitated network at minimum cost. In the linear problem, each unit of each commodity is divisible. By using a model with variables associated with paths and circuits, we obtained significant improvements in solution times over the standard column generation approach for instances defined on planar networks (in several instances the relative improvement was greater than 60%). In the integer problem, each unit of each commodity is indivisible; the flow of a commodity can be split between different paths, but the flow on each of those paths must be integer. In general, the proposed branch-and-price algorithm was more efficient than Cplex 6.6 on the sets of instances where each commodity is defined by an origin-destination pair; for some of the other sets of instances, Cplex 6.6 gave better time results. In the binary problem, all the flow of each commodity must be routed along a single path. We developed a branch-and-price algorithm based on a knapsack decomposition and modified (by using a different branching scheme) a previously described branch-and-price-and-cut algorithm based on a path decomposition. The outcome of the computational tests was surprising, given that it is usually assumed that specific methods are more efficient than general ones: for the instances tested, a state-of-the-art general-purpose solver (Cplex 8.1) gave, in general, much better results than both decomposition approaches.
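
    As a concrete companion to the abstract above, the sketch below runs path-based column generation on a tiny linear multicommodity flow instance. It is an illustrative, assumption-level example, not the thesis' ADDing code: the toy network, the BIG penalty on the artificial columns, and the use of scipy's HiGHS-backed linprog for the restricted master are all choices made for the sketch.

```python
# Path-based column generation for a toy linear multicommodity flow problem.
# The restricted master is an LP over path columns; pricing is a shortest
# path under reduced arc costs.  All data below are illustrative.
import heapq
from scipy.optimize import linprog

arcs = {("A", "C"): (1.0, 1.0), ("B", "C"): (1.0, 1.0), ("C", "D"): (1.0, 1.0),
        ("A", "D"): (5.0, 2.0), ("B", "D"): (5.0, 2.0)}       # arc -> (cost, capacity)
commodities = {"k1": ("A", "D", 1.0), "k2": ("B", "D", 1.0)}  # name -> (origin, destination, demand)
arc_list = list(arcs)
BIG = 100.0  # penalty cost of the artificial columns that keep the master feasible

def shortest_path(src, dst, weight):
    """Plain Dijkstra over arc_list with nonnegative arc weights."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for a in arc_list:
            if a[0] == u and d + weight[a] < dist.get(a[1], float("inf")):
                dist[a[1]] = d + weight[a]
                prev[a[1]] = a
                heapq.heappush(heap, (dist[a[1]], a[1]))
    if dst not in dist:
        return None, float("inf")
    path, node = [], dst
    while node != src:
        path.append(prev[node])
        node = prev[node][0]
    return path[::-1], dist[dst]

# columns are (commodity, arcs of the path, cost); start with artificial columns only
columns = [(k, [], BIG) for k in commodities]

for _ in range(50):
    # restricted master: min column costs s.t. demand rows (=) and capacity rows (<=)
    cost = [col[2] for col in columns]
    A_eq = [[1.0 if col[0] == k else 0.0 for col in columns] for k in commodities]
    b_eq = [commodities[k][2] for k in commodities]
    A_ub = [[1.0 if a in col[1] else 0.0 for col in columns] for a in arc_list]
    b_ub = [arcs[a][1] for a in arc_list]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    sigma = dict(zip(commodities, res.eqlin.marginals))  # duals of the demand rows
    mu = dict(zip(arc_list, res.ineqlin.marginals))      # duals of the capacities (<= 0)

    # pricing: for each commodity, a shortest path under reduced arc costs c_a - mu_a
    weight = {a: arcs[a][0] - mu[a] for a in arc_list}
    new_cols = []
    for k, (orig, dest, _) in commodities.items():
        path, reduced = shortest_path(orig, dest, weight)
        if path is not None and reduced < sigma[k] - 1e-9:  # negative reduced cost
            new_cols.append((k, path, sum(arcs[a][0] for a in path)))
    if not new_cols:
        break  # no improving column exists: the master solution is optimal
    columns.extend(new_cols)

print("optimal cost:", res.fun)  # 7.0 here: one commodity uses the shared arc C->D,
                                 # the other falls back to its direct, more expensive arc
```

    Branching on the original arc variables, the scheme described in the abstract, keeps this pricing step a shortest-path computation, only over a restricted arc set.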

    Lifted edges as connectivity priors for multicut and disjoint paths

    This work studies graph decompositions and their representation by 0/1 labelings of edges. We study two problems. The first is multicut (MC), which represents decompositions of undirected graphs (clusterings of nodes into connected components). The second is disjoint paths (DP) in directed acyclic graphs, where the clusters correspond to node-disjoint paths. Unlike an alternative representation by node labeling, the number of clusters is not part of the input but is fully determined by the costs of the edges. Our main interest is to study connectivity priors represented by so-called lifted edges in the two problems. The cost of a lifted edge expresses whether its endpoints should belong to the same cluster (path) in the optimal decomposition. We call the resulting problems lifted multicut (LMC) and lifted disjoint paths (LDP). The extension of MC to LMC was originally motivated by image segmentation, where the information about the connectivity between non-neighboring pixels or superpixels led to a significant quality improvement. After that, LMC was successfully applied to other problems like multiple object tracking (MOT), which is also the main application of our proposed LDP model. Our study of lifted multicut concentrates on partial LMC, represented by a labeling of a subset of the (lifted) edges. Given a partial labeling, we show that deciding whether a complete LMC consistent with the partial labels exists is NP-complete. Similarly, we show that deciding whether an unlabeled edge exists such that its label is determined by the labels of the other edges is NP-hard. After that, we present metrics for comparing (partial) graph decompositions. Finally, we study the properties of the LMC polytope. The largest part of this work is dedicated to the proposed LDP problem. We prove that this problem is NP-hard and propose an optimal integer linear programming (ILP) solver. In order to enable its global optimization, we formulate several classes of linear inequalities that produce a high-quality LP relaxation. Additionally, we propose efficient cutting plane algorithms for separating the proposed linear inequalities. Despite the advanced constraints and efficient separation routines, the general time complexity of our optimal ILP solver remains exponential. In order to solve even larger instances, we introduce an approximate LDP solver based on Lagrange decomposition. LDP is a convenient model for MOT because the underlying disjoint paths model naturally leads to trajectories of objects. Moreover, lifted edges encode long-range temporal interactions and thus help to prevent id switches and to re-identify persons. Our tracker using the optimal LDP solver achieves nearly optimal assignments w.r.t. the input detections. Consequently, it was a leading tracker on three benchmarks of the MOT challenge (MOT15/16/17), improving significantly over the state of the art at the time of its publication. Our approximate LDP solver enables us to process the MOT15/16/17 benchmarks without sacrificing solution quality and allows for solving large and dense instances of the challenging MOT20 dataset. On all four of these standard MOT benchmarks we achieved performance comparable to or better than state-of-the-art methods (at the time of publication), including our tracker based on the optimal LDP solver.
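
    To make the 0/1 edge-labeling representation above concrete, here is a small self-contained sketch (an illustration, not the authors' code) that checks whether a labeling of the edges of an undirected graph induces a valid decomposition, i.e. a multicut: an edge labeled 1 ("cut") must not join two nodes that the edges labeled 0 ("join") already place in the same component.

```python
# 0/1 edge labeling of an undirected graph: 1 = "cut", 0 = "join".
# A labeling induces a valid decomposition (a multicut) iff no cut edge has
# both endpoints in the same component of the subgraph of join edges.
# (Illustrative sketch; union-find with path halving.)

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def is_multicut(nodes, edges, label):
    """edges: list of (u, v); label: dict mapping each edge to 0 or 1."""
    parent = {v: v for v in nodes}
    for (u, v) in edges:                   # union the endpoints of every join edge
        if label[(u, v)] == 0:
            parent[find(parent, u)] = find(parent, v)
    # a cut edge inside a single component would contradict the decomposition
    return all(find(parent, u) != find(parent, v)
               for (u, v) in edges if label[(u, v)] == 1)

# triangle example: cutting exactly one edge of a cycle is inconsistent
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(is_multicut(nodes, edges, {("a", "b"): 1, ("b", "c"): 0, ("a", "c"): 0}))  # False
print(is_multicut(nodes, edges, {("a", "b"): 1, ("b", "c"): 0, ("a", "c"): 1}))  # True
```

    Equivalently, a labeling is a multicut exactly when no cycle contains a single cut edge; the first triangle labeling violates this and is rejected.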

    The application of variational inequality theory to the study of spatial equilibrium and disequilibrium

    Includes bibliographical references (p. 26-29). Supported by the National Science Foundation VPW Program, RII-880361. By A. Nagurney.

    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its applications.

    Arc flow formulations based on dynamic programming: Theoretical foundations and applications

    Network flow formulations are among the most successful tools to solve optimization problems. Such formulations correspond to determining an optimal flow in a network. One particular class of network flow formulations is the arc flow, where variables represent flows on individual arcs of the network. For NP-hard problems, polynomial-sized arc flow models typically provide weak linear relaxations and may have too much symmetry to be efficient in practice. Instead, arc flow models of pseudo-polynomial size usually provide strong relaxations and are efficient in practice. The interest in pseudo-polynomial arc flow formulations has grown considerably in the last twenty years, in which they have been used to solve many open instances of hard problems. A remarkable advantage of pseudo-polynomial arc flow models is the possibility of solving practical-sized instances directly with a Mixed Integer Linear Programming solver, avoiding the implementation of complex methods based on column generation. In this survey, we present the theoretical foundations of pseudo-polynomial arc flow formulations by showing a relation between their networks and Dynamic Programming (DP). This relation allows a better understanding of the strength of these formulations, through a link with models obtained by Dantzig-Wolfe decomposition. The relation with DP also allows a new perspective to relate state-space relaxation methods for DP with arc flow models. We also present a dual point of view to contrast the linear relaxation of arc flow models with that of models based on paths and cycles. To conclude, we review the main solution methods and applications of arc flow models based on DP in several domains such as cutting, packing, scheduling, and routing.
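
    The DP view of arc flow formulations that the survey develops can be illustrated with a short sketch (an assumption-level example, not taken from the paper): for a one-dimensional cutting or packing problem with capacity W, a forward DP over partial capacities 0..W generates the nodes of the network, every feasible transition d -> d + w for an item of size w becomes an arc, and the arc flow model then places an integer flow variable on each such arc.

```python
# Building the pseudo-polynomial arc flow network from the DP over partial
# capacities (an illustrative sketch; data and naming are assumptions).

def arc_flow_network(W, sizes):
    """Return DP-reachable nodes, item arcs (tail, head, size), and loss arcs."""
    reachable = {0}
    item_arcs = []
    for d in range(W + 1):                 # forward DP over partial capacities
        if d not in reachable:
            continue
        for w in sorted(set(sizes)):
            if d + w <= W:
                reachable.add(d + w)
                item_arcs.append((d, d + w, w))   # place an item of size w at position d
    # loss arcs close each partially filled pattern at the sink node W
    loss_arcs = [(d, W) for d in reachable if 0 < d < W]
    return sorted(reachable), item_arcs, loss_arcs

nodes, item_arcs, loss_arcs = arc_flow_network(W=7, sizes=[2, 3, 5])
print(len(nodes), "nodes,", len(item_arcs), "item arcs,", len(loss_arcs), "loss arcs")
# In the arc flow model, an integer flow variable is attached to every arc;
# flow conservation at the inner nodes then reproduces the DP transitions.
```

    The loss arcs here jump from a partially filled state straight to the sink W; other variants in the literature use unit loss arcs (d, d + 1) instead, which changes the network but not the underlying DP relation.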

    A MATHEMATICAL FRAMEWORK FOR OPTIMIZING DISASTER RELIEF LOGISTICS

    In today's society, where disasters seem to be striking all corners of the globe, the importance of emergency management is undeniable. Much human loss and unnecessary destruction of infrastructure can be avoided with better planning and foresight. When a disaster strikes, various aid organizations often face significant problems in transporting large amounts of many different commodities, including food, clothing, medicine, medical supplies, machinery, and personnel, from several points of origin to a number of destinations in the disaster areas. The transportation of supplies and relief personnel must be done quickly and efficiently to maximize the survival rate of the affected population. The goal of this research is to develop a comprehensive model that describes the integrated logistics operations in response to natural disasters at the operational level. The proposed mathematical model integrates three main components. First, it controls the flow of several relief commodities from sources through the supply chain until they are delivered into the hands of recipients. Second, it considers a large-scale unconventional vehicle routing problem with mixed pickup and delivery schedules for multiple transportation modes. Third, following FEMA's complex logistics structure, a special facility location problem is considered that involves four layers of temporary facilities at the federal and state levels. Such an integrated model provides the opportunity for a centralized operation plan that can effectively eliminate delays and assign the limited resources in a way that is optimal for the entire system. The proposed model is a large-scale mixed integer program. To solve the model, two sets of heuristic algorithms are proposed: four heuristic approaches for the multi-echelon facility location problem and four heuristic algorithms for the general integer vehicle routing problem. Overall, the proposed heuristics could efficiently find optimal or near-optimal solutions in minutes of CPU time, whereas solving the same problems with a commercial solver required hours of computation. Numerical case studies and extensive sensitivity analyses were conducted to evaluate the properties of the model and the solution algorithms. The numerical analysis indicated the capability of the model to handle large-scale relief operations with adequate detail. The solution algorithms were tested on several randomly generated cases and showed robustness in solution quality as well as computation time.

    The production-assembly-distribution system design problem: modeling and solution approaches

    This dissertation, which consists of four parts, (i) presents a mixed integer programming model for the strategic design of an assembly system in the international business environment established by the North American Free Trade Agreement (NAFTA), with a focus on modeling the material flow network with assembly operations; (ii) compares different decomposition schemes and acceleration techniques to devise an effective branch-and-price solution approach; (iii) introduces a generalization of Dantzig-Wolfe Decomposition (DWD); and (iv) proposes a combination of dual-ascent and primal drop heuristics. The model deals with a broad set of design issues (bill-of-materials restrictions, international financial considerations, and material flows through the entire supply chain) using effective modeling devices. The first part focuses in particular on modeling material flows in such an assembly system. The second part studies several schemes for applying DWD to the production-assembly-distribution system design problem (PADSDP). Each scheme exploits selected embedded structures. The research objective is to enhance the rate of DWD convergence in application to PADSDP by formulating a rationale for decomposition, analyzing potential schemes, adopting acceleration techniques, and assessing the impacts of schemes and techniques computationally. Test results provide insights that may be relevant to other applications of DWD. The third part proposes a generalization of column generation, reformulating the master problem with fewer variables at the expense of adding more constraints; the subproblem structure does not change. It shows both analytically and computationally that the reformulation promotes faster convergence to an optimal solution in application to a linear program and to the relaxation of an integer program at each node of the branch-and-bound tree. Further, it shows that this reformulation subsumes and generalizes prior approaches that have been shown to improve the rate of convergence in special cases. The last part proposes two dual-ascent algorithms and uses each in combination with a primal drop heuristic to solve the uncapacitated PADSDP, which is formulated as a mixed integer program. Computational results indicate that one combined heuristic finds solutions within 0.15% of optimality in most cases and within reasonable time, an efficacy that suits it well for actual large-scale applications.
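
    The dual-ascent-plus-primal-drop idea from the last part can be sketched on plain uncapacitated facility location, a much simpler stand-in for the uncapacitated PADSDP treated in the dissertation; the toy data, the tolerances, and the greedy drop rule below are assumptions made for the illustration.

```python
# Dual ascent + primal "drop" heuristic on a toy uncapacitated facility
# location instance (an illustrative stand-in, not the dissertation's code).
f = [4.0, 6.0, 5.0]                  # facility opening costs
c = [[1.0, 3.0, 6.0],                # c[i][j]: cost of serving client i from facility j
     [2.0, 1.0, 4.0],
     [7.0, 2.0, 1.0],
     [5.0, 6.0, 2.0]]
m, n = len(c), len(f)

def total_cost(open_set):
    return (sum(f[j] for j in open_set)
            + sum(min(c[i][j] for j in open_set) for i in range(m)))

# --- dual ascent on the condensed dual:
# --- max sum_i v_i  s.t.  sum_i max(0, v_i - c[i][j]) <= f[j]  for every facility j
v = [min(row) for row in c]
slack = [f[j] - sum(max(0.0, v[i] - c[i][j]) for i in range(m)) for j in range(n)]
changed = True
while changed:
    changed = False
    for i in range(m):
        # raise v[i] up to the next breakpoint or until some slack is exhausted
        higher = [c[i][j] for j in range(n) if c[i][j] > v[i]]
        room = min(slack[j] for j in range(n) if c[i][j] <= v[i])
        step = min([room] + [t - v[i] for t in higher])
        if step > 1e-9:
            for j in range(n):
                if c[i][j] <= v[i]:
                    slack[j] -= step
            v[i] += step
            changed = True
print("dual lower bound:", sum(v))

# --- primal: open the facilities whose slack is exhausted, then greedily drop
open_set = {j for j in range(n) if slack[j] <= 1e-9} or {min(range(n), key=lambda j: f[j])}
improved = True
while improved and len(open_set) > 1:
    improved = False
    for j in sorted(open_set):
        if total_cost(open_set - {j}) < total_cost(open_set):
            open_set -= {j}
            improved = True
            break
print("heuristic solution: facilities", sorted(open_set), "cost", total_cost(open_set))
```

    The ascent value sum(v) is a feasible value of the condensed dual and hence a valid lower bound, so the gap to the drop-heuristic cost gives an immediate quality guarantee; on this toy instance the two coincide.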

    On High-Performance Benders-Decomposition-Based Exact Methods with Application to Mixed-Integer and Stochastic Problems

    Stochastic integer programming (SIP) combines the difficulty of uncertainty and non-convexity and constitutes a class of extremely challenging problems to solve. Efficiently solving SIP problems is of high importance due to their vast applicability. Therefore, the primary focus of this dissertation is on solution methods for SIPs. We consider two-stage SIPs and present several enhanced decomposition algorithms for solving them. Our main goal is to develop new decomposition schemes and several acceleration techniques to enhance the classical decomposition methods, which can lead to efficiently solving various SIP problems to optimality. In the first essay of this dissertation, we present a state-of-the-art survey of the Benders decomposition algorithm. We provide a taxonomy of the algorithmic enhancements and the acceleration strategies of this algorithm to synthesize the literature, and to identify shortcomings, trends, and potential research directions. In addition, we discuss the use of Benders decomposition to develop efficient (meta-)heuristics, describe the limitations of the classical algorithm, and present extensions enabling its application to a broader range of problems. Next, we develop various techniques to overcome some of the main shortfalls of the Benders decomposition algorithm. We propose the use of cutting planes, partial decomposition, heuristics, stronger cuts, and warm-start strategies to alleviate the numerical challenges arising from instabilities, primal inefficiencies, weak optimality/feasibility cuts, and weak linear relaxation. We test the proposed strategies on benchmark instances of stochastic network design problems. Numerical experiments illustrate the computational efficiency of the proposed techniques. In the third essay of this dissertation, we propose a new, high-performance decomposition approach, called the Benders dual decomposition method. The development of this method is based on a specific reformulation of the Benders subproblems, where local copies of the master variables are introduced and then priced out into the objective function. We show that the proposed method significantly alleviates the primal and dual shortfalls of the Benders decomposition method and that it is closely related to the Lagrangian dual decomposition method. Computational results on various SIP problems show the superiority of this method compared to the classical decomposition methods as well as CPLEX 12.7. Finally, we study the parallelization of the Benders decomposition method. The available parallel variants of this method implement a rigid synchronization among the master and slave processors and thus suffer from significant load imbalance when applied to SIP problems, mainly due to a hard mixed-integer master problem that can take hours to optimize. We therefore propose an asynchronous parallel Benders method in a branch-and-cut framework. Relaxing the synchronization requirements entails convergence and efficiency problems, which we address by introducing several acceleration techniques and search strategies. In particular, we propose the use of artificial subproblems, cut generation, cut aggregation, cut management, and cut propagation. The results indicate that our algorithm reaches higher speedup rates than the conventional synchronized methods and is several orders of magnitude faster than CPLEX 12.7.
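
    As a companion to this abstract, the sketch below runs one classical Benders loop on a toy two-stage problem with continuous recourse. It is a teaching-level stand-in, not the high-performance machinery of the thesis: the instance data, the brute-force master over the binary first stage (in place of a branch-and-cut master), and the assumption of relatively complete recourse (so only optimality cuts are generated) are all choices made for the example.

```python
# One classical Benders decomposition loop on a toy two-stage problem
# (illustrative data; brute-force master; relatively complete recourse,
# so only optimality cuts are needed).
import itertools
import numpy as np
from scipy.optimize import linprog

f = np.array([3.0, 1.0])             # first-stage (binary) opening costs
c = np.array([1.0, 2.0, 10.0])       # second-stage costs (x3 = expensive outsourcing)
# second stage:  W x >= h - T y,  x >= 0
W = np.array([[1.0, 1.0, 1.0],       # x1 + x2 + x3 >= 1          (demand)
              [-1.0, 0.0, 0.0],      # -x1      >= -y1  (x1 <= y1)
              [0.0, -1.0, 0.0]])     # -x2      >= -y2  (x2 <= y2)
h = np.array([1.0, 0.0, 0.0])
T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def subproblem(y):
    """min c x s.t. W x >= h - T y, x >= 0; returns value and dual vector pi >= 0."""
    rhs = h - T @ y
    res = linprog(c, A_ub=-W, b_ub=-rhs, method="highs")
    return res.fun, -res.ineqlin.marginals   # pi = -(marginals of the <= form)

cuts, best_ub = [], np.inf
for _ in range(20):
    # master: min f y + theta  s.t.  theta >= pi (h - T y) for every stored cut,
    # solved here by brute force over the binary first-stage vectors
    best = None
    for y in itertools.product([0.0, 1.0], repeat=2):
        y = np.array(y)
        theta = max([pi @ (h - T @ y) for pi in cuts], default=0.0)
        val = f @ y + max(theta, 0.0)
        if best is None or val < best[0]:
            best = (val, y)
    lb, y_hat = best
    q, pi = subproblem(y_hat)                # evaluate the candidate first stage
    best_ub = min(best_ub, f @ y_hat + q)
    if best_ub - lb <= 1e-6:
        break
    cuts.append(pi)                          # optimality cut: theta >= pi (h - T y)

print("optimal value:", best_ub, "with y =", y_hat)   # expected 3.0 with y = (0, 1)
```

    Every enhancement discussed in the abstract, from stronger cuts and partial decomposition to asynchronous parallelism, modifies some piece of this basic master / subproblem / cut loop.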