
    The stochastic vehicle routing problem: a literature review, part II: solution methods

    Building on the work of Gendreau et al. (Oper Res 44(3):469–477, 1996), and complementing the first part of this survey, we review the solution methods used over the past 20 years in the scientific literature on stochastic vehicle routing problems (SVRP). We describe the methods and indicate how they are applied to stochastic vehicle routing problems. Keywords: vehicle routing (VRP), stochastic programming, SVRP

    Reinforcement learning based local search for grouping problems: A case study on graph coloring

    Grouping problems aim to partition a set of items into multiple mutually disjoint subsets according to specific criteria and constraints. They cover a large class of important combinatorial optimization problems that are generally computationally difficult. In this paper, we propose a general solution approach for grouping problems, reinforcement learning based local search (RLS), which combines reinforcement learning techniques with descent-based local search. The viability of the proposed approach is verified on a well-known representative grouping problem (graph coloring), where a very simple descent-based coloring algorithm is applied. Experimental studies on popular DIMACS and COLOR02 benchmark graphs indicate that RLS achieves competitive performance compared to a number of well-known coloring algorithms.
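
    The RLS scheme lends itself to a compact illustration. The sketch below is a minimal reconstruction under assumptions, not the authors' code: each vertex keeps a probability vector over the k colors, a coloring is sampled from these vectors, repaired by descent-based local search, and the probabilities are then reinforced or penalized. The update factors alpha and beta and the loop structure are illustrative choices.

    import random

    def rls_coloring(edges, n, k, alpha=0.3, beta=0.3, iters=1000):
        # Adjacency lists for the n vertices.
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        # One probability vector over the k colors per vertex.
        prob = [[1.0 / k] * k for _ in range(n)]

        def conflicts(col):
            return sum(1 for u, v in edges if col[u] == col[v])

        best = None
        for _ in range(iters):
            # Sample a coloring from the current probability matrix.
            col = [random.choices(range(k), weights=prob[v])[0] for v in range(n)]
            # Descent local search: move each vertex to a least-conflicting color.
            improved = True
            while improved:
                improved = False
                for v in range(n):
                    counts = [0] * k
                    for u in adj[v]:
                        counts[col[u]] += 1
                    c = min(range(k), key=counts.__getitem__)
                    if counts[c] < counts[col[v]]:
                        col[v], improved = c, True
            # Reinforcement: reward the kept color of conflict-free vertices,
            # penalize it otherwise, then renormalize the vector.
            for v in range(n):
                ok = all(col[u] != col[v] for u in adj[v])
                delta = alpha * (1 - prob[v][col[v]]) if ok else -beta * prob[v][col[v]]
                prob[v][col[v]] += delta
                s = sum(prob[v])
                prob[v] = [p / s for p in prob[v]]
            if best is None or conflicts(col) < conflicts(best):
                best = col[:]
            if conflicts(best) == 0:
                break
        return best

    A conflict-free assignment is a legal k-coloring; rerunning with decreasing k until no such assignment is found gives an estimate of the chromatic number.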

    Matheuristics: using mathematics for heuristic design

    Matheuristics are heuristic algorithms based on mathematical tools, such as those provided by mathematical programming, that are structurally general enough to be applied to different problems with few adaptations to their abstract structure. The result can be metaheuristic hybrids with components derived from the mathematical model of the problems of interest, but the mathematical techniques themselves can also define general heuristic solution frameworks. In this paper, we focus our attention on mathematical programming and its contributions to developing effective heuristics. We briefly describe the available mathematical tools and then some matheuristic approaches, reporting representative examples from the literature. We also take the opportunity to provide some ideas for possible future development.
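
    To make the notion concrete, here is a hedged sketch of one classic matheuristic, local branching: the mathematical programming solver is used as a neighborhood-search engine by constraining a MIP to a Hamming ball around a known incumbent. The sketch uses the open-source PuLP modeler; the function interface and the radius k are assumptions of this illustration, not a prescription from the paper.

    import pulp

    def local_branching_step(model, x_vars, incumbent, k=5):
        # `model` is a pulp.LpProblem over binary variables `x_vars`;
        # `incumbent` maps variable names to a known feasible 0/1 solution.
        ones = [v for v in x_vars if incumbent[v.name] > 0.5]
        zeros = [v for v in x_vars if incumbent[v.name] <= 0.5]
        # Local-branching cut: at most k binaries may flip relative to the
        # incumbent, so the exact solver only explores a small neighborhood.
        model += (pulp.lpSum(1 - v for v in ones) + pulp.lpSum(zeros) <= k,
                  "local_branching")
        model.solve(pulp.PULP_CBC_CMD(msg=False))
        return {v.name: v.value() for v in x_vars}

    Iterating this step from each new incumbent, and enlarging or re-centering the ball when no improvement is found, yields the full local-branching heuristic.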

    Branch-and-Price for Prescriptive Contagion Analytics

    Predictive contagion models are ubiquitous in epidemiology, social sciences, engineering, and management. This paper formulates a prescriptive contagion analytics model in which a decision-maker allocates shared resources across multiple segments of a population, each governed by continuous-time dynamics. We define four real-world problems under this umbrella: vaccine distribution, deployment of vaccination centers, content promotion, and congestion mitigation. These problems feature a large-scale mixed-integer non-convex optimization structure with constraints governed by ordinary differential equations, combining the challenges of discrete optimization, non-linear optimization, and continuous-time system dynamics. This paper develops a branch-and-price methodology for prescriptive contagion analytics based on: (i) a set partitioning reformulation; (ii) a column generation decomposition; (iii) a state-clustering algorithm for discrete-decision continuous-state dynamic programming; and (iv) a tri-partite branching scheme to circumvent non-linearities. Extensive experiments show that the algorithm scales to very large and otherwise-intractable instances, outperforming state-of-the-art benchmarks. Our methodology provides practical benefits in contagion systems; in particular, it can increase the effectiveness of a vaccination campaign by an estimated 12-70%, resulting in 7,000 to 12,000 extra lives saved over a three-month horizon mirroring the COVID-19 pandemic. We provide an open-source implementation of the methodology in an online repository to enable replication.
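
    The column generation component at the heart of branch-and-price can be sketched generically. The loop below is a minimal illustration under assumptions, not the paper's implementation: the pricing oracle `price` and the column encoding are placeholders. A restricted set-partitioning master LP is re-solved, its duals are handed to the pricing oracle, and returned negative-reduced-cost columns are added until none exists.

    import pulp

    def column_generation(tasks, initial_cols, price, max_iter=100):
        # Each column is a (cost, frozenset_of_covered_tasks) pair; `price(duals)`
        # returns such a column with negative reduced cost, or None. Task labels
        # are assumed to be simple identifiers usable in constraint names.
        cols = list(initial_cols)
        for _ in range(max_iter):
            # Restricted master: LP relaxation over the current columns only.
            rmp = pulp.LpProblem("rmp", pulp.LpMinimize)
            lam = [pulp.LpVariable(f"lam_{j}", lowBound=0) for j in range(len(cols))]
            rmp += pulp.lpSum(c * l for (c, _), l in zip(cols, lam))
            for t in tasks:
                rmp += (pulp.lpSum(l for (_, s), l in zip(cols, lam) if t in s) == 1,
                        f"cover_{t}")
            rmp.solve(pulp.PULP_CBC_CMD(msg=False))
            duals = {t: rmp.constraints[f"cover_{t}"].pi for t in tasks}
            # Pricing: problem-specific oracle searches for an improving column.
            new_col = price(duals)
            if new_col is None:
                break  # LP optimum reached: no column can improve the master
            cols.append(new_col)
        return cols

    Branch-and-price embeds this loop at every node of a branching tree; the paper's contributions (state clustering, tri-partite branching) address the pricing and branching layers that this sketch abstracts away.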

    Multi-Modal Mean-Fields via Cardinality-Based Clamping

    Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community as an efficient way to solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field by a weighted mixture of such distributions, which similarly minimizes the KL-divergence to the true posterior. We can perform this minimization efficiently by introducing two new ideas: conditioning on groups of variables instead of single ones, and selecting those groups using a parameter of the conditional random field potentials that we identify with the temperature in the statistical-physics sense. Our extension of the clamping method proposed in previous works allows us both to produce a more descriptive approximation of the true posterior and, inspired by the diverse-MAP paradigm, to fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on mean fields.
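
    For concreteness, the sketch below shows naive mean-field updates for a pairwise model together with the basic single-variable clamping step that the paper generalizes to groups of variables. The shapes, the shared pairwise potential, and the crude mixture weights are simplifying assumptions of this illustration, not the authors' formulation.

    import numpy as np

    def mean_field(unary, pairwise, edges, iters=50):
        # unary: (n, L) log-potentials; pairwise: symmetric (L, L) log-potential
        # shared by all edges; q factorizes as a product of per-variable marginals.
        n, L = unary.shape
        q = np.full((n, L), 1.0 / L)
        nbrs = [[] for _ in range(n)]
        for i, j in edges:
            nbrs[i].append(j)
            nbrs[j].append(i)
        for _ in range(iters):
            for i in range(n):
                # Coordinate update: expected log-potential under neighbors' q.
                s = unary[i] + sum(pairwise @ q[j] for j in nbrs[i])
                s -= s.max()  # for numerical stability
                q[i] = np.exp(s) / np.exp(s).sum()
        return q

    def clamped_mixture(unary, pairwise, edges, var):
        # Clamping: force `var` into each of its L states, run mean-field on the
        # restricted model, and weight the resulting modes into a mixture.
        n, L = unary.shape
        comps, weights = [], []
        for s in range(L):
            u = unary.copy()
            u[var] = -1e9    # forbid all states of `var`...
            u[var, s] = 0.0  # ...except state s
            comps.append(mean_field(u, pairwise, edges))
            weights.append(np.exp(unary[var, s]))  # crude weight, illustrative only
        w = np.array(weights)
        return comps, w / w.sum()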

    Integral simplex algorithm with decomposition (Algorithme du simplexe en nombres entiers avec décomposition)

    ABSTRACT: The general objective of this thesis is to develop an efficient algorithm for solving the Set Partitioning Problem (SPP). The SPP is a well-known integer programming problem. Its goal is to partition a set of tasks (e.g. plane flights, bus trip segments, ...) into subsets (vehicle routes or sets of tasks performed by a person) such that the selected subsets have minimum total cost while covering each task exactly once. The SPP is usually solved by branch-and-price. This method can be excessively slow on difficult problems of large size. The paradigm behind the method is "dual", or "all or nothing", in the sense that an integer solution is generally obtained very late or at the end of the solution process for large instances. In practice, obtaining an integer solution quickly is highly valued. Also, it is very common in practice to solve a problem for which a solution with good primal information is already known, and which we want, at least, to improve. The branch-and-price method is not suited to take advantage of such a situation. A "primal" approach fits the solution of the SPP better (e.g. bus driver scheduling). This approach, called the Integral Simplex algorithm, consists of starting from a known initial solution and performing a series of small improvements so as to produce a sequence of solutions with decreasing costs, converging towards an optimal solution.
    Several authors have, in the past, proposed algorithms for solving the SPP using a primal paradigm. Unfortunately, none of these algorithms is effective enough to be used in practice. The main factor behind this is the highly degenerate nature of the SPP. For each solution, there is a very large number of bases that identify transitions to neighboring solutions. Degeneracy implies that it is difficult, and even combinatorial, to move from one integer solution to another; and these algorithms do not offer effective techniques to overcome it. So, specifically, the aim of this thesis is to introduce an implementation of the Integral Simplex that is effective against degeneracy in practice. This means that the intended implementation must be able to solve SPPs of large size; it must also be able to benefit from a given initial solution and produce, iteratively and in reasonably short time, improved solutions. To do this, we first use ideas from an algorithm called the Improved Primal Simplex (IPS), which helps the primal simplex algorithm cope effectively with degeneracy when solving linear programs. Thus, we propose an algorithm inspired by IPS and adapted to the context of the SPP. The algorithm, called Integral Simplex Using Decomposition, starts from an initial solution with good primal information. As in IPS, it iteratively improves the current solution by decomposing the original problem into two sub-problems: a first sub-problem, called the reduced problem, which is a small, completely non-degenerate SPP, improves the solution by considering only the columns said to be compatible with the current solution. A second sub-problem, called the complementary problem, considers only the columns that are incompatible with the current solution. The complementary problem finds a descent direction, combining several variables, that guarantees a better solution, though not necessarily an integer one. The feasible domain of the complementary problem, where all integrality constraints are relaxed, is a cone of feasible directions. An additional constraint, called the normalization constraint, is added to ensure that the problem is bounded. The directions found are minimal in the sense that they contain no feasible sub-direction. This minimality feature, combined with a partial pricing technique called multi-phase, helps the complementary problem find directions that directly lead to integer solutions in the majority of iterations. In the remaining cases, where the directions lead to fractional solutions, a quick deep branching often leads to an integer solution. We tested the new algorithm on bus driver scheduling problems having 1600 rows and 570000 columns. The algorithm reaches an optimal, or near optimal, solution for the majority of these problems; solution times represent a fraction of what a commercial solver such as CPLEX would have taken. The latter does not even find a first feasible solution within a 10-hour runtime period on some of those problems. We believe that this first version of the algorithm is the first implementation of the integral simplex method able to solve large SPP instances within practically acceptable times. However, it still has some limitations, such as the need to develop a complex branching scheme to improve the quality of the solutions found. This is due to the fact that the complementary problem has a structure that is difficult for CPLEX to exploit.
    Another limitation of this implementation is that it does not support supplementary constraints that are not of the partitioning type. In a second paper, we improve our algorithm by generalizing certain aspects of its design. Our goal in this step is to avoid implementing a complex and exhaustive branching scheme while allowing our algorithm to handle supplementary constraints. We therefore revise the way in which the algorithm decomposes the problem and propose a dynamic decomposition method where the integrality of the solution is controlled within the reduced problem instead of the complementary problem. Thus, the complementary problem is no longer responsible for finding a direction leading to an integer solution but only a descent direction; and the reduced problem handles the integrality of the solution, searching around this descent direction, by delegating the branching to the commercial solver. With this dynamic decomposition, the algorithm reaches an optimal or near-optimal solution for all instances, while maintaining execution times comparable to those of the previous version. In a third paper, we target improving the performance of the algorithm. We aim to make the algorithm run faster without losing the benefits introduced by the second paper. We observe that the minimality of the descent directions required by the complementary problem favors the integrality of subsequent solutions, but also slows the algorithm down, since it forces it to take many small steps, towards adjacent solutions only, on the way to its final solution. We therefore change the model of the complementary problem to let it find non-minimal descent directions. The new model is thus able to move directly to non-adjacent integer solutions with significant cost improvements, in a very limited number of iterations that does not exceed two for the large instances in our tests. An optimal solution is always reached and the execution time is reduced by at least a factor of five on all instances; this factor is about ten for large instances. With these three papers, we believe we have introduced an effective integral simplex algorithm that produces high-quality solutions in short running times.
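
    The decomposition loop described above can be condensed into a short skeleton. The sketch below is a paraphrase under assumptions rather than the thesis code: columns are frozensets of tasks, the compatibility test uses the linear-combination characterization (a column is compatible when, for each column of the current solution, it covers all of its tasks or none), and the two sub-problem solvers are abstracted as placeholder callbacks.

    def is_compatible(col, solution_cols):
        # Compatible columns keep the partition structure: they cover each
        # current solution column's task set either entirely or not at all.
        return all(p <= col or not (p & col) for p in solution_cols)

    def isud_skeleton(solution, columns, solve_reduced, solve_complementary):
        # `solution` and `columns` are collections of frozensets of tasks.
        # `solve_reduced` improves the solution over the compatible columns (a
        # small SPP); `solve_complementary` returns a descent direction over the
        # incompatible columns, or None at optimality. Both are placeholders.
        while True:
            compatible = [c for c in columns if is_compatible(c, solution)]
            incompatible = [c for c in columns if not is_compatible(c, solution)]
            solution = solve_reduced(solution, compatible)
            direction = solve_complementary(solution, incompatible)
            if direction is None:
                return solution  # no descent direction: current solution optimal
            # With minimal directions this step usually lands directly on a
            # better integer solution; otherwise a quick branching repairs it.
            solution = direction.apply(solution)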
    • 

    corecore