
    Robust long-term production planning


    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061) : Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062) : Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071) : Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081) : Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091) : Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single-node fixed-charge network flow set with constant capacities. This set arises from the relaxation of more general mixed-integer sets, such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
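    As a hedged illustration (the abstract does not state the formulation, so the notation below is an assumption), a single-node fixed-charge network flow set with a constant capacity C typically takes the form

        \[
        X = \Big\{ (x, y) \in \mathbb{R}_{+}^{n} \times \{0,1\}^{n} :
            \sum_{j=1}^{n} x_j \le b, \;\; x_j \le C\, y_j, \; j = 1, \dots, n \Big\},
        \]

    where x_j is the flow on arc j, y_j indicates whether arc j incurs its fixed charge, b is the demand at the single node, and C is the common capacity.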

    How to pack trapezoids: exact and evolutionary algorithms

    The purpose of this paper is twofold. First, we describe an exact polynomial-time algorithm for the pair sequencing problem and show how this method can be used to pack fixed-height trapezoids into a single bin such that inter-item wastage is minimised. We then examine how this algorithm can be combined with bespoke evolutionary and local search methods to tackle the multiple-bin version of this problem, which is closely related to one-dimensional bin packing. In the course of doing this, a number of ideas surrounding recombination, diversity, and genetic repair are also introduced and analysed.
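    As a minimal sketch of the underlying geometry (not the paper's pair-sequencing algorithm; the trapezoid encoding below is an assumption), the inter-item wastage between two fixed-height trapezoids placed side by side can be computed from the offsets of their slanted edges at the bottom and top:

        # A hedged sketch: compute the wastage when a fixed-height trapezoid
        # is packed immediately to the right of another.  Each trapezoid is
        # modelled by the x-offsets of its left and right edges at the
        # bottom and at the top of the bin.
        from dataclasses import dataclass

        @dataclass
        class Trapezoid:
            left_bottom: float   # x-offset of left edge at the bottom
            left_top: float      # x-offset of left edge at the top
            right_bottom: float  # x-offset of right edge at the bottom
            right_top: float     # x-offset of right edge at the top

        def pack_next_to(a: Trapezoid, b: Trapezoid, height: float) -> tuple[float, float]:
            """Return (shift, wastage) when b is placed just right of a.

            The edges are straight, so non-overlap only needs checking at
            the bottom and top; the area trapped between a's right edge and
            b's left edge is then exact by the trapezoidal rule.
            """
            # Smallest horizontal shift of b's origin that avoids overlap.
            shift = max(a.right_bottom - b.left_bottom, a.right_top - b.left_top)
            gap_bottom = shift + b.left_bottom - a.right_bottom
            gap_top = shift + b.left_top - a.right_top
            wastage = height * (gap_bottom + gap_top) / 2.0
            return shift, wastage

        # Example: a right-leaning trapezoid followed by a left-leaning one.
        a = Trapezoid(0.0, 1.0, 4.0, 5.0)
        b = Trapezoid(0.0, -1.0, 4.0, 3.0)
        print(pack_next_to(a, b, height=2.0))  # shift 6.0, wastage 2.0

    A pair-sequencing method would then order the items so that the sum of these pairwise wastages along the sequence is small.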

    Detection of Core-Periphery Structure in Networks Using Spectral Methods and Geodesic Paths

    We introduce several novel and computationally efficient methods for detecting "core-periphery structure" in networks. Core-periphery structure is a type of mesoscale structure that includes densely connected core vertices and sparsely connected peripheral vertices. Core vertices tend to be well connected both among themselves and to peripheral vertices, which tend not to be well connected to other vertices. Our first method, which is based on transportation in networks, aggregates information from many geodesic paths in a network and yields a score for each vertex that reflects the likelihood that the vertex is a core vertex. Our second method is based on a low-rank approximation of a network's adjacency matrix, which can often be expressed as a tensor-product matrix. Our third approach uses the bottom eigenvector of the random-walk Laplacian to infer a coreness score and a classification into core and peripheral vertices. We also design an objective function to (1) help classify vertices as core or peripheral and (2) provide a goodness-of-fit criterion for classifications into core versus peripheral vertices. To examine the performance of our methods, we apply our algorithms to both synthetically generated networks and a variety of networks constructed from real-world data sets.

    Comment: This article is part of EJAM's December 2016 special issue on "Network Analysis and Modelling" (available at https://www.cambridge.org/core/journals/european-journal-of-applied-mathematics/issue/journal-ejm-volume-27-issue-6/D245C89CABF55DBF573BB412F7651ADB).
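    As a hedged sketch of the third approach (illustrative only: which eigenvector to read off, and how to treat the trivial constant eigenvector of eigenvalue 0, are modelling details settled in the paper, not here), a coreness score can be inferred from the spectrum of the random-walk Laplacian L_rw = I - D^{-1}A:

        # A hedged sketch, not the authors' exact method: score vertices
        # using an eigenvector at the bottom of the random-walk Laplacian's
        # spectrum, then threshold the scores into core and periphery.
        import numpy as np

        def laplacian_coreness(adj: np.ndarray) -> np.ndarray:
            """Per-vertex scores from the random-walk Laplacian L_rw = I - D^{-1}A."""
            degrees = adj.sum(axis=1)
            d_inv = np.diag(1.0 / np.maximum(degrees, 1e-12))
            l_rw = np.eye(adj.shape[0]) - d_inv @ adj
            eigvals, eigvecs = np.linalg.eig(l_rw)  # L_rw is not symmetric
            order = np.argsort(eigvals.real)
            # Bottom of the spectrum; in practice the trivial constant
            # eigenvector (eigenvalue 0) may need to be skipped.
            return eigvecs[:, order[0]].real

        def classify(scores: np.ndarray, threshold: float) -> np.ndarray:
            """1 = core, 0 = periphery, by thresholding the coreness scores."""
            return (scores >= threshold).astype(int)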

    Models and algorithms for decomposition problems

    This thesis deals with decomposition both as a solution method and as a problem in its own right. A decomposition approach can be very effective for mathematical problems with a specific structure, namely a coefficient matrix that is sparse and block-diagonalizable. However, this kind of structure may not be evident from the most natural formulation of the problem, so the coefficient matrix may be preprocessed by solving a structure detection problem to determine whether a decomposition method can be applied successfully.

    The thesis therefore studies the k-Vertex Cut problem, that is, the problem of finding a minimum subset of nodes whose removal disconnects a graph into at least k components; it models relevant applications in matrix decomposition for solving systems of equations by parallel computing. The capacitated k-Vertex Separator problem, in turn, asks for a minimum-cardinality subset of vertices whose deletion disconnects a given graph into at most k shores, each no larger than a given capacity; this problem is likewise of great importance for matrix decomposition algorithms.

    The thesis also addresses the Chance-Constrained Mathematical Program, a significant example in which decomposition techniques can be applied successfully. This is a class of stochastic optimization problems in which the feasible region depends on the realization of a random variable, and the solution must optimize a given objective function while belonging to the feasible region with a probability above a given value; a decomposition approach for this problem is introduced. Finally, the thesis addresses the Fractional Knapsack Problem with Penalties, a variant of the knapsack problem in which items can be split at the expense of a penalty depending on the fractional quantity.
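    For reference, a generic chance-constrained program of the kind described above can be written as (notation assumed here for illustration, not taken from the thesis)

        \[
        \min_{x \in X} \; c^{\top} x
        \quad \text{s.t.} \quad
        \mathbb{P}\big( A(\xi)\, x \ge b(\xi) \big) \ge 1 - \varepsilon,
        \]

    where \xi is the random variable on which the feasible region depends and \varepsilon \in (0, 1) bounds the admissible probability of violating the constraints.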

    The bi-objective travelling salesman problem with profits and its connection to computer networks.

    This is an interdisciplinary work in Computer Science and Operational Research. As is well known, these two important research fields are closely connected, and one of the main areas where this interplay is most evident is networking. Recent decades have seen constant growth in computer network connections of every kind, so the need for advanced algorithms that help optimize network performance has become extremely relevant. Classical optimization-based approaches have been studied and applied in depth for a long time; however, evolving technology calls for more flexible and advanced algorithmic approaches to model increasingly complex network configurations.

    In this thesis we study an extension of the well-known Traveling Salesman Problem (TSP): the Traveling Salesman Problem with Profits (TSPP). In this generalization, a profit is associated with each vertex, and it is not necessary to visit all vertices. The goal is to determine a route through a subset of nodes that simultaneously minimizes the travel cost and maximizes the collected profit. The TSPP models the problem of sending a piece of information through a network where, in addition to the sending costs, it is also important to consider what "profit" this information can gain during its routing. Because of this formulation, the natural way to tackle the TSPP is with multiobjective optimization algorithms. Within this context, the aim of this work is to study new ways to solve the problem in both the exact and the approximate settings, to provide instruments that can help solve it, and to offer experimental insights into feasible networking instances.
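    As a hedged sketch of the bi-objective machinery (illustrative only; `dist` and `profit` are assumed inputs, and the thesis' exact and approximate algorithms are not reproduced), each route is scored by the pair (travel cost, collected profit) and only Pareto-optimal routes are retained:

        # A hedged sketch of bi-objective evaluation for the TSPP: a route
        # is scored by (cost, profit), and we keep the Pareto front, i.e.
        # routes not dominated in both objectives at once.
        def evaluate(route, dist, profit):
            """Cost of the closed tour plus the total profit it collects."""
            cost = sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))
            gain = sum(profit[v] for v in route)
            return cost, gain

        def dominates(p, q):
            """p dominates q if it costs no more, earns no less, and differs."""
            return p[0] <= q[0] and p[1] >= q[1] and p != q

        def pareto_front(scored_routes):
            """Keep the routes whose (cost, profit) pair is non-dominated."""
            return [(r, s) for r, s in scored_routes
                    if not any(dominates(t, s) for _, t in scored_routes)]

        # Example with three candidate routes already scored (cost, profit):
        candidates = [([0, 1, 2], (10.0, 7.0)),
                      ([0, 2], (6.0, 5.0)),
                      ([0, 1], (9.0, 4.0))]
        print(pareto_front(candidates))  # the third route is dominated

    An exact method would produce the full Pareto front; an approximate method maintains a non-dominated archive like this one while exploring.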

    Learning to compare nodes in branch and bound with graph neural networks

    In computer science, solving NP-hard problems in a reasonable time is of great importance, for example in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as mixed integer linear optimization problems, which we solve using the branch-and-bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations in such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten these linear relaxations. These policies can significantly influence solving performance.

    This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing, over a collection of given problems, which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network to determine, by supervised learning, the probability that a sub-space contains the optimal solution. This choice of model is particularly useful because it can adapt to problems of different sizes without modification. We show that our approach beats those of our competitors, simpler machine learning models trained on solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems.
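    As a hedged sketch of how such a policy plugs into branch and bound (illustrative only; `score_fn` is a hypothetical stand-in for the trained graph neural network applied to a node's bipartite-graph encoding), the open nodes can be kept in a priority queue ordered by the predicted probability of containing the optimum:

        # A hedged sketch of learned node selection in branch and bound.
        # `score_fn` stands in for the trained model: it maps an open node
        # to the predicted probability that its sub-space holds the optimum.
        import heapq
        from typing import Any, Callable

        class LearnedNodeSelector:
            def __init__(self, score_fn: Callable[[Any], float]):
                self.score_fn = score_fn
                self._heap = []       # max-heap via negated scores
                self._counter = 0     # tie-breaker for equal scores

            def push(self, node: Any) -> None:
                """Score the node and queue it for selection."""
                score = self.score_fn(node)
                heapq.heappush(self._heap, (-score, self._counter, node))
                self._counter += 1

            def pop(self) -> Any:
                """Return the open node the policy considers most promising."""
                return heapq.heappop(self._heap)[2]

        # Usage sketch: nodes are pushed as branching creates them.
        selector = LearnedNodeSelector(score_fn=lambda node: node["p_opt"])
        selector.push({"id": 1, "p_opt": 0.3})
        selector.push({"id": 2, "p_opt": 0.9})
        print(selector.pop()["id"])  # 2: highest predicted probability first

    The solver would pop the most promising node, branch on it, push the children, and repeat until bounds prove optimality or the queue empties.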

    Optimizing the Efficiency of the United States Organ Allocation System through Region Reorganization

    Allocating organs for transplantation has been controversial in the United States for decades. Two main allocation approaches developed in the past are (1) to allocate organs to patients with higher priority at the same locale, and (2) to allocate organs to patients with the greatest medical need regardless of their locations. To balance these two allocation preferences, the U.S. organ transplantation and allocation network has recently implemented a three-tier hierarchical allocation system, dividing the U.S. into 11 regions composed of 59 Organ Procurement Organizations (OPOs). At present, a procured organ is offered first at the local level, and then regionally and nationally. The purpose of allocating organs at the regional level is to increase the likelihood that a donor-recipient match exists, compared to the former allocation approach, and to increase the quality of the match, compared to the latter approach. However, the question of which regional configuration is the most efficient remains unanswered.

    This dissertation develops several integer programming models to find the most efficient set of regions. Unlike previous efforts, our model addresses efficient region design for the entire hierarchical system given the existing allocation policy. To measure allocation efficiency, we use the intra-regional transplant cardinality. Two estimates are developed in this dissertation. One is a population-based estimate; the other is based on the situation where there is only one waiting list nationwide. The latter estimate refines the former in that it captures the effect of national-level allocation and the heterogeneity of clinical and demographic characteristics among donors and patients. To model national-level allocation, we apply a modeling technique similar to spill-and-recapture in the airline fleet assignment problem. A clinically based simulation model is used to estimate several necessary parameters in the analytic model and to verify the optimal regional configuration obtained from it.

    The resulting optimal region design problem is a large-scale set-partitioning problem in which there are too many columns to handle explicitly. Given this challenge, we adapt branch and price. We develop a mixed-integer programming pricing problem that is both theoretically and practically hard to solve. To alleviate this computational difficulty, we apply geographic decomposition and solve many smaller-scale pricing problems based on pre-specified subsets of OPOs instead of one big pricing problem. When solving each smaller-scale pricing problem, we also generate multiple "promising" regions that are not necessarily optimal for the pricing problem. In addition, we attempt to develop more efficient solutions for the pricing problem by studying alternative formulations and developing strong valid inequalities. The computational studies in this dissertation use clinical data and show that (1) regional reorganization is beneficial, and (2) our branch-and-price application is effective in solving the optimal region design problem.
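    For reference, the set-partitioning master problem underlying this region design can be written as (generic notation assumed for illustration, not copied from the dissertation)

        \[
        \max \sum_{r \in \mathcal{R}} c_r \lambda_r
        \quad \text{s.t.} \quad
        \sum_{r \in \mathcal{R} : \, o \in r} \lambda_r = 1 \;\; \forall o \in \mathcal{O},
        \qquad \lambda_r \in \{0, 1\} \;\; \forall r \in \mathcal{R},
        \]

    where \mathcal{O} is the set of 59 OPOs, \mathcal{R} enumerates candidate regions (subsets of OPOs), c_r estimates the intra-regional transplant cardinality of region r, and \lambda_r = 1 selects region r. Branch and price avoids enumerating \mathcal{R} explicitly by generating columns on demand through the pricing problem.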