
    An Investigation of the Recoverable Robust Assignment Problem


    Robust Assignments via Ear Decompositions and Randomized Rounding

    Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling, where a set of tasks needs to be assigned to an available set of machines or personnel (resources) in such a way that every task has an assigned resource and no two tasks share the same resource. In its nominal form, the resulting computational problem is the assignment problem on general bipartite graphs. This paper deals with a robust variant of the assignment problem modeling situations where certain edges in the corresponding graph are vulnerable and may become unavailable after a solution has been chosen. The goal is to choose a minimum-cost collection of edges such that if any vulnerable edge becomes unavailable, the remaining part of the solution contains an assignment of all tasks. We present approximation results and hardness proofs for this type of problem, and establish several connections to well-known concepts from matching theory, robust optimization and LP-based techniques. Comment: Full version of ICALP 2016 paper.
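
    The robustness requirement above can be made concrete with a small brute-force check (an illustrative sketch with assumed names and an assumed (task, resource) edge encoding, not the paper's algorithm): for every vulnerable edge in a chosen edge set, delete it and verify that the remaining edges still contain an assignment of all tasks, using a standard augmenting-path matching routine.

        # Illustrative sketch only; edges are (task, resource) pairs.
        def max_matching_size(tasks, adj):
            """Maximum matching size via Kuhn's augmenting-path algorithm;
            adj[t] is the set of resources usable by task t."""
            match_of = {}                      # resource -> task currently assigned to it

            def try_assign(t, seen):
                for r in adj.get(t, ()):
                    if r in seen:
                        continue
                    seen.add(r)
                    # Resource free, or its current task can be re-routed elsewhere.
                    if r not in match_of or try_assign(match_of[r], seen):
                        match_of[r] = t
                        return True
                return False

            return sum(try_assign(t, set()) for t in tasks)

        def is_robust(tasks, chosen_edges, vulnerable_edges):
            """True if every single vulnerable-edge failure leaves an assignment of all tasks."""
            def covers_all(edges):
                adj = {}
                for t, r in edges:
                    adj.setdefault(t, set()).add(r)
                return max_matching_size(tasks, adj) == len(tasks)

            return covers_all(chosen_edges) and all(
                covers_all(chosen_edges - {e}) for e in vulnerable_edges & chosen_edges
            )

        # Tiny example: edge (t1, r1) is vulnerable, but (t1, r3) backs it up.
        tasks = {"t1", "t2"}
        chosen = {("t1", "r1"), ("t1", "r3"), ("t2", "r2")}
        print(is_robust(tasks, chosen, {("t1", "r1")}))    # True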

    How to Secure Matchings Against Edge Failures

    Suppose we are given a bipartite graph that admits a perfect matching, and an adversary may delete any edge from the graph with the intention of destroying all perfect matchings. We consider the task of adding a minimum-cost edge set to the graph such that the adversary never wins. We show that this problem is equivalent to covering a digraph with non-trivial strongly connected components at minimal cost. We provide efficient exact and approximation algorithms for this task. In particular, for the unit-cost problem, we give a log_2 n-factor approximation algorithm and a polynomial-time algorithm for chordal bipartite graphs. Furthermore, we give a fixed-parameter algorithm for the problem parameterized by the treewidth of the input graph. For general non-negative weights we give tight upper and lower approximation bounds relative to the Directed Steiner Forest problem. Additionally, we prove a dichotomy theorem characterizing minor-closed graph classes which allow for a polynomial-time algorithm. To obtain our results, we exploit a close relation to the classical Strong Connectivity Augmentation problem as well as to directed Steiner problems.
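
    The stated equivalence can be sketched concretely (an illustrative reconstruction with assumed names, not the paper's algorithm): contract each edge of the given perfect matching into a single node, turn every non-matching edge into an arc between contracted nodes, and report the nodes that lie on no directed cycle. Those are exactly the matching edges whose single failure currently destroys all perfect matchings, so the augmentation task amounts to adding edges until every node lies in a non-trivial strongly connected component.

        from collections import deque

        def fragile_matching_edges(matching, non_matching_edges):
            """matching: list of (a, b) pairs of a perfect matching; non_matching_edges:
            remaining edges (a, b) with a on the A-side and b on the B-side."""
            pair_of_a = {a: i for i, (a, _) in enumerate(matching)}
            pair_of_b = {b: i for i, (_, b) in enumerate(matching)}

            # Arc from the pair containing the B-endpoint to the pair containing the A-endpoint.
            out = {i: set() for i in range(len(matching))}
            for a, b in non_matching_edges:
                if pair_of_a[a] != pair_of_b[b]:
                    out[pair_of_b[b]].add(pair_of_a[a])

            def on_directed_cycle(u):
                # BFS from u's out-neighbours; u lies on a cycle iff u is reached again.
                queue, seen = deque(out[u]), set(out[u])
                while queue:
                    v = queue.popleft()
                    if v == u:
                        return True
                    for w in out[v] - seen:
                        seen.add(w)
                        queue.append(w)
                return False

            return [matching[u] for u in out if not on_directed_cycle(u)]

        # A 4-cycle on the first two pairs plus a disjoint matching edge (a3, b3):
        # only (a3, b3) lies on no alternating cycle, so only its failure is fatal.
        matching = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
        extra = [("a1", "b2"), ("a2", "b1")]
        print(fragile_matching_edges(matching, extra))     # [('a3', 'b3')]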

    Maintaining Perfect Matchings at Low Cost

    The min-cost matching problem suffers from being very sensitive to small changes of the input. Even in a simple setting, e.g., when the costs come from the metric on the line, adding two nodes to the input might change the optimal solution completely. On the other hand, one expects that small changes in the input should incur only small changes to the constructed solution, measured as the number of modified edges. We introduce a two-stage model in which we study the trade-off between quality and robustness of solutions. In the first stage we are given a set of nodes in a metric space and we must compute a perfect matching. In the second stage 2k new nodes appear and we must adapt the solution to a perfect matching for the new instance. We say that an algorithm is (alpha, beta)-robust if the solutions constructed in both stages are alpha-approximate with respect to min-cost perfect matchings, and if the number of edges deleted from the first-stage matching is at most beta k. Hence, alpha measures the quality of the algorithm and beta its robustness. In this setting we aim to balance both measures by deriving algorithms for constant alpha and beta. We show that there exists an algorithm that is (3,1)-robust for any metric if one knows the number 2k of arriving nodes in advance. For the case that k is unknown the situation is significantly more involved. We study this setting under the metric on the line and devise a (10,2)-robust algorithm that constructs a solution with a recursive structure that carefully balances cost and redundancy.
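
    To make the two measures concrete, consider the naive baseline on the line metric that simply recomputes an optimal matching from scratch when the new nodes arrive (a minimal sketch, not the (3,1)- or (10,2)-robust algorithms above; on a line, a min-cost perfect matching pairs consecutive points after sorting). Recomputation keeps alpha = 1 but gives no control over beta, which is exactly the tension the two-stage model balances.

        def line_min_cost_matching(points):
            # On the line, pairing consecutive points after sorting is optimal.
            pts = sorted(points)
            return {(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)}

        def cost(matching):
            return sum(abs(a - b) for a, b in matching)

        first_stage_points = [0, 10]
        first = line_min_cost_matching(first_stage_points)       # {(0, 10)}, cost 10

        second_stage_points = first_stage_points + [4, 5]        # 2k = 2 new nodes (k = 1)
        second = line_min_cost_matching(second_stage_points)     # {(0, 4), (5, 10)}, cost 9

        deleted = len(first - second)                            # first-stage edges dropped
        print(cost(first), cost(second), deleted)                # 10 9 1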

    Sparse Recovery of Positive Signals with Minimal Expansion

    We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications, such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than ℓ1 minimization. However, to date, constructions based on expanders have required very high expansion coefficients which can potentially make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with much smaller expansion coefficient. We present a necessary and sufficient condition for ℓ1 optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternative algorithms to ℓ1 minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible and that therefore our construction is tight. We finally present a novel recovery algorithm that exploits expansion and is much faster than ℓ1 optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity. Comment: 25 pages, submitted for publication
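
    A toy sketch of the recovery setup (assumed parameters; the random sparse 0/1 matrix below merely stands in for an expander adjacency matrix and is not the perturbed construction analysed in the paper): measure a non-negative sparse vector with a matrix that has a few ones per column, and recover it by ℓ1 minimization over the non-negative orthant, i.e., the linear program min sum(x) subject to Ax = y, x >= 0.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, m, d, k = 40, 20, 3, 2          # signal dim, measurements, ones per column, sparsity

        # Sparse measurement matrix with d ones in each column.
        A = np.zeros((m, n))
        for col in range(n):
            A[rng.choice(m, size=d, replace=False), col] = 1.0

        # Non-negative k-sparse ground truth and its measurements.
        x_true = np.zeros(n)
        x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
        y = A @ x_true

        # l1 minimization over the non-negative orthant: min sum(x) s.t. Ax = y, x >= 0.
        res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
        print(np.allclose(res.x, x_true, atol=1e-6))       # typically True at this sparsity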

    An Approximation Algorithm for the Exact Matching Problem in Bipartite Graphs

    In 1982 Papadimitriou and Yannakakis introduced the Exact Matching problem, in which given a red and blue edge-colored graph G and an integer k one has to decide whether there exists a perfect matching in G with exactly k red edges. Even though a randomized polynomial-time algorithm for this problem was quickly found a few years later, it is still unknown today whether a deterministic polynomial-time algorithm exists. This makes the Exact Matching problem an important candidate to test the RP=P hypothesis. In this paper we focus on approximating Exact Matching. While there exists a simple algorithm that computes in deterministic polynomial time an almost perfect matching with exactly k red edges, not a lot of work focuses on computing perfect matchings with almost k red edges. In fact such an algorithm for bipartite graphs running in deterministic polynomial time was published only recently (STACS'23). It outputs a perfect matching with k' red edges with the guarantee that 0.5k ≤ k' ≤ 1.5k. In the present paper we aim at approximating the number of red edges without exceeding the limit of k red edges. We construct a deterministic polynomial-time algorithm, which on bipartite graphs computes a perfect matching with k' red edges such that k/3 ≤ k' ≤ k.
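
    To pin down the decision problem on a toy instance (an exponential-time brute force with assumed names; the paper's contribution is a deterministic polynomial-time approximation, not this enumeration): enumerate all perfect matchings of a small bipartite graph and check whether one of them uses exactly k red edges.

        from itertools import permutations

        def exact_matching_exists(left, right, red_edges, blue_edges, k):
            """Is there a perfect matching with exactly k red edges? (brute force)"""
            edges = set(red_edges) | set(blue_edges)
            red = set(red_edges)
            for perm in permutations(right):
                matching = list(zip(left, perm))
                if all(e in edges for e in matching) and sum(e in red for e in matching) == k:
                    return True
            return False

        left, right = ["u1", "u2"], ["v1", "v2"]
        red_edges = [("u1", "v1")]
        blue_edges = [("u1", "v2"), ("u2", "v1"), ("u2", "v2")]
        print(exact_matching_exists(left, right, red_edges, blue_edges, 1))   # True
        print(exact_matching_exists(left, right, red_edges, blue_edges, 2))   # False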

    Bulk-robust assignment problems: hardness, approximability and algorithms

    This thesis studies robust assignment problems with a focus on computational complexity. Assignment problems are well-studied combinatorial optimization problems with numerous practical applications, for instance in production planning. Classical approaches to optimization expect the input data for a problem to be given precisely. In contrast, real-life optimization problems are modeled using forecasts, resulting in uncertain problem parameters. This fact can be taken into account using the framework of robust optimization. An instance of the classical assignment problem is represented using a bipartite graph accompanied by a cost function. The goal is to find a minimum-cost assignment, i.e., a set of resources (edges or nodes in the graph) defining a maximum matching. Most models for robust assignment problems suggested in the literature capture only uncertainty in the costs, i.e., the task is to find an assignment minimizing the cost in a worst-case scenario. The contribution of this thesis is the introduction and investigation of the Robust Assignment Problem (RAP), which models edge and node failures while the costs are deterministic. A scenario is defined by a set of resources that may fail simultaneously. If a scenario emerges, the corresponding resources are deleted from the graph. RAP seeks to find a set of resources of minimal cost which is robust against all possible incidents, i.e., a set of resources containing an assignment for all scenarios. In production planning, for example, lack of materials needed to complete an order can be encoded as an edge failure and production line maintenance corresponds to a node failure. The main findings of this thesis are hardness of approximation and NP-hardness results for both versions of RAP, even in the case of single edge (or node) failures. These results are complemented by approximation algorithms matching the theoretical lower bounds asymptotically. Additionally, we study a new related problem concerning k-robust matchings. A perfect matching in a graph is k-robust if the graph remains perfectly matchable after the deletion of any k matching edges from the graph. We address the following question: How many edges have to be added to a graph to make a fixed perfect matching k-robust? We show that, in general, this problem is as hard as both aforementioned variants of RAP. From an application point of view, this result implies that robustification of an existing infrastructure is not easier than designing a new one from scratch.
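
    The k-robustness notion above can be checked directly on small instances (a brute-force sketch with assumed names, exponential in k and the graph size; not one of the thesis algorithms): for every set of k matching edges, delete it and test whether the remaining graph still has a perfect matching.

        from itertools import combinations

        def has_perfect_matching(left, adj):
            """Backtracking test of whether every left vertex can be matched."""
            def extend(i, used):
                if i == len(left):
                    return True
                return any(r not in used and extend(i + 1, used | {r})
                           for r in adj.get(left[i], ()))
            return extend(0, frozenset())

        def is_k_robust(left, edges, matching, k):
            """True if the graph stays perfectly matchable after deleting any k matching edges."""
            for removed in combinations(matching, k):
                remaining = set(edges) - set(removed)
                adj = {}
                for a, b in remaining:
                    adj.setdefault(a, set()).add(b)
                if not has_perfect_matching(left, adj):
                    return False
            return True

        left = ["a1", "a2"]
        matching = [("a1", "b1"), ("a2", "b2")]
        cycle = matching + [("a1", "b2"), ("a2", "b1")]    # alternating cycle present
        path = matching + [("a1", "b2")]                   # a2 depends entirely on b2
        print(is_k_robust(left, cycle, matching, 1))       # True
        print(is_k_robust(left, path, matching, 1))        # False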