359 research outputs found

    Reusing optimal TSP solutions for locally modified input instances: Extended abstract

    Given an instance of an optimization problem together with an optimal solution, we consider the scenario in which this instance is modified locally. In graph problems, e.g., a single edge might be removed or added, or an edge weight might be varied, etc. For a problem U and such a local modification operation, let lm-U (local-modification-U) denote the resulting problem. The question is whether it is possible to exploit the additional knowledge of an optimal solution to the original instance or not, i.e., whether lm-U is computationally more tractable than U. Here, we give non-trivial examples both of problems where this is and of problems where this is not the case.
    4th IFIP International Conference on Theoretical Computer Science. Red de Universidades con Carreras en Informática (RedUNCI).
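
    To make the lm-U setting concrete, the following is a minimal Python sketch (illustrative names and data only, not taken from the paper) of an lm-TSP input: the weight of one edge of an optimally solved instance is altered, and the old optimal tour remains a feasible tour whose cost under the new weights bounds the new optimum from above.

```python
# Minimal sketch of the lm-TSP scenario described above.
# Assumptions: a complete graph given as a symmetric weight matrix, a tour is a
# permutation of the vertices; all names and data are illustrative only.

def tour_cost(weights, tour):
    """Cost of the closed tour under the given weight matrix."""
    return sum(weights[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def locally_modify(weights, u, v, new_weight):
    """Return a copy of the instance in which the weight of edge {u, v} is changed."""
    modified = [row[:] for row in weights]
    modified[u][v] = modified[v][u] = new_weight
    return modified

# An lm-TSP input consists of the modified instance together with an optimal
# tour of the *original* instance; that tour stays feasible, so its cost is an
# immediate upper bound that a reoptimization algorithm may try to improve.
weights = [[0, 2, 3, 4],
           [2, 0, 2, 3],
           [3, 2, 0, 2],
           [4, 3, 2, 0]]
old_optimal_tour = [0, 1, 2, 3]                 # assumed optimal for the original instance
modified = locally_modify(weights, 0, 1, 10)    # local modification: change edge {0, 1}
print(tour_cost(modified, old_optimal_tour))    # upper bound on the new optimum (18)
```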

    On the Approximability of TSP on Local Modifications of Optimally Solved Instances

    Given an instance of TSP together with an optimal solution, we consider the scenario in which this instance is modified locally, where a local modification consists in the alteration of the weight of a single edge. More generally, for a problem U, let LM-U (local-modification-U) denote the same problem as U, but in LM-U, we are also given an optimal solution to an instance from which the input instance can be derived by a local modification. The question is how to exploit this additional knowledge, i.e., how to devise better algorithms for LM-U than for U. Note that this need not be possible in all cases: the general problem LM-TSP is as hard as TSP itself, i.e., unless P=NP, there is no polynomial-time p(n)-approximation algorithm for LM-TSP for any polynomial p. Moreover, LM-TSP where inputs must satisfy the β-triangle inequality (LM-Δβ-TSP) remains NP-hard for all β>½. However, for LM-Δ-TSP (i.e., metric LM-TSP), we present an efficient 1.4-approximation algorithm. In other words, the additional information enables us to do better than if we simply used Christofides' algorithm on the modified input. Similarly, for all 1<β<3.34899, we achieve a better approximation ratio for LM-Δβ-TSP than for Δβ-TSP. For ½≤β<1, we show how to obtain an approximation ratio arbitrarily close to 1 for sufficiently large input graphs.
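
    For reference, the β-triangle inequality mentioned above is the standard parametrized version of the triangle inequality for an edge-weight function c; a minimal LaTeX rendering of the condition:

```latex
% The beta-triangle inequality: beta = 1 is the usual metric case (Delta-TSP),
% beta > 1 relaxes the inequality, and 1/2 <= beta < 1 sharpens it.
\[
  c(\{u,w\}) \;\le\; \beta \,\bigl( c(\{u,v\}) + c(\{v,w\}) \bigr)
  \qquad \text{for all pairwise distinct vertices } u, v, w.
\]
```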

    Approximation hardness of deadline-TSP reoptimization

    Given an instance of an optimization problem together with an optimal solution, we consider the scenario in which this instance is modified locally. In graph problems, e.g., a single edge might be removed or added, or an edge weight might be varied, etc. For a problem U and such a local modification operation, let lm-U (local-modification-U) denote the resulting problem. The question is whether it is possible to exploit the additional knowledge of an optimal solution to the original instance or not, i.e., whether lm-U is computationally more tractable than U. While positive examples are known, e.g., for metric TSP, we give some negative examples here: metric TSP with deadlines (time windows), if a single deadline or the cost of a single edge is modified, exhibits the same lower bounds on the approximability in these local-modification versions as those currently known for the original problem.

    Finding Optimal Solutions With Neighborly Help

    Can we efficiently compute optimal solutions to instances of a hard problem from optimal solutions to neighboring (i.e., locally modified) instances? For example, can we efficiently compute an optimal coloring for a graph from optimal colorings for all one-edge-deleted subgraphs? Studying such questions not only gives detailed insight into the structure of the problem itself, but also into the complexity of related problems; most notably graph theory's core notion of critical graphs (e.g., graphs whose chromatic number decreases under deletion of an arbitrary edge) and the complexity-theoretic notion of minimality problems (also called criticality problems, e.g., recognizing graphs that become 3-colorable when an arbitrary edge is deleted). We focus on two prototypical graph problems, Colorability and Vertex Cover. For example, we show that it is NP-hard to compute an optimal coloring for a graph from optimal colorings for all its one-vertex-deleted subgraphs, and that this remains true even when optimal solutions for all one-edge-deleted subgraphs are given. In contrast, computing an optimal coloring from all (or even just two) one-edge-added supergraphs is in P. We observe that Vertex Cover exhibits a remarkably different behavior, demonstrating the power of our model to delineate problems from each other more precisely on a structural level. Moreover, we provide a number of new complexity results for minimality and criticality problems. For example, we prove that Minimal-3-UnColorability is complete for DP (differences of NP sets), which was previously known only for the more amenable case of deleting vertices rather than edges. For Vertex Cover, we show that recognizing β-vertex-critical graphs is complete for Theta_2^p (parallel access to NP), obtaining the first completeness result for a criticality problem for this class.
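
    As a small illustration of why one-edge-added supergraphs help (a sketch under simplifying assumptions, not the paper's algorithm): any proper coloring of G + e is also proper for G, and adding a single edge raises the chromatic number by at most one, so an optimal coloring of a supergraph pins χ(G) down to one of two values.

```python
# Illustrative sketch only: bound chi(G) from an optimal coloring of G + e.
# A proper coloring of G + e is proper for G (upper bound), and adding one edge
# increases the chromatic number by at most one (lower bound).

def is_proper_coloring(edges, coloring):
    """Check that no edge joins two equally colored vertices."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def bounds_from_supergraph_coloring(edges_of_g, added_edge, supergraph_coloring):
    """Given an optimal coloring of G plus added_edge, chi(G) lies in {k - 1, k}."""
    assert is_proper_coloring(edges_of_g + [added_edge], supergraph_coloring)
    k = len(set(supergraph_coloring.values()))   # colors used by the supergraph coloring
    return k - 1, k

# Example: G is the path 0-1-2; adding the edge {0, 2} forms a triangle.
edges_of_g = [(0, 1), (1, 2)]
triangle_coloring = {0: 0, 1: 1, 2: 2}           # assumed optimal for the supergraph
print(bounds_from_supergraph_coloring(edges_of_g, (0, 2), triangle_coloring))  # (2, 3)
```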

    Reoptimization of the Shortest Common Superstring Problem

    A reoptimization problem describes the following scenario: given an instance of an optimization problem together with an optimal solution for it, we want to find a good solution for a locally modified instance. In this paper, we deal with reoptimization variants of the shortest common superstring problem (SCS) where the local modifications consist of adding or removing a single string. We show the NP-hardness of these reoptimization problems and design several approximation algorithms for them. First, we use a technique of iteratively applying any SCS algorithm to design an approximation algorithm for the reoptimization variant of adding a string whose approximation ratio is arbitrarily close to 8/5, and another algorithm for deleting a string with a ratio tending to 13/7. Both algorithms significantly improve over the best currently known SCS approximation ratio of 2.5. Additionally, this iteration technique can be used to design an improved SCS approximation algorithm (without reoptimization) if the input instance contains a long string, which might be of independent interest. However, these iterative algorithms are relatively slow. Thus, we present another, faster approximation algorithm for inserting a string which is based on cutting the given optimal solution and achieves an approximation ratio of 11/6. Moreover, we give some lower bounds on the approximation ratio which can be achieved by algorithms that use such cutting strategies.
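
    As a point of reference only (not one of the algorithms from the paper): when a string is added, the old optimal superstring is still a superstring of all old strings, and merging the new string onto it with maximum overlap yields a trivial feasible solution of the kind that cutting-based reoptimization algorithms aim to improve upon. A minimal Python sketch under these assumptions:

```python
# Illustrative baseline for the string-insertion variant of SCS reoptimization:
# merge the new string onto the old optimal superstring with maximum overlap.
# Names are illustrative; this is not the 11/6-approximation from the paper.

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def insert_string(old_superstring, s):
    """Feasible superstring for the instance obtained by adding the string s."""
    if s in old_superstring:
        return old_superstring
    left = overlap(old_superstring, s)     # append s after the old superstring
    right = overlap(s, old_superstring)    # or prepend it in front
    if left >= right:
        return old_superstring + s[left:]
    return s + old_superstring[right:]

# Example: old optimal superstring for {"abc", "bcd"} is "abcd"; the string "cde" is added.
print(insert_string("abcd", "cde"))        # -> "abcde"
```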