
    A 2-2/3 Approximation for the Shortest Superstring Problem

    Given a collection of strings S={s_1, ..., s_n} over an alphabet \Sigma, a superstring \alpha of S is a string containing each s_i as a substring; that is, for each i, 1 \le i \le n, \alpha contains a block of |s_i| consecutive characters that match s_i exactly. The shortest superstring problem is the problem of finding a superstring \alpha of minimum length. The shortest superstring problem has applications in both data compression and computational biology. In data compression, the problem is part of a general model of string compression proposed by Gallant, Maier and Storer (JCSS '80). Much of the recent interest in the problem is due to its application to DNA sequence assembly. The problem has been shown to be NP-hard; in fact, it was shown by Blum et al. (JACM '94) to be MAX SNP-hard. The first O(1)-approximation was also due to Blum et al., who gave an algorithm that always returns a superstring no more than 3 times the length of an optimal solution. Several researchers have published results that improve on the approximation ratio; of these, the best previous result is our algorithm ShortString, which achieves a 2 3/4-approximation (WADS '95). We present our new algorithm, G-ShortString, which achieves a ratio of 2 2/3. It generalizes the ShortString algorithm, but the analysis differs substantially from that of ShortString. Our previous work identified classes of strings that have a nested periodic structure, and which must be present in the worst case for our algorithms. We introduced machinery to describe these strings and proved strong structural properties about them. In this paper we extend this study to strings that exhibit a more relaxed form of the same structure, and we use this understanding to obtain our improved result.
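
    The ShortString and G-ShortString algorithms themselves are not reproduced here. As a minimal point of reference, the sketch below (hypothetical names, not from the paper) implements the classic greedy merge heuristic for the shortest superstring problem: repeatedly merge the two strings with the largest overlap. This heuristic is conjectured, but not proven, to be a 2-approximation.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the pair of strings with the largest overlap."""
    # Drop duplicates and strings already contained in another string.
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    s = list(dict.fromkeys(s))
    while len(s) > 1:
        best = (-1, 0, 1)               # (overlap length, index i, index j)
        for i, a in enumerate(s):
            for j, b in enumerate(s):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = s[i] + s[j][k:]        # glue s[j] onto s[i], sharing the overlap
        s = [x for idx, x in enumerate(s) if idx not in (i, j)] + [merged]
    return s[0] if s else ""

print(greedy_superstring(["cde", "abcd", "deab"]))  # -> "abcdeab"
```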

    Reoptimization of the Shortest Common Superstring Problem

    A reoptimization problem describes the following scenario: given an instance of an optimization problem together with an optimal solution for it, we want to find a good solution for a locally modified instance. In this paper, we deal with reoptimization variants of the shortest common superstring problem (SCS) where the local modifications consist of adding or removing a single string. We show the NP-hardness of these reoptimization problems and design several approximation algorithms for them. First, we use a technique of iteratively using any SCS algorithm to design an approximation algorithm for the reoptimization variant of adding a string whose approximation ratio is arbitrarily close to 8/5 and another algorithm for deleting a string with a ratio tending to 13/7. Both algorithms significantly improve over the best currently known SCS approximation ratio of 2.5. Additionally, this iteration technique can be used to design an improved SCS approximation algorithm (without reoptimization) if the input instance contains a long string, which might be of independent interest. However, these iterative algorithms are relatively slow. Thus, we present another, faster approximation algorithm for inserting a string which is based on cutting the given optimal solution and achieves an approximation ratio of 11/6. Moreover, we give some lower bounds on the approximation ratio which can be achieved by algorithms that use such cutting strategies.
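
    The paper's iterative and cutting-based algorithms are not reproduced here. Purely to make the insertion variant concrete, the sketch below shows the most naive reuse strategy: keep the given optimal superstring and attach the added string at whichever end shares the larger overlap. All names are illustrative, and no approximation guarantee is claimed for this baseline.

```python
def naive_insert(old_superstring, new_string):
    """Naive baseline for SCS reoptimization when a string is added
    (illustrative only; not the cutting-based 11/6-approximation)."""
    if new_string in old_superstring:
        return old_superstring              # already covered
    def overlap(a, b):
        # longest suffix of a that is a prefix of b
        for k in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:k]):
                return k
        return 0
    # Attach the new string at whichever end shares the larger overlap.
    k_right = overlap(old_superstring, new_string)
    k_left = overlap(new_string, old_superstring)
    if k_right >= k_left:
        return old_superstring + new_string[k_right:]
    return new_string + old_superstring[k_left:]

print(naive_insert("abcde", "deaf"))  # -> "abcdeaf"
```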

    On Approximability of Bounded Degree Instances of Selected Optimization Problems

    In order to cope with the approximation hardness of an underlying optimization problem, it is advantageous to consider specific families of instances with properties that can be exploited to obtain efficient approximation algorithms for the restricted version of the problem with improved performance guarantees. In this thesis, we investigate the approximation complexity of selected NP-hard optimization problems restricted to instances with bounded degree, occurrence or weight parameter. Specifically, we consider the family of dense instances, where typically the average degree is bounded from below by some function of the size of the instance. Complementarily, we examine the family of sparse instances, in which the average degree is bounded from above by some fixed constant. We focus on developing new methods for proving explicit approximation hardness results for general as well as for restricted instances. The first part of the thesis contributes to the systematic investigation of the VERTEX COVER problem in k-hypergraphs and k-partite k-hypergraphs with density and regularity constraints. We design efficient approximation algorithms for the problems with improved performance guarantees as compared to the general case. On the other hand, we prove the optimality of our approximation upper bounds under the Unique Games Conjecture or a variant. In the second part of the thesis, we study mainly the approximation hardness of restricted instances of selected global optimization problems. We establish improved or in some cases the first inapproximability thresholds for the problems considered in this thesis, such as the METRIC DIMENSION problem restricted to graphs with maximum degree 3 and the (1,2)-STEINER TREE problem. We introduce a new reduction method for proving explicit approximation lower bounds for problems that are related to the TRAVELING SALESPERSON (TSP) problem. In particular, we prove the best inapproximability thresholds known to date for the general METRIC TSP problem, the ASYMMETRIC TSP problem, the SHORTEST SUPERSTRING problem, the MAXIMUM TSP problem and TSP problems with bounded metrics.

    Approximating (k,\ell)-center clustering for curves

    The Euclidean k-center problem is a classical problem that has been extensively studied in computer science. Given a set \mathcal{G} of n points in Euclidean space, the problem is to determine a set \mathcal{C} of k centers (not necessarily part of \mathcal{G}) such that the maximum distance between a point in \mathcal{G} and its nearest neighbor in \mathcal{C} is minimized. In this paper we study the corresponding (k,\ell)-center problem for polygonal curves under the Fréchet distance, that is, given a set \mathcal{G} of n polygonal curves in \mathbb{R}^d, each of complexity m, determine a set \mathcal{C} of k polygonal curves in \mathbb{R}^d, each of complexity \ell, such that the maximum Fréchet distance of a curve in \mathcal{G} to its closest curve in \mathcal{C} is minimized. In this paper, we substantially extend and improve the known approximation bounds for curves in dimension 2 and higher. We show that, if \ell is part of the input, then there is no polynomial-time approximation scheme unless P=NP. Our constructions yield different bounds for one- and two-dimensional curves and the discrete and continuous Fréchet distance. In the case of the discrete Fréchet distance on two-dimensional curves, we show hardness of approximation within a factor close to 2.598. This result also holds when k=1, and the NP-hardness extends to the case \ell=\infty, i.e., for the problem of computing the minimum-enclosing ball under the Fréchet distance. Finally, we observe that a careful adaptation of Gonzalez' algorithm in combination with a curve simplification yields a 3-approximation in any dimension, provided that an optimal simplification can be computed exactly. We conclude that our approximation bounds are close to being tight.
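
    To make the two ingredients mentioned at the end of the abstract concrete, the sketch below gives the standard dynamic program for the discrete Fréchet distance together with a plain Gonzalez-style farthest-first traversal over curves. It deliberately omits the curve-simplification step the paper combines with Gonzalez' algorithm (centers here are input curves, not curves of complexity \ell), and all function names are illustrative.

```python
from itertools import product
from math import dist, inf

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P and Q
    (lists of points), via the standard dynamic program."""
    n, m = len(P), len(Q)
    D = [[inf] * m for _ in range(n)]
    for i, j in product(range(n), range(m)):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            prev = 0.0
        elif i == 0:
            prev = D[0][j - 1]
        elif j == 0:
            prev = D[i - 1][0]
        else:
            prev = min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        D[i][j] = max(d, prev)
    return D[-1][-1]

def gonzalez_curves(curves, k):
    """Plain farthest-first traversal for k-center over curves, using the
    input curves themselves as candidate centers (no simplification)."""
    centers = [curves[0]]
    while len(centers) < min(k, len(curves)):
        # pick the curve farthest from its nearest current center
        far = max(curves,
                  key=lambda c: min(discrete_frechet(c, z) for z in centers))
        centers.append(far)
    return centers
```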

    Approximation algorithms for the shortest common superstring problem

    The object of the shortest common superstring problem (SCS) is to find the shortest possible string that contains every string in a given set as a substring. As the problem is NP-complete, approximation algorithms are of interest. The value of an approximate solution to SCS is normally taken to be its length, and we seek algorithms that make the length as small as possible. A different measure is given by the sum of the overlaps between consecutive strings in a candidate solution. When considering this measure, the object is to find solutions that make it as large as possible. These two measures offer different ways of viewing the problem. While the two viewpoints are equivalent with respect to optimal solutions, they differ with respect to approximate solutions. We describe several approximation algorithms that produce solutions that are always within a factor of two of optimum with respect to the overlap measure. We also describe an efficient implementation of one of these, using McCreight's compact suffix tree construction algorithm. The worst-case running time is O(m log n) for small alphabets, where m is the sum of the lengths of all the strings in the set and n is the number of strings. For large alphabets, the algorithm can be implemented in O(m log m) time by using Sleator and Tarjan's lexicographic splay tree data structure.
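
    The relation between the two measures is simple: for a fixed ordering of the strings (with no string a substring of another), the length of the resulting superstring equals the total length of the strings minus the sum of consecutive overlaps, so minimizing length and maximizing overlap coincide at the optimum. The sketch below (illustrative names, repeating the overlap helper from the first sketch) computes both measures for a given ordering.

```python
def overlap(a, b):
    """Longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def measures(order):
    """Superstring length and total overlap for a fixed ordering of strings."""
    total_overlap = sum(overlap(a, b) for a, b in zip(order, order[1:]))
    length = sum(len(s) for s in order) - total_overlap
    return length, total_overlap

# length = sum of |s_i| minus total overlap: minimizing one maximizes the other.
print(measures(["abcd", "cde", "deab"]))  # -> (7, 4)
```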