
    15th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2016, June 22-24, 2016, Reykjavik, Iceland


    Succinct Data Structures for Parameterized Pattern Matching and Related Problems

    Let T be a fixed text-string of length n and P be a varying pattern-string of length |P| <= n. Both T and P contain characters from a totally ordered alphabet Sigma of size sigma <= n. The suffix tree is the ubiquitous data structure for answering a pattern matching query: report all the positions i in T such that T[i + k - 1] = P[k], 1 <= k <= |P|. Compressed data structures support pattern matching queries using much less space than the suffix tree, mainly by relying on a crucial property of the leaves in the tree. Unfortunately, in many suffix tree variants (such as the parameterized suffix tree, order-preserving suffix tree, and 2-dimensional suffix tree), this property does not hold. Consequently, compressed representations of these suffix tree variants have been elusive. We present the first compressed data structures for two important variants of the pattern matching problem: (1) Parameterized Matching -- report a position i in T if T[i + k - 1] = f(P[k]), 1 <= k <= |P|, for a one-to-one function f that renames the characters in P to the characters in T[i, i + |P| - 1], and (2) Order-preserving Matching -- report a position i in T if T[i + j - 1] and T[i + k - 1] have the same relative order as that of P[j] and P[k], 1 <= j < k <= |P|. For each of these two problems, the existing suffix tree variant requires O(n*log n) bits of space and answers a query in O(|P|*log sigma + occ) time, where occ is the number of starting positions where a match exists. We present data structures that require O(n*log sigma) bits of space and answer a query in O((|P| + occ) poly(log n)) time. As a byproduct, we obtain compressed data structures for a few other variants, as well as introduce two new techniques (of independent interest) for designing compressed data structures for pattern matching.
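
    To make the two query definitions concrete, here is a minimal Python sketch of naive scans that report the same occurrences. The function names and the brute-force approach are illustrative assumptions; they do not reflect the paper's compressed data structures, which answer these queries far more efficiently.

        def parameterized_occurrences(T, P):
            """Positions i (0-based) where a one-to-one renaming f maps P onto T[i:i+|P|]."""
            occ = []
            for i in range(len(T) - len(P) + 1):
                f = {}        # renaming of pattern characters to text characters
                used = set()  # text characters already assigned, to keep f one-to-one
                ok = True
                for k, p in enumerate(P):
                    t = T[i + k]
                    if p in f:
                        if f[p] != t:
                            ok = False
                            break
                    elif t in used:
                        ok = False
                        break
                    else:
                        f[p] = t
                        used.add(t)
                if ok:
                    occ.append(i)
            return occ

        def order_preserving_occurrences(T, P):
            """Positions i (0-based) where T[i+j] and T[i+k] compare exactly as P[j] and P[k] do."""
            m = len(P)
            occ = []
            for i in range(len(T) - m + 1):
                W = T[i:i + m]
                if all((P[j] < P[k]) == (W[j] < W[k]) and (P[j] == P[k]) == (W[j] == W[k])
                       for j in range(m) for k in range(j + 1, m)):
                    occ.append(i)
            return occ

        # Examples:
        # parameterized_occurrences("abab", "xy")                  -> [0, 1, 2]
        # order_preserving_occurrences([1, 4, 2, 5, 3], [1, 3, 2]) -> [0, 2]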

    Algorithms for Integer Programming and Allocation

    The first part of the thesis contains pseudo-polynomial algorithms for integer linear programs (ILPs). When certain parameters of an ILP are fixed, that is, they are treated as constants in the running time, it is possible to obtain algorithms whose running time is pseudo-polynomial in the largest absolute value of an entry of the ILP's matrix. We present a tight bound on the pseudo-polynomial running time needed to solve ILPs with a constant number of constraints. Furthermore, we study an extension of this model to MILPs (linear programs that contain both fractional and integer variables). We then move to n-fold ILPs, a class of ILPs with block-structured matrices, and present the first algorithm for n-folds whose running time is near-linear in the dimensions of the ILP. The second part is about scheduling on non-identical machines, more precisely, restricted allocation problems. Here a set of jobs has to be allocated to a set of machines; however, every job comes with a subset of machines and may only be assigned to a machine from this subset. We consider the objectives of minimizing the makespan and of maximizing the minimum machine load. We study the integrality gap of a particularly strong linear programming relaxation, the configuration LP, for variations of this problem. The integrality gap can be seen as a measure of the strength of an LP relaxation. A local search technique can be used to bound this value; however, the proofs are generally non-constructive, i.e., they do not immediately yield efficient approximation algorithms. We derive better upper bounds on the integrality gap of the problems Restricted Assignment, Restricted Santa Claus, and Graph Balancing. Furthermore, we give the first (constructive) quasi-polynomial time approximation algorithm for Restricted Assignment with an approximation ratio strictly less than 2.
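
    To make the Restricted Assignment objective concrete, the following brute-force Python sketch computes the optimal makespan of a tiny instance by trying every feasible assignment. The function name and input format are assumptions made for this illustration; the thesis's configuration-LP and local-search machinery is not reflected here.

        from itertools import product

        def restricted_assignment_makespan(sizes, allowed, num_machines):
            """Minimum makespan over all assignments that respect each job's allowed-machine set."""
            best = float("inf")
            jobs = range(len(sizes))
            # Enumerate every assignment job -> allowed machine (exponential; tiny inputs only).
            for assignment in product(*(sorted(allowed[j]) for j in jobs)):
                loads = [0] * num_machines
                for j, m in zip(jobs, assignment):
                    loads[m] += sizes[j]
                best = min(best, max(loads))
            return best

        # Example: two machines, job 2 restricted to machine 0.
        # restricted_assignment_makespan([3, 2, 2], [{0, 1}, {0, 1}, {0}], 2) -> 4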

    Notes on Randomized Algorithms

    Lecture notes for the Yale Computer Science course CPSC 469/569 Randomized Algorithms. Suitable for use as a supplementary text for an introductory graduate or advanced undergraduate course on randomized algorithms. Discusses tools from probability theory, including random variables and expectations, union bound arguments, concentration bounds, applications of martingales and Markov chains, and the LovĂĄsz Local Lemma. Algorithmic topics include analysis of classic randomized algorithms such as Quicksort and Hoare's FIND, randomized tree data structures, hashing, Markov chain Monte Carlo sampling, randomized approximate counting, derandomization, quantum computing, and some examples of randomized distributed algorithms.
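
    As a small companion example, here is a randomized quickselect (Hoare's FIND), one of the classic algorithms the notes analyze. This sketch is our own minimal Python version, not the notes' presentation; its expected O(n) running time is the kind of guarantee the probabilistic tools above are used to prove.

        import random

        def quickselect(a, k):
            """Return the k-th smallest element (0-based) of the list a, in expected O(n) time."""
            assert 0 <= k < len(a)
            while True:
                pivot = random.choice(a)              # random pivot drives the expected-time analysis
                less = [x for x in a if x < pivot]
                equal_count = sum(1 for x in a if x == pivot)
                if k < len(less):
                    a = less                          # answer lies among the smaller elements
                elif k < len(less) + equal_count:
                    return pivot                      # the pivot itself is the k-th smallest
                else:
                    k -= len(less) + equal_count
                    a = [x for x in a if x > pivot]   # recurse on the larger elements

        # Example: quickselect([7, 1, 5, 3, 9], 2) -> 5 (the median)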