18 research outputs found

    Triangles, Long Paths, and Covered Sets

    In Chapter 2, we consider a generalization of the well-known Maker-Breaker triangle game to uniform hypergraphs, in which Maker tries to build a triangle by choosing one edge in each round and Breaker tries to prevent her from doing so by choosing q edges in each round. The main result is the analysis of a new Breaker strategy using the potential-function method introduced by Glazik and Srivastav (2019). Both the lower and the upper bound on the threshold bias are of order Θ(n^{3/2}), so they are asymptotically optimal; the constant for the lower bound is 2-o(1) and for the upper bound it is 3√2. In Chapter 3, we describe another Maker-Breaker game, the P_3-game, in which Maker tries to build a path of length 3. First, we show that the methods of Chapter 2 are not applicable in this scenario and give an intuition why that might be the case. Then, we give a simpler counting argument to bound the threshold bias. In Chapter 4, we consider the longest path problem, a classic NP-hard problem that arises in many contexts. Our motivation to investigate this problem in a big-data context was genome assembly, where a long path in a graph constructed from the reads of a genome potentially represents a long contiguous sequence of the genome. We give a semi-streaming algorithm whose results are competitive with those of algorithms that have no restriction on the amount of memory. In Chapter 5, we investigate the b-SetMultiCover problem, a classic combinatorial problem which generalizes the set cover problem. Using an LP relaxation and an analysis based on the bounded differences inequality of C. McDiarmid (1989), we show that there is a strong concentration around the expectation.
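
    To make the Chapter 5 approach concrete, here is a minimal sketch of LP-based randomized rounding for b-SetMultiCover. The solver call, the boost factor alpha, and the toy instance are illustrative assumptions, not the thesis's exact algorithm; the point is that flipping any single set's coin changes the cost by at most that set's weight, which is the bounded-differences condition McDiarmid's inequality needs for concentration around the expectation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    def lp_round_multicover(A, b, c, alpha=2.0):
        """A: 0/1 element-by-set incidence matrix; element i must be covered b[i] times."""
        n = A.shape[1]
        # LP relaxation: min c.x  subject to  A x >= b,  0 <= x <= 1
        res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, 1)] * n, method="highs")
        # Independent randomized rounding, boosted by alpha
        x = (rng.random(n) < np.minimum(1.0, alpha * res.x)).astype(int)
        return x, res.fun

    # Toy instance (an assumption): 4 elements, 5 sets, every element demanded twice
    A = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 0],
                  [0, 0, 1, 1, 1]])
    b = np.array([2, 2, 2, 2])
    x, lp = lp_round_multicover(A, b, c=np.ones(5))
    print("LP cost:", lp, "rounded cost:", int(x.sum()), "feasible:", bool((A @ x >= b).all()))
    ```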

    Randomized Approximation for the Matching and Vertex Cover Problems in Hypergraphs: Complexity and Algorithms

    This thesis studies the design and mathematical analysis of randomized approximation algorithms for the hitting set and b-matching problems in hypergraphs. We present a randomized algorithm for the hitting set problem based on linear programming. The analysis of the randomized algorithm rests upon the probabilistic method, more precisely on concentration inequalities for sums of independent random variables together with martingale-based inequalities such as the bounded differences inequality, which is derived from the Azuma inequality. In combination with combinatorial arguments, we achieve three new approximation guarantees for different instance classes that improve upon the known approximation results for the problem (Krivelevich (1997), Halperin (2001)). We then analyze the complexity of the b-matching problem in hypergraphs and obtain two new results. We give a polynomial-time reduction from an instance of a suitable problem to an instance of the b-matching problem and prove a non-approximability ratio for the problem in l-uniform hypergraphs. This generalizes the result of Safra et al. (2006) from b = 1 to b in O(l/log(l)): Safra et al. showed that the 1-matching problem in l-uniform hypergraphs cannot be approximated in polynomial time within a ratio of O(l/log(l)), unless P = NP. Moreover, we show that the b-matching problem in l-uniform hypergraphs with bounded vertex degree admits no polynomial-time approximation scheme (PTAS), unless P = NP.
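
    As an illustration of the LP-guided rounding scheme described above, the sketch below picks each vertex independently with probability scaled up from a fractional optimum, then repairs any hyperedge the random draw missed. The fractional solution, the boost factor, and the instance are stand-ins, not the thesis's algorithm or its analysis.

    ```python
    import random

    def round_hitting_set(edges, n, x_frac, boost=1.5, seed=1):
        rng = random.Random(seed)
        chosen = {v for v in range(n) if rng.random() < min(1.0, boost * x_frac[v])}
        # Repair step: an edge left unhit is hit with its fractionally heaviest
        # vertex; this happens rarely once boost * x_frac covers each edge well.
        for e in edges:
            if not chosen & e:
                chosen.add(max(e, key=lambda v: x_frac[v]))
        return chosen

    edges = [frozenset(e) for e in ([0, 1, 2], [2, 3], [1, 4], [3, 4, 5])]
    x_frac = [0.0, 0.5, 0.5, 0.5, 0.5, 0.0]   # a feasible fractional hitting set
    print(sorted(round_hitting_set(edges, 6, x_frac)))
    ```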

    Derandomizing Concentration Inequalities with Dependencies and Their Combinatorial Applications

    Both in combinatorics and in the design and analysis of randomized algorithms for combinatorial optimization problems, we often use the famous bounded differences inequality by C. McDiarmid (1989), which is based on the martingale inequality by K. Azuma (1967), to show a positive probability of success. For sums of independent random variables, the inequalities of Chernoff (1952) and Hoeffding (1963) can be used and can be efficiently derandomized, i.e. we can construct the required event in deterministic, polynomial time (Srivastav and Stangier 1996). With such an algorithm one can construct the sought combinatorial structure or design an efficient deterministic algorithm from the probabilistic existence result or the randomized algorithm. The derandomization of C. McDiarmid's bounded differences inequality was an open problem. The main result in Chapter 3 is an efficient derandomization of the bounded differences inequality, where the time required to compute the conditional expectation of the objective function enters the complexity. Chapters 4 through 7 demonstrate the generality and power of the derandomization framework developed in Chapter 3. In Chapter 5, we derandomize Maker's random strategy in the Maker-Breaker subgraph game given by Bednarska and Luczak (2000), which is fundamental for the field and was analyzed with the concentration inequality of Janson, Luczak and Rucinski. Since we use the bounded differences inequality instead, it is necessary to give a new proof of the existence of subgraphs in G(n,M) random graphs (Chapter 4). In Chapter 6, we derandomize the two-stage randomized algorithm for the set-multicover problem by El Ouali, Munstermann and Srivastav (2014). In Chapter 7, we show that the algorithm of Bansal, Caprara and Sviridenko (2009) for the multidimensional bin packing problem can be elegantly derandomized with our framework, whereas the authors use a potential-function-based approach that leads to a rather complex analysis. In Chapter 8, we analyze the constrained hypergraph coloring problem given in Ahuja and Srivastav (2002), which generalizes both the property-B problem for the non-monochromatic 2-coloring of hypergraphs and the multidimensional bin packing problem, using the bounded differences inequality instead of the LovĂĄsz local lemma; we also derandomize the algorithm using our framework. In Chapter 9, we turn to Janson's (1994) generalization of the well-known concentration inequality of Hoeffding (1963) to sums of random variables that are not independent but partially dependent, in other words, independent within certain groups. Assuming the same dependency structure as in Janson (1994), we generalize the well-known concentration inequality of Alon and Spencer (1991). In Chapter 10, we derandomize the inequality of Alon and Spencer. The derandomization of our generalized Alon-Spencer inequality under partial dependencies remains an interesting open problem.
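
    The engine of any such derandomization is the method of conditional expectations. A standard, self-contained toy, not the thesis's general framework: derandomizing the random-cut argument E[cut] = m/2 for MAX-CUT by fixing one vertex at a time so that the conditional expectation of the objective never drops.

    ```python
    def derandomized_cut(n, edges):
        side = {}
        for v in range(n):
            # Given the choices so far, undecided edges contribute 1/2 to the
            # conditional expectation no matter where v goes, so it suffices to
            # compare, over both placements of v, the edges already decided.
            def decided_gain(s):
                return sum(1 for u, w in edges
                           if (u == v and w in side and side[w] != s)
                           or (w == v and u in side and side[u] != s))
            side[v] = max((0, 1), key=decided_gain)
        return side

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    side = derandomized_cut(4, edges)
    cut = sum(1 for u, w in edges if side[u] != side[w])
    print(side, "cut:", cut, ">= m/2:", cut >= len(edges) / 2)
    ```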

    Algorithms and Certificates for Boolean CSP Refutation: "Smoothed is no harder than Random"

    We present an algorithm for strongly refuting smoothed instances of all Boolean CSPs. The smoothed model is a hybrid between worst- and average-case input models, where the input is an arbitrary instance of the CSP with only the negation patterns of the literals re-randomized with some small probability. For an n-variable smoothed instance of a k-arity CSP, our algorithm runs in n^{O(ℓ)} time, and succeeds with high probability in bounding the optimum fraction of satisfiable constraints away from 1, provided that the number of constraints is at least Õ(n)·(n/ℓ)^{k/2 - 1}. This matches, up to polylogarithmic factors in n, the trade-off between running time and the number of constraints of the state-of-the-art algorithms for refuting fully random instances of CSPs [RRS17]. We also make a surprising new connection between our algorithm and even covers in hypergraphs, which we use to positively resolve Feige's 2008 conjecture, an extremal combinatorics conjecture on the existence of even covers in sufficiently dense hypergraphs that generalizes the well-known Moore bound for the girth of graphs. As a corollary, we show that polynomial-size refutation witnesses exist for arbitrary smoothed CSP instances with the number of constraints a polynomial factor below the "spectral threshold" of n^{k/2}, extending the celebrated result for random 3-SAT of Feige, Kim and Ofek [FKO06].
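
    For intuition about spectral refutation, here is an assumption-laden toy that is far simpler than the paper's algorithm: for a random 2-XOR instance, the top eigenvalue of the signed adjacency matrix certifies in polynomial time that the optimum fraction of satisfiable constraints is bounded away from 1. The instance size and density below are chosen so the certificate is nontrivial.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 200, 4000
    A = np.zeros((n, n))
    for _ in range(m):
        u, v = rng.choice(n, size=2, replace=False)
        w = rng.choice([-1.0, 1.0])        # re-randomized negation pattern
        A[u, v] += w
        A[v, u] += w
    # Any +-1 assignment sigma satisfies m/2 + sigma^T A sigma / 4 constraints,
    # and sigma^T A sigma <= n * lambda_max(A), so the bound below is an
    # efficiently checkable refutation witness.
    lam = np.linalg.eigvalsh(A).max()
    print("certified: satisfiable fraction <=", 0.5 + n * lam / (4 * m))
    ```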

    Approximating set multi-covers

    Johnson, LovĂĄsz, and Stein proved independently that any hypergraph satisfies τ ≀ (1 + ln Δ)τ*, where τ is the transversal number, τ* is its fractional version, and Δ denotes the maximum degree. We prove τ_f ≀ c τ* max{ln Δ, f} for the f-fold transversal number τ_f. Similarly to Johnson, LovĂĄsz, and Stein, we also show that this bound can be achieved non-probabilistically, using a greedy algorithm. As a combinatorial application, we prove an estimate on how fast τ_f/f converges to τ*. As a geometric application, we obtain an upper bound on the minimal density of an f-fold covering of the d-dimensional Euclidean space by translates of any convex body.
    Comment: THE TITLE CHANGED! This is the final version. 7 pages
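
    A minimal sketch of the greedy idea behind the τ_f bound: in each round, pick a new vertex hitting the largest number of edges whose demand of f distinct hits is still unmet. The toy hypergraph and the tie-breaking are assumptions (and every edge is assumed to have at least f vertices, so a cover exists); the paper's analysis is not reproduced here.

    ```python
    def greedy_f_transversal(edges, f):
        chosen = set()
        deficit = [f] * len(edges)        # edge i still needs deficit[i] distinct hits
        while any(deficit):
            gain = {}
            for i, e in enumerate(edges):
                if deficit[i]:
                    for v in e - chosen:
                        gain[v] = gain.get(v, 0) + 1
            v = max(gain, key=gain.get)   # vertex reducing the most deficits
            chosen.add(v)
            for i, e in enumerate(edges):
                if v in e and deficit[i]:
                    deficit[i] -= 1
        return chosen

    edges = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {0, 4}]
    print(sorted(greedy_f_transversal(edges, f=2)))
    ```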

    Greedy D-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost

    This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and online paging and caching problems
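
    For D = 2 the greedy specializes to the classic factor-2 heuristic for Vertex Cover: while some constraint (edge) is unsatisfied, satisfy it by raising all of the at most D variables it constrains, i.e. take both endpoints. A minimal sketch of that special case, not the general submodular-cost algorithm:

    ```python
    def greedy_2approx_vertex_cover(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # Any optimal cover contains u or v, so this step charges <= 2 to OPT.
                cover |= {u, v}
        return cover

    print(sorted(greedy_2approx_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])))
    ```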

    Approximability of Sparse Integer Programs

    The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min cx : Ax >= b, 0 <= x <= d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k >= 2 and eps > 0, if P != NP this ratio cannot be improved to k-1-eps, and under the unique games conjecture it cannot be improved to k-eps. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax <= b, 0 <= x <= d} where A has at most k nonzeroes per column, we give a (2k^2+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k = 2, and for both problems when every A_{ij} is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.
    Comment: Version submitted to Algorithmica special issue on ESA 2009. Previous conference version: http://dx.doi.org/10.1007/978-3-642-04128-0_
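
    In the special case b = 1, d = 1, a k-approximation for covering integer programs reduces to classic threshold rounding: each row has at most k nonzeroes summing to at least 1, so some variable in it has LP value at least 1/k, and taking every x_j >= 1/k is feasible with cost at most k times the LP optimum. The general case needs the knapsack-cover inequalities the abstract mentions; this sketch does not, and the odd-cycle instance is an assumed toy where the factor k = 2 is tight.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def threshold_round(A, c):
        k = int((A != 0).sum(axis=1).max())   # max nonzeroes per row
        res = linprog(c, A_ub=-A, b_ub=-np.ones(A.shape[0]),
                      bounds=[(0, 1)] * A.shape[1], method="highs")
        return (res.x >= 1.0 / k).astype(int), res.fun

    # Vertex cover of the 5-cycle: the LP optimum is 2.5 (all x_j = 1/2), and
    # rounding takes all 5 vertices, matching the factor k = 2 exactly.
    A = np.array([[1, 1, 0, 0, 0],
                  [0, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 1],
                  [1, 0, 0, 0, 1]])
    x, lp = threshold_round(A, c=np.ones(5))
    print("rounded cost:", int(x.sum()), "LP:", lp, "feasible:", bool((A @ x >= 1).all()))
    ```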

    Covering Problems via Structural Approaches

    The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized: logarithmic approximability has been established, and no sublogarithmic approximation exists unless P = NP. However, the gap between real-world instances and the theoretical worst case is often immense; many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible. In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide:
    - An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC.
    - A class of polynomial dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure.
    - A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity.
    - Applications of the above to various capacitated covering problems via linear programming strengthening and rounding.
    In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
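
    The logarithmic upper bound mentioned above is witnessed by the classic greedy algorithm: repeatedly take the set covering the most still-uncovered elements, which yields an H(s) <= 1 + ln(s) approximation, where s is the largest set size. A minimal sketch on an assumed toy instance (the sets are assumed to cover the universe):

    ```python
    def greedy_set_cover(universe, sets):
        uncovered, picked = set(universe), []
        while uncovered:
            # Take the index of the set covering the most uncovered elements.
            best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
            picked.append(best)
            uncovered -= sets[best]
        return picked

    sets = [{0, 1, 2, 3}, {3, 4, 5}, {0, 5}, {2, 4}]
    print(greedy_set_cover(range(6), sets))
    ```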