
    Derandomizing Concentration Inequalities with dependencies and their combinatorial applications

    Both in combinatorics and in the design and analysis of randomized algorithms for combinatorial optimization problems, we often use the famous bounded differences inequality of C. McDiarmid (1989), which is based on the martingale inequality of K. Azuma (1967), to show a positive probability of success. In the case of sums of independent random variables, the inequalities of Chernoff (1952) and Hoeffding (1964) can be used and can be efficiently derandomized, i.e. we can construct the required event in deterministic, polynomial time (Srivastav and Stangier 1996). With such an algorithm one can construct the sought combinatorial structure, or turn the probabilistic existence result or randomized algorithm into an efficient deterministic algorithm. The derandomization of McDiarmid's bounded differences inequality was an open problem. The main result in Chapter 3 is an efficient derandomization of the bounded differences inequality, where the time required to compute the conditional expectation of the objective function enters the complexity. Chapters 4 through 7 demonstrate the generality and power of the derandomization framework developed in Chapter 3. In Chapter 5, we derandomize the random strategy of Maker in the Maker-Breaker subgraph game given by Bednarska and Luczak (2000), a strategy that is fundamental for the field and was analyzed with the concentration inequality of Janson, Luczak and Rucinski. Since we use the bounded differences inequality instead, it is necessary to give a new proof of the existence of subgraphs in G(n,M)-random graphs (Chapter 4). In Chapter 6, we derandomize the two-stage randomized algorithm for the set multicover problem by El Ouali, Munstermann and Srivastav (2014). In Chapter 7, we show that the algorithm of Bansal, Caprara and Sviridenko (2009) for the multidimensional bin packing problem can be elegantly derandomized with our derandomization framework for the bounded differences inequality, whereas the authors use a potential-function-based approach leading to a rather complex analysis. In Chapter 8, we analyze the constrained hypergraph coloring problem given in Ahuja and Srivastav (2002), which generalizes both the property B problem of non-monochromatic 2-coloring of hypergraphs and the multidimensional bin packing problem, using the bounded differences inequality instead of the Lovász local lemma. We also derandomize the algorithm using our framework. In Chapter 9, we turn to the generalization by Janson (1994) of the well-known concentration inequality of Hoeffding (1964) to sums of random variables that are not independent but partially dependent, or in other words, independent within certain groups. Assuming the same dependency structure as in Janson (1994), we generalize the well-known concentration inequality of Alon and Spencer (1991). In Chapter 10, we derandomize the inequality of Alon and Spencer. The derandomization of our generalized Alon-Spencer inequality under partial dependencies remains an interesting open problem.
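    As a point of reference, here is a minimal sketch of the method of conditional expectations in the property B setting mentioned above (non-monochromatic 2-coloring of a hypergraph): vertices are colored one at a time so that the expected number of monochromatic edges under a uniformly random completion never increases. The Python code, its data layout and the example are illustrative assumptions, not material from the thesis.

    # Hedged sketch: derandomized 2-colouring via conditional expectations.
    def cond_exp(edges, colour, uncoloured):
        """Expected number of monochromatic edges given the partial colouring."""
        total = 0.0
        for e in edges:
            fixed = {colour[v] for v in e if v not in uncoloured}
            free = sum(1 for v in e if v in uncoloured)
            if len(fixed) >= 2:          # already bichromatic, can never be monochromatic
                continue
            if len(fixed) == 1:          # all fixed vertices share one colour
                total += 0.5 ** free
            else:                        # edge entirely uncoloured
                total += 2 * 0.5 ** len(e)
        return total

    def derandomised_colouring(n, edges):
        """Fix vertices one by one, never letting the conditional expectation grow."""
        colour, uncoloured = {}, set(range(n))
        for v in range(n):
            best = None
            for c in (0, 1):
                colour[v] = c
                uncoloured.discard(v)
                val = cond_exp(edges, colour, uncoloured)
                if best is None or val < best[0]:
                    best = (val, c)
                uncoloured.add(v)
            colour[v] = best[1]
            uncoloured.discard(v)
        return colour

    # The Fano plane has expected 7 * 2^(1-3) = 1.75 monochromatic edges under a
    # random colouring, so the deterministic colouring leaves at most one.
    fano = [[0, 1, 2], [0, 3, 4], [0, 5, 6], [1, 3, 5], [1, 4, 6], [2, 3, 6], [2, 4, 5]]
    print(derandomised_colouring(7, fano))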

    Distributed local approximation algorithms for maximum matching in graphs and hypergraphs

    We describe approximation algorithms in Linial's classic LOCAL model of distributed computing to find maximum-weight matchings in a hypergraph of rank $r$. Our main result is a deterministic algorithm to generate a matching which is an $O(r)$-approximation to the maximum weight matching, running in $\tilde O(r \log \Delta + \log^2 \Delta + \log^* n)$ rounds. (Here, the $\tilde O()$ notation hides $\text{polyloglog}\,\Delta$ and $\text{polylog}\,r$ factors.) This is based on a number of new derandomization techniques extending methods of Ghaffari, Harris & Kuhn (2017). As a main application, we obtain nearly-optimal algorithms for the long-studied problem of maximum-weight graph matching. Specifically, we get a $(1+\epsilon)$-approximation algorithm using $\tilde O(\log \Delta / \epsilon^3 + \text{polylog}(1/\epsilon, \log \log n))$ randomized time and $\tilde O(\log^2 \Delta / \epsilon^4 + \log^* n / \epsilon)$ deterministic time. The second application is a faster algorithm for hypergraph maximal matching, a versatile subroutine introduced in Ghaffari et al. (2017) for a variety of local graph algorithms. This gives an algorithm for $(2\Delta - 1)$-edge-list coloring in $\tilde O(\log^2 \Delta \log n)$ rounds deterministically or $\tilde O((\log \log n)^3)$ rounds randomly. Another consequence (with additional optimizations) is an algorithm which generates an edge-orientation with out-degree at most $\lceil (1+\epsilon) \lambda \rceil$ for a graph of arboricity $\lambda$; for fixed $\epsilon$ this runs in $\tilde O(\log^6 n)$ rounds deterministically or $\tilde O(\log^3 n)$ rounds randomly.
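    For orientation, a minimal sequential sketch of the classic greedy baseline is given below: scanning hyperedges by decreasing weight and keeping every edge that is still disjoint from the matching yields an $r$-approximation in a rank-$r$ hypergraph, since each chosen edge can block at most $r$ edges of an optimal matching, none heavier than itself. The data layout and names are illustrative assumptions; this is not the LOCAL-model algorithm of the paper, which achieves a comparable $O(r)$ guarantee in a small number of distributed rounds.

    # Hedged sketch: sequential greedy r-approximation for weighted hypergraph matching.
    def greedy_hypergraph_matching(edges):
        """edges: list of (weight, frozenset of vertices). Returns a greedy matching."""
        matching, used = [], set()
        for w, e in sorted(edges, key=lambda x: -x[0]):
            if used.isdisjoint(e):       # e touches no previously chosen edge
                matching.append((w, e))
                used |= e
        return matching

    # Example: a small rank-3 instance; greedy keeps the heaviest edge first.
    edges = [(5.0, frozenset({0, 1, 2})), (3.0, frozenset({1, 3, 4})),
             (2.5, frozenset({2, 5, 6})), (1.0, frozenset({3, 5, 7}))]
    print(greedy_hypergraph_matching(edges))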

    On Derandomized Approximation Algorithms

    With the design of powerful randomized algorithms, the transformation of a randomized algorithm or probabilistic existence result for combinatorial problems into an efficient deterministic algorithm (called derandomization) became an important issue in algorithmic discrete mathematics. In recent years several interesting examples of derandomization have been published, such as discrepancy in hypergraph colouring, packing integer programs, and an algorithmic version of the Lovász Local Lemma. In this paper the derandomization method of conditional probabilities of Raghavan/Spencer is extended using discrete martingales. As a main result, pessimistic estimators are constructed for combinatorial approximation problems involving non-linear objective functions with bounded martingale differences. The theory gives polynomial-time algorithms for the linear and quadratic lattice approximation problem and a quadratic variant of the matrix balancing problem, extending results of Spencer, Beck/Fiala and Raghavan. Finally, a probabilistic existence result of Erdös on the average graph bisection is transformed into a deterministic algorithm.
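    To make the estimator machinery concrete, here is a minimal sketch of a Chernoff-Hoeffding pessimistic estimator for set discrepancy, one of the settings listed above: elements are coloured +1/-1 one at a time so that a normalized sum of exponential moments never increases, which forces every set's discrepancy below sqrt(2 n ln(2m)). The code is a hedged illustration under these standard assumptions, not the paper's construction.

    # Hedged sketch: derandomized +1/-1 colouring with a pessimistic estimator.
    import math

    def derandomised_discrepancy(n, sets):
        m = len(sets)
        t = math.sqrt(2 * n * math.log(2 * m))     # target discrepancy bound
        lam = t / n                                 # optimises the Hoeffding bound

        def estimator(x):
            """Pessimistic estimator of a partial colouring x (0 means unfixed)."""
            u = 0.0
            for s in sets:
                fixed = sum(x[j] for j in s if x[j] != 0)
                free = sum(1 for j in s if x[j] == 0)
                u += (math.exp(lam * fixed) + math.exp(-lam * fixed)) * math.cosh(lam) ** free
            return u / math.exp(lam * t)

        x = [0] * n
        for j in range(n):
            x[j] = 1
            plus = estimator(x)
            x[j] = -1
            minus = estimator(x)
            x[j] = 1 if plus <= minus else -1       # keep whichever branch is smaller
        return x, t

    # Example usage on a small set system over 8 elements.
    sets = [[0, 1, 2, 3], [1, 3, 5, 7], [0, 2, 4, 6], [4, 5, 6, 7]]
    x, t = derandomised_discrepancy(8, sets)
    print(x, [abs(sum(x[j] for j in s)) for s in sets], round(t, 2))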

    Deterministic parallel algorithms for bilinear objective functions

    Many randomized algorithms can be derandomized efficiently using either the method of conditional expectations or probability spaces with low independence. A series of papers, beginning with work by Luby (1988), showed that in many cases these techniques can be combined to give deterministic parallel (NC) algorithms for a variety of combinatorial optimization problems, with low time- and processor-complexity. We extend and generalize a technique of Luby for efficiently handling bilinear objective functions. One noteworthy application is an NC algorithm for maximal independent set. On a graph $G$ with $m$ edges and $n$ vertices, this takes $\tilde O(\log^2 n)$ time and $(m + n) n^{o(1)}$ processors, nearly matching the best randomized parallel algorithms. Other applications include reduced processor counts for algorithms of Berger (1997) for maximum acyclic subgraph and Gale-Berlekamp switching games. This bilinear factorization also gives better algorithms for problems involving discrepancy. An important application of this is to automata-fooling probability spaces, which are the basis of a notable derandomization technique of Sivakumar (2002). Our method leads to a large reduction in processor complexity for a number of derandomization algorithms based on automata-fooling, including set discrepancy and the Johnson-Lindenstrauss Lemma.
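    For context, the maximal-independent-set application derandomizes algorithms in the spirit of Luby's classic randomized procedure; the sketch below is a plain sequential simulation of that randomized procedure (random priorities, local minima join the set), included only for orientation. The graph representation and names are illustrative assumptions; the NC derandomization in the paper is substantially more involved.

    # Hedged sketch: sequential simulation of a Luby-style randomized MIS.
    import random

    def luby_mis(adj):
        """adj: dict vertex -> set of neighbours. Returns a maximal independent set."""
        alive, mis = set(adj), set()
        while alive:
            r = {v: random.random() for v in alive}
            # A vertex joins the MIS if its priority beats all alive neighbours.
            winners = {v for v in alive if all(r[v] < r[u] for u in adj[v] if u in alive)}
            mis |= winners
            # Remove the winners and their neighbours before the next round.
            removed = set(winners)
            for v in winners:
                removed |= adj[v] & alive
            alive -= removed
        return mis

    # Example usage on a 6-cycle.
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(sorted(luby_mis(adj)))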

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ..