
    Algorithms to Approximate Column-Sparse Packing Problems

    Column-sparse packing problems arise in several contexts in both deterministic and stochastic discrete optimization. We present two unifying ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain improved approximation algorithms for some well-known families of such problems. As three main examples, we attain the integrality gap, up to lower-order terms, for known LP relaxations for k-column sparse packing integer programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set packing (Bansal et al., Algorithmica, 2012), and go "half the remaining distance" to optimal for a major integrality-gap conjecture of Füredi, Kahn and Seymour on hypergraph matching (Combinatorica, 1993). Comment: Extended abstract appeared in SODA 2018; full version in ACM Transactions on Algorithms.
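
    For intuition, here is a minimal sketch of the classical sample-and-alter template for k-column-sparse packing that the paper's (non-uniform) attenuation idea refines. It is not the authors' algorithm; the LP solution `x_star`, the scaling factor `alpha`, and the toy instance are assumptions made up for illustration.

```python
"""Hedged sketch: scale an LP solution down by ~1/(alpha*k), sample items
independently, then drop sampled items that would overflow a constraint."""
import random

def round_k_column_sparse(A, b, x_star, k, alpha=2.0, seed=0):
    """A: list of constraint rows, each a dict {item: size}; b: capacities.
    x_star: fractional LP solution; k: column sparsity (illustrative)."""
    rng = random.Random(seed)
    n = len(x_star)
    # sampling step: attenuate the LP value before flipping the coin
    sampled = [j for j in range(n) if rng.random() < x_star[j] / (alpha * k)]
    load = [0.0] * len(A)
    chosen = []
    for j in sampled:                      # greedy alteration step
        fits = all(load[i] + row.get(j, 0.0) <= b[i]
                   for i, row in enumerate(A))
        if fits:
            chosen.append(j)
            for i, row in enumerate(A):
                load[i] += row.get(j, 0.0)
    return chosen

# toy instance: 2 constraints, 4 items, each item appears in at most k=2 rows
A = [{0: 1.0, 1: 1.0}, {1: 1.0, 2: 1.0, 3: 1.0}]
b = [1.0, 2.0]
x_star = [0.5, 0.5, 0.75, 0.75]            # a feasible fractional point
print(round_k_column_sparse(A, b, x_star, k=2))
```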

    Ramsey games with giants

    The classical result in the theory of random graphs, proved by Erdős and Rényi in 1960, concerns the threshold for the appearance of the giant component in the random graph process. We consider a variant of this problem, with a Ramsey flavor. Now, each random edge that arrives in the sequence of rounds must be colored with one of R colors. The goal can be either to create a giant component in every color class, or alternatively, to avoid it in every color. One can analyze the offline or online setting for this problem. In this paper, we consider all these variants and provide nontrivial upper and lower bounds; in certain cases (like online avoidance) the obtained bounds are asymptotically tight. Comment: 29 pages; minor revision.
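
    A minimal simulation sketch of the online process described above: random edges arrive one by one, each must irrevocably receive one of R colors, and a union-find structure per color class tracks component sizes. The greedy avoidance rule used here (put the edge where its endpoints lie in the smallest components) is only an illustrative strategy, not one analyzed in the paper; all parameters are assumptions.

```python
import random

class DSU:
    """Union-find with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return self.size[ra]
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return self.size[ra]

def online_avoidance(n=10_000, R=2, rounds=None, seed=0):
    rng = random.Random(seed)
    rounds = rounds or n                 # ~n random edges, near the critical window
    colors = [DSU(n) for _ in range(R)]
    largest = [1] * R
    for _ in range(rounds):
        u, v = rng.randrange(n), rng.randrange(n)
        # greedy rule: color where u and v sit in the smallest components
        c = min(range(R), key=lambda i: colors[i].size[colors[i].find(u)]
                                        + colors[i].size[colors[i].find(v)])
        largest[c] = max(largest[c], colors[c].union(u, v))
    return largest                       # largest component per color class

print(online_avoidance())
```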

    Strong Robustness of Randomized Rumor Spreading Protocols

    Randomized rumor spreading is a classical protocol to disseminate information across a network. At SODA 2008, a quasirandom version of this protocol was proposed and competitive bounds for its run-time were proven. This prompts the question: to what extent does the quasirandom protocol inherit the second principal advantage of randomized rumor spreading, namely robustness against transmission failures? In this paper, we present a result precise up to $(1 \pm o(1))$ factors. We limit ourselves to the network in which every two vertices are connected by a direct link. Run-times accurate to their leading constants are unknown for all other non-trivial networks. We show that if each transmission reaches its destination with probability $p \in (0,1]$, then after $(1+\varepsilon)\left(\frac{1}{\log_2(1+p)}\log_2 n+\frac{1}{p}\ln n\right)$ rounds the quasirandom protocol has informed all $n$ nodes in the network with probability at least $1-n^{-p\varepsilon/40}$. Note that this is faster than the intuitively natural $1/p$ factor increase over the run-time of approximately $\log_2 n + \ln n$ for the non-corrupted case. We also provide a corresponding lower bound for the classical model. This demonstrates that the quasirandom model is at least as robust as the fully random model despite the greatly reduced degree of independent randomness. Comment: Accepted for publication in "Discrete Applied Mathematics". A short version appeared in the proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC 2009). Minor typos fixed in the second version. Proofs of Lemma 11 and Theorem 12 fixed in the third version. Proof of Lemma 8 fixed in the fourth version.
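
    A minimal simulation sketch of quasirandom push rumor spreading on the complete graph with lossy transmissions: each node owns a fixed cyclic list of the other nodes and a uniformly random starting position, in every round each informed node contacts the next entry on its list, and the push succeeds independently with probability p. The concrete parameters and the list order are illustrative assumptions, not taken from the paper's proofs.

```python
import math
import random

def quasirandom_push(n=1000, p=0.5, seed=0):
    rng = random.Random(seed)
    # node i's cyclic list: all other nodes in a fixed (here: sorted) order
    lists = [[j for j in range(n) if j != i] for i in range(n)]
    pos = [rng.randrange(n - 1) for _ in range(n)]   # random starting offsets
    informed = [False] * n
    informed[0] = True
    rounds = 0
    while not all(informed):
        rounds += 1
        newly = []
        for i in range(n):
            if informed[i]:
                target = lists[i][pos[i]]
                pos[i] = (pos[i] + 1) % (n - 1)      # advance on the cyclic list
                if rng.random() < p:                  # transmission succeeds w.p. p
                    newly.append(target)
        for t in newly:
            informed[t] = True
    return rounds

n, p = 1000, 0.5
bound = math.log2(n) / math.log2(1 + p) + math.log(n) / p
print(quasirandom_push(n, p), "rounds; leading-order bound ~", round(bound, 1))
```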

    Concentration of Submodular Functions Under Negative Dependence

    We study the question of whether submodular functions of random variables satisfying various notions of negative dependence satisfy Chernoff-like concentration inequalities. We prove such a concentration inequality for the lower tail when the random variables satisfy negative association or negative regression, resolving an open problem raised by Qiu and Singla (APPROX 2022). Previous work showed such concentration results for random variables that come from specific dependent-rounding algorithms (Chekuri, Vondrák and Zenklusen, FOCS 2010; Harvey and Olver, SODA 2014). We discuss some applications of our results to combinatorial optimization and beyond. Comment: 12 pages.
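
    For orientation only, a hedged sketch of the generic shape such a Chernoff-like lower-tail bound takes for a monotone submodular function with marginal values in [0,1]; the exact constant and hypotheses proved in the paper may differ.

```latex
% Hedged illustration: generic shape of a lower-tail bound for a monotone
% submodular f with marginal values in [0,1]; c > 0 is an absolute constant
% (c = 1/2 recovers the classical Chernoff lower tail when f is linear and
% the X_i are independent). Not the paper's exact statement.
\[
  \Pr\bigl[f(X_1,\dots,X_n) \le (1-\delta)\,\mu\bigr] \;\le\; e^{-c\,\mu\,\delta^{2}},
  \qquad \mu := \mathbb{E}\bigl[f(X_1,\dots,X_n)\bigr], \quad \delta \in (0,1).
\]
```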

    Derandomizing Concentration Inequalities with Dependencies and Their Combinatorial Applications

    Both in combinatorics and in the design and analysis of randomized algorithms for combinatorial optimization problems, we often use the famous bounded differences inequality of C. McDiarmid (1989), which is based on the martingale inequality of K. Azuma (1967), to show a positive probability of success. For sums of independent random variables, the inequalities of Chernoff (1952) and Hoeffding (1964) can be used and can be efficiently derandomized, i.e. we can construct the required event in deterministic polynomial time (Srivastav and Stangier 1996). With such an algorithm one can construct the sought combinatorial structure, or turn a probabilistic existence result or a randomized algorithm into an efficient deterministic algorithm. The derandomization of C. McDiarmid's bounded differences inequality was an open problem. The main result in Chapter 3 is an efficient derandomization of the bounded differences inequality, where the time required to compute the conditional expectation of the objective function enters the complexity. Chapters 4 through 7 demonstrate the generality and power of the derandomization framework developed in Chapter 3. In Chapter 5, we derandomize Maker's random strategy in the Maker-Breaker subgraph game given by Bednarska and Luczak (2000), which is fundamental for the field and was analyzed with the concentration inequality of Janson, Luczak and Rucinski. Since we use the bounded differences inequality instead, it is necessary to give a new proof of the existence of subgraphs in G(n,M) random graphs (Chapter 4). In Chapter 6, we derandomize the two-stage randomized algorithm for the set multicover problem by El Ouali, Munstermann and Srivastav (2014). In Chapter 7, we show that the algorithm of Bansal, Caprara and Sviridenko (2009) for the multidimensional bin packing problem can be elegantly derandomized with our framework based on the bounded differences inequality, whereas the authors use a potential-function approach that leads to a rather complex analysis. In Chapter 8, we analyze the constrained hypergraph coloring problem given in Ahuja and Srivastav (2002), which generalizes both the property B problem for non-monochromatic 2-coloring of hypergraphs and the multidimensional bin packing problem, using the bounded differences inequality instead of the Lovász local lemma; we also derandomize the algorithm using our framework. In Chapter 9, we turn to Janson's (1994) generalization of the well-known concentration inequality of Hoeffding (1964) to sums of random variables that are not independent but partially dependent, in other words, independent within certain groups. Assuming the same dependency structure as Janson (1994), we generalize the well-known concentration inequality of Alon and Spencer (1991). In Chapter 10, we derandomize the inequality of Alon and Spencer. The derandomization of our generalized Alon-Spencer inequality under partial dependencies remains an interesting open problem.
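
    The derandomizations above rest on the method of conditional expectations: variables are fixed one at a time so that the conditional expectation of the objective (or a pessimistic estimator of the failure probability) never gets worse. The toy sketch below illustrates only this generic method on a random 2-coloring whose expected cut size is half the edges; it is not the thesis's framework, and the instance is made up.

```python
def derandomized_cut(n, edges):
    """Fix vertex colors 0/1 one by one, each time choosing the color that
    maximizes the conditional expectation of the final cut size."""
    color = {}

    def conditional_expectation():
        exp = 0.0
        for u, v in edges:
            if u in color and v in color:
                exp += 1.0 if color[u] != color[v] else 0.0
            else:
                exp += 0.5          # an edge with an unfixed endpoint is cut w.p. 1/2
        return exp

    for v in range(n):
        best_c, best_val = 0, -1.0
        for c in (0, 1):            # try both colors, keep the better estimator
            color[v] = c
            val = conditional_expectation()
            if val > best_val:
                best_c, best_val = c, val
        color[v] = best_c
    cut = sum(1 for u, v in edges if color[u] != color[v])
    return color, cut

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
coloring, cut = derandomized_cut(4, edges)
print(coloring, cut, ">=", len(edges) / 2)   # guaranteed at least half the edges
```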

    Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time

    The described multicoloring problem has direct applications in the context of wireless ad hoc and sensor networks. In order to coordinate the access to the shared wireless medium, the nodes of such a network need to employ some medium access control (MAC) protocol. Typical MAC protocols control the access to the shared channel by time (TDMA), frequency (FDMA), or code division multiple access (CDMA) schemes. Many channel access schemes assign a fixed set of time slots, frequencies, or (orthogonal) codes to the nodes of a network such that nodes that interfere with each other receive disjoint sets of time slots, frequencies, or code sets. Finding a valid assignment of time slots, frequencies, or codes hence directly corresponds to computing a multicoloring of a graph $G$. The scarcity of bandwidth, energy, and computing resources in ad hoc and sensor networks, as well as the often highly dynamic nature of these networks, requires that the multicoloring can be computed based on as little and as local information as possible.
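
    To make the correspondence concrete, here is a minimal sequential greedy sketch of the multicoloring task: every node must receive a set of TDMA slots disjoint from the slot sets of its interfering neighbors in $G$. The paper's contribution is computing such an assignment locally in constant time; the global greedy pass below only illustrates what a valid multicoloring looks like, and the frame length and per-node demands are illustrative assumptions.

```python
def greedy_multicoloring(adj, demand, frame_len):
    """adj: dict node -> set of interfering neighbors.
    demand: dict node -> number of slots requested.
    Returns node -> set of slot indices in [0, frame_len), pairwise
    disjoint between neighbors."""
    slots = {}
    for v in adj:                                   # fixed (arbitrary) order
        taken = set()
        for u in adj[v]:                            # slots already used nearby
            taken |= slots.get(u, set())
        free = [s for s in range(frame_len) if s not in taken]
        if len(free) < demand[v]:
            raise ValueError("frame too short for node %r" % v)
        slots[v] = set(free[:demand[v]])            # take the smallest free slots
    return slots

# toy interference graph: a path 0 - 1 - 2, each node requests 2 slots
adj = {0: {1}, 1: {0, 2}, 2: {1}}
demand = {0: 2, 1: 2, 2: 2}
print(greedy_multicoloring(adj, demand, frame_len=4))
```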