    An Improved Distributed Algorithm for Maximal Independent Set

    The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents an extremely simple randomized algorithm providing a near-optimal local complexity for this problem, which incidentally, when combined with some recent techniques, also leads to a near-optimal global complexity. Classical algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability, all nodes terminate after $O(\log n)$ rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node $v$ terminates after $O(\log \mathsf{deg}(v)+\log 1/\epsilon)$ rounds, with probability at least $1-\epsilon$. The guarantee holds even if the randomness outside the $2$-hops neighborhood of $v$ is determined adversarially. This degree-dependency is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12, arXiv: 1202.1983v3], we get a randomized MIS algorithm with a high probability global complexity of $O(\log \Delta) + 2^{O(\sqrt{\log \log n})}$, where $\Delta$ denotes the maximum degree. This improves over the $O(\log^2 \Delta) + 2^{O(\sqrt{\log \log n})}$ result of Barenboim et al., and gets close to the $\Omega(\min\{\log \Delta, \sqrt{\log n}\})$ lower bound of Kuhn et al. Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lov\'asz Local Lemma.
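
    The abstract cites the classical Luby-style scheme as its baseline; the sketch below is a centralized Python simulation of that classical algorithm (mark with probability $1/(2\,\mathsf{deg})$, resolve marked edges by degree, remove winners and their neighbours), not of the paper's improved local algorithm. The adjacency-dict representation and the function name are illustrative assumptions.

```python
import random

def luby_mis(adj):
    """Centralized simulation of a Luby-style randomized MIS computation.

    adj: dict mapping each vertex to the set of its neighbours (undirected).
    Returns a maximal independent set as a Python set.
    NOTE: this is the classical baseline cited in the abstract, not the
    paper's improved local algorithm.
    """
    # Work on a copy so the caller's graph is untouched.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    mis = set()
    while adj:
        # Each remaining vertex marks itself with probability 1 / (2 * deg);
        # isolated vertices can safely join the MIS immediately.
        marked = set()
        for v, nbrs in adj.items():
            if not nbrs or random.random() < 1.0 / (2 * len(nbrs)):
                marked.add(v)
        # Resolve conflicts: on an edge with both endpoints marked, keep the
        # endpoint of higher degree (ties broken by vertex id).
        winners = set(marked)
        for v in marked:
            for u in adj[v] & marked:
                if (len(adj[u]), u) > (len(adj[v]), v):
                    winners.discard(v)
        # Winners join the MIS; remove them and their neighbours from the graph.
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        for v in removed:
            adj.pop(v, None)
        for v in adj:
            adj[v] -= removed
    return mis
```

    For example, on the path graph `{0: {1}, 1: {0, 2}, 2: {1}}` the routine returns either `{1}` or `{0, 2}`, both maximal independent sets.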

    Round Compression for Parallel Matching Algorithms

    For over a decade now we have been witnessing the success of {\em massive parallel computation} (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is though: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the {\em maximum matching} problem---one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in $O(\log n)$ rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has $n^{1+\Omega(1)}$ memory, this problem can also be solved $2$-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem though to get stuck in a fundamental way at roughly $O(\log n)$ rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that perplexing possibility. That is, we break the above $O(\log n)$ round complexity bound even in the case of {\em slightly sublinear} memory per machine. In fact, our improvement here is {\em almost exponential}: we are able to deliver a $(2+\epsilon)$-approximation to maximum matching, for any fixed constant $\epsilon>0$, in $O((\log \log n)^2)$ rounds.
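
    The $2$-approximation baseline mentioned above rests on the standard fact that any maximal matching is at most a factor of $2$ smaller than a maximum matching. The sketch below is that classic sequential greedy baseline, not the paper's MPC round-compression algorithm; the edge-list input format is an assumption.

```python
def greedy_matching(edges):
    """Classic sequential greedy maximal matching.

    Any maximal matching is a 2-approximation to maximum matching: every edge
    of an optimal matching has at least one endpoint covered by the greedy one.
    This is only the baseline discussed in the abstract, not the paper's
    round-compressed MPC algorithm.

    edges: iterable of (u, v) pairs of an undirected graph.
    Returns the set of chosen edges.
    """
    matched_vertices = set()
    matching = set()
    for u, v in edges:
        # Take the edge only if neither endpoint is already matched.
        if u not in matched_vertices and v not in matched_vertices:
            matching.add((u, v))
            matched_vertices.update((u, v))
    return matching
```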

    On combinatorial optimisation in analysis of protein-protein interaction and protein folding networks

    Protein-protein interaction networks and protein folding networks represent prominent research topics at the intersection of bioinformatics and network science. In this paper, we present a study of these networks from a combinatorial optimisation point of view. Using a combination of classical heuristics and stochastic optimisation techniques, we were able to identify several interesting combinatorial properties of the biological networks of the COSIN project. We obtained optimal or near-optimal solutions to the maximum clique and chromatic number problems for these networks. We also explore patterns of both non-overlapping and overlapping cliques in these networks, and we found optimal or near-optimal solutions to the partitioning of these networks into non-overlapping cliques and to the maximum independent set problem. Maximal cliques are explored by enumerative techniques. Domination in these networks is briefly studied, too. Applications and extensions of our findings are discussed.
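
    As a rough illustration of the quantities the study reports (maximal clique enumeration, clique number, chromatic number, independent sets), the sketch below uses off-the-shelf networkx routines as stand-ins; the authors' actual heuristics and the COSIN datasets are not reproduced here, and the function name is hypothetical.

```python
import networkx as nx

def network_summary(G):
    """Heuristic probes of the combinatorial parameters studied in the paper.

    Assumes a non-empty networkx Graph; these are generic stand-ins, not the
    authors' combination of classical heuristics and stochastic optimisation.
    """
    # Enumerate maximal cliques; the largest of them has size omega(G), the
    # clique number (enumeration can be slow on dense graphs).
    cliques = list(nx.find_cliques(G))
    clique_number = max(len(c) for c in cliques)
    # Greedy colouring gives an upper bound on the chromatic number chi(G).
    colouring = nx.greedy_color(G, strategy="largest_first")
    chromatic_upper_bound = 1 + max(colouring.values())
    # A (randomised) maximal independent set gives a lower bound on alpha(G).
    independence_lower_bound = len(nx.maximal_independent_set(G))
    return {
        "num_maximal_cliques": len(cliques),
        "clique_number": clique_number,
        "chromatic_number_upper_bound": chromatic_upper_bound,
        "independence_number_lower_bound": independence_lower_bound,
    }
```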

    Partitions of graphs into small and large sets

    Let $G$ be a graph on $n$ vertices. We call a subset $A$ of the vertex set $V(G)$ \emph{$k$-small} if, for every vertex $v \in A$, $\deg(v) \le n - |A| + k$. A subset $B \subseteq V(G)$ is called \emph{$k$-large} if, for every vertex $u \in B$, $\deg(u) \ge |B| - k - 1$. Moreover, we denote by $\varphi_k(G)$ the minimum integer $t$ such that there is a partition of $V(G)$ into $t$ $k$-small sets, and by $\Omega_k(G)$ the minimum integer $t$ such that there is a partition of $V(G)$ into $t$ $k$-large sets. In this paper, we will show tight connections between $k$-small sets, respectively $k$-large sets, and the $k$-independence number, the clique number and the chromatic number of a graph. We shall develop greedy algorithms to compute in linear time both $\varphi_k(G)$ and $\Omega_k(G)$ and prove various sharp inequalities concerning these parameters, which we will use to obtain refinements of the Caro-Wei Theorem, the Tur\'an Theorem and the Hansen-Zheng Theorem among other things.
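
    The definitions of $k$-small and $k$-large sets translate directly into checkable predicates. The paper's linear-time greedy algorithms for $\varphi_k(G)$ and $\Omega_k(G)$ are not described in the abstract, so only the defining conditions are sketched below; the adjacency-dict representation is an assumption.

```python
def is_k_small(adj, A, k):
    """Check the abstract's definition of a k-small set.

    A is k-small if every v in A satisfies deg(v) <= n - |A| + k,
    where n is the number of vertices of G.
    adj: dict mapping each vertex of G to the set of its neighbours.
    """
    n = len(adj)
    A = set(A)
    return all(len(adj[v]) <= n - len(A) + k for v in A)

def is_k_large(adj, B, k):
    """Check the abstract's definition of a k-large set.

    B is k-large if every u in B satisfies deg(u) >= |B| - k - 1.
    """
    B = set(B)
    return all(len(adj[u]) >= len(B) - k - 1 for u in B)
```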

    Construction of near-optimal vertex clique covering for real-world networks

    We propose a method based on combining a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), where the aim is to partition the vertices of a graph into the smallest number of classes that induce cliques. Searching for the solution to CCP is highly motivated by analysis of social and other real-world networks, applications in graph mining, as well as by the fact that CCP is one of the classical NP-hard problems. Combining the constructive and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that, in the domain of real-world networks, many of the obtained solutions are optimal, while the rest of them are near-optimal. In addition, the method has polynomial time complexity and shows much promise for its practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, as well as coappearance networks of literary works' characters from the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős-Rényi model and Leighton's model.
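
    One simple constructive baseline for CCP, sketched below, uses the standard correspondence between clique covers of $G$ and proper colourings of the complement graph: each colour class of the complement induces a clique in $G$. This is only a stand-in illustration, not the constructive/bounding heuristic the paper combines, and it is practical only for graphs whose complement fits in memory.

```python
import networkx as nx

def greedy_clique_cover(G):
    """Baseline heuristic for the vertex clique covering problem (CCP).

    A partition of V(G) into cliques corresponds exactly to a proper colouring
    of the complement graph, so greedily colouring the complement yields a
    valid (not necessarily minimum) clique cover.  Materialising the complement
    is only feasible for modestly sized graphs.
    """
    colouring = nx.greedy_color(nx.complement(G), strategy="largest_first")
    # Group vertices by colour; each group induces a clique in the original G.
    cover = {}
    for v, colour in colouring.items():
        cover.setdefault(colour, set()).add(v)
    return list(cover.values())
```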