
    Connectivity of Random Annulus Graphs and the Geometric Block Model

    We provide new connectivity results for {\em vertex-random graphs} or {\em random annulus graphs}, which are significant generalizations of random geometric graphs. Random geometric graphs (RGG) are one of the most basic models of random graphs for spatial networks, proposed by Gilbert in 1961, shortly after the introduction of the Erd\H{o}s-R\'{en}yi random graphs. They resemble social networks in many ways (e.g. by spontaneously creating clusters of nodes with high modularity). The connectivity properties of RGG have been studied since their introduction, and analyzing them has been significantly harder than for their Erd\H{o}s-R\'{en}yi counterparts due to correlated edge formation. Our next contribution is in using the connectivity of random annulus graphs to provide necessary and sufficient conditions for efficient recovery of communities for {\em the geometric block model} (GBM). The GBM is a probabilistic model for community detection defined over an RGG in a similar spirit to the popular {\em stochastic block model}, which is defined over an Erd\H{o}s-R\'{en}yi random graph. The geometric block model inherits the transitivity properties of RGGs and thus models communities better than a stochastic block model. However, analyzing it requires fresh perspectives, as all prior tools fail due to correlation in edge formation. We provide a simple and efficient algorithm that can recover communities in the GBM exactly with high probability in the regime of connectivity.
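    The model itself is straightforward to sample, which makes the connectivity regime easy to probe numerically. Below is a minimal sketch of a two-community geometric block model on the unit circle; the function name, the parameters r_in and r_out, and the uniform placement on a 1-D torus are illustrative assumptions rather than details taken from the paper.

```python
import random

def sample_gbm(n, r_in, r_out, seed=None):
    """Sample a two-community geometric block model on the unit circle.

    Each vertex gets a uniform position on [0, 1) (a 1-D torus) and a random
    community label; same-community pairs are joined when their circular
    distance is at most r_in, cross-community pairs when it is at most r_out.
    Parameter names and the sampling set-up are illustrative assumptions.
    """
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n)]
    label = [rng.randrange(2) for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            d = abs(pos[u] - pos[v])
            d = min(d, 1.0 - d)                    # circular (torus) distance
            radius = r_in if label[u] == label[v] else r_out
            if d <= radius:
                edges.append((u, v))
    return pos, label, edges

pos, label, edges = sample_gbm(200, r_in=0.05, r_out=0.02, seed=1)
print(len(edges), "edges sampled")
```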

    Hardness and Algorithms for Rainbow Connectivity

    An edge-colored graph G is rainbow connected if any two vertices are connected by a path whose edges have distinct colors. The rainbow connectivity of a connected graph G, denoted rc(G), is the smallest number of colors that are needed in order to make G rainbow connected. In addition to being a natural combinatorial problem, the rainbow connectivity problem is motivated by applications in cellular networks. In this paper we give the first proof that computing rc(G) is NP-Hard. In fact, we prove that it is already NP-Complete to decide if rc(G) = 2, and also that it is NP-Complete to decide whether a given edge-colored graph (with an unbounded number of colors) is rainbow connected. On the positive side, we prove that for every \epsilon > 0, a connected graph with minimum degree at least \epsilon n has bounded rainbow connectivity, where the bound depends only on \epsilon, and the corresponding coloring can be constructed in polynomial time. Additional non-trivial upper bounds, as well as open problems and conjectures, are also presented.
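    To make the definition concrete, the following brute-force check verifies rainbow connectedness by searching, for every vertex pair, for a path whose edge colours are all distinct. Its exponential worst case is in line with the NP-completeness result stated above; the helper and the toy 4-cycle are illustrative only.

```python
from itertools import combinations

def rainbow_connected(n, colored_edges):
    """Brute-force check that every vertex pair is joined by a rainbow path.

    colored_edges maps frozenset({u, v}) -> color.  Exponential in the worst
    case, so only suitable for tiny graphs.
    """
    adj = {u: [] for u in range(n)}
    for e, c in colored_edges.items():
        u, v = tuple(e)
        adj[u].append((v, c))
        adj[v].append((u, c))

    def reach(u, target, used_colors, visited):
        if u == target:
            return True
        return any(reach(v, target, used_colors | {c}, visited | {v})
                   for v, c in adj[u]
                   if v not in visited and c not in used_colors)

    return all(reach(s, t, frozenset(), {s})
               for s, t in combinations(range(n), 2))

# A 4-cycle coloured with two alternating colours is rainbow connected (rc = 2).
edges = {frozenset({0, 1}): "red", frozenset({1, 2}): "blue",
         frozenset({2, 3}): "red", frozenset({3, 0}): "blue"}
print(rainbow_connected(4, edges))   # True
```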

    Typical solution time for a vertex-covering algorithm on finite-connectivity random graphs

    In this letter, we analytically describe the typical solution time needed by a backtracking algorithm to solve the vertex-cover problem on finite-connectivity random graphs. We find two different transitions: The first one is algorithm-dependent and marks the dynamical transition from linear to exponential solution times. The second one gives the maximum computational complexity, and is found exactly at the threshold where the system undergoes an algorithm-independent phase transition in its solvability. Analytical results are corroborated by numerical simulations. Comment: 4 pages, 2 figures, to appear in Phys. Rev. Lett.
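    For readers who want to experiment, a bare-bones backtracking solver for vertex cover can be written in a few lines. This is a generic branch-on-an-edge sketch, not the specific algorithm whose running time is analysed in the letter.

```python
def min_vertex_cover(edges):
    """Minimal backtracking vertex-cover solver: pick any uncovered edge and
    branch on which endpoint joins the cover (a generic sketch, not the
    algorithm analysed in the letter)."""
    best = [{v for e in edges for v in e}]          # trivial cover: all endpoints

    def solve(remaining, cover):
        if len(cover) >= len(best[0]):
            return                                  # prune: cannot beat the best cover
        if not remaining:
            best[0] = set(cover)
            return
        u, v = remaining[0]
        for w in (u, v):                            # branch: cover this edge via u or via v
            solve([e for e in remaining if w not in e], cover | {w})

    solve(list(edges), set())
    return best[0]

# 4-cycle plus a chord: the minimum cover is {0, 2}.
print(min_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```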

    Sampling-based Algorithms for Optimal Motion Planning

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs. Comment: 76 pages, 26 figures, to appear in the International Journal of Robotics Research.
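    For reference, the basic RRT extension step is only a few lines; the paper's RRT* additionally rewires the tree within a shrinking neighbourhood, which the sketch below deliberately omits. The step size, goal tolerance, and obstacle-free unit square are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, iters=2000, step=0.05, goal_tol=0.05, seed=0):
    """Plain RRT in the obstacle-free unit square: sample a point, extend the
    tree from its nearest node by a fixed step, stop once the goal is within
    tolerance.  Not the asymptotically optimal RRT* from the paper."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (rng.random(), rng.random())
        near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[near]
        d = math.dist(nodes[near], sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        parent[len(nodes)] = near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1            # walk back to the root
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None                                     # no path found within the budget

path = rrt((0.1, 0.1), (0.9, 0.9))
print(len(path or []), "waypoints")
```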

    Substructure Discovery Using Minimum Description Length and Background Knowledge

    The ability to identify interesting and repetitive substructures is an essential component to discovering knowledge in structural data. We describe a new version of our SUBDUE substructure discovery system based on the minimum description length principle. The SUBDUE system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. SUBDUE uses a computationally-bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness of two substructures when under computational constraints. In addition to the minimum description length principle, other background knowledge can be used by SUBDUE to guide the search towards more appropriate substructures. Experiments in a variety of domains demonstrate SUBDUE's ability to find substructures capable of compressing the original data and to discover structural concepts important to the domain. Description of Online Appendix: This is a compressed tar file containing the SUBDUE discovery system, written in C. The program accepts as input databases represented in graph form, and will output discovered substructures with their corresponding value. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
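    A toy version of the minimum-description-length trade-off that SUBDUE optimises might look like the following; the bit-count encoding and the instance count are deliberately crude stand-ins for SUBDUE's actual encoding scheme and inexact graph match.

```python
import math

def description_length(num_vertices, num_edges):
    """Crude description length: a vertex list plus each edge written as a
    pair of vertex ids.  SUBDUE's real encoding is richer (labels, adjacency
    rows); this only illustrates the MDL trade-off."""
    bits_per_id = max(1, math.ceil(math.log2(max(num_vertices, 2))))
    return num_vertices + 2 * bits_per_id * num_edges

def mdl_value(graph_v, graph_e, sub_v, sub_e, instances):
    """MDL-style score DL(G) / (DL(S) + DL(G compressed by S)); higher is
    better.  `instances` is a hypothetical count of non-overlapping
    occurrences of the substructure S in G."""
    dl_g = description_length(graph_v, graph_e)
    dl_s = description_length(sub_v, sub_e)
    # Each instance collapses to a single placeholder vertex and loses its internal edges.
    comp_v = graph_v - instances * (sub_v - 1)
    comp_e = graph_e - instances * sub_e
    return dl_g / (dl_s + description_length(comp_v, comp_e))

# A triangle (3 vertices, 3 edges) occurring 10 times in a 60-vertex, 90-edge graph.
print(round(mdl_value(60, 90, 3, 3, 10), 3))
```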

    Colouring random graphs and maximising local diversity

    We study a variation of the graph colouring problem on random graphs of finite average connectivity. Given the number of colours, we aim to maximise the number of different colours at neighbouring vertices (i.e. within one edge distance) of any vertex. Two efficient algorithms, belief propagation and WalkSAT, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. Comment: 10 pages, 10 figures.
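    A WalkSAT-flavoured local search for this objective can be sketched as follows; the satisfaction criterion, the noise parameter, and the toy 8-cycle instance are assumptions made for illustration and do not reproduce the adaptation used in the paper.

```python
import random

def local_diversity_search(adj, k, steps=20000, noise=0.2, seed=0):
    """Local search for neighbourhood colour diversity: a vertex is
    'satisfied' when its neighbourhood shows min(k, degree) distinct colours.
    Repeatedly pick an unsatisfied vertex and recolour one of its neighbours,
    either at random (with probability `noise`) or to a colour missing around
    that vertex.  A sketch only, not the paper's exact algorithm."""
    rng = random.Random(seed)
    n = len(adj)
    colour = [rng.randrange(k) for _ in range(n)]

    def unsatisfied():
        return [v for v in range(n)
                if len({colour[u] for u in adj[v]}) < min(k, len(adj[v]))]

    for _ in range(steps):
        bad = unsatisfied()
        if not bad:
            return colour, True                  # perfect solution found
        v = rng.choice(bad)
        u = rng.choice(adj[v])
        missing = set(range(k)) - {colour[w] for w in adj[v]}
        if rng.random() < noise or not missing:
            colour[u] = rng.randrange(k)         # noisy move
        else:
            colour[u] = rng.choice(sorted(missing))
    return colour, False

# Toy instance: an 8-cycle with k = 2 colours admits a perfect solution.
adj = [[7, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7], [6, 0]]
print(local_diversity_search(adj, 2))
```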

    Threshold phenomena in random graphs

    In the 1950s, random graphs appeared for the first time in a result of the prolific Hungarian mathematician Pál Erd\H{o}s. Since then, interest in random graph theory has only grown. In its early stages, the foundations of the theory were laid, and random graphs were used mainly in probability and combinatorics. However, with the new century and the boom of technologies like the World Wide Web, random graphs have become even more important, since they are extremely useful for handling problems in fields such as network and communication theory. For this reason, random graphs are now widely studied by the mathematical community around the world, and promising new results have recently been achieved, showing an exciting future for this field. In this bachelor thesis, we focus our study on threshold phenomena for graph properties in random graphs.
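    The flavour of such a sharp threshold is easy to reproduce by simulation. The classical example (not specific to this thesis) is connectivity of the Erd\H{o}s-R\'{en}yi graph G(n, p) around p = ln(n)/n, sketched below with an illustrative choice of n and trial count.

```python
import math
import random

def is_connected(n, p, rng):
    """Sample G(n, p) and test connectivity with a small union-find structure."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    components = n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    components -= 1
    return components == 1

# Slightly below p = ln(n)/n almost no sample is connected; slightly above, almost all are.
n, trials, rng = 200, 50, random.Random(0)
for c in (0.7, 1.0, 1.3):
    p = c * math.log(n) / n
    hits = sum(is_connected(n, p, rng) for _ in range(trials))
    print(f"p = {c:.1f} * ln(n)/n  ->  connected in {hits}/{trials} trials")
```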

    Improved Parallel Algorithms for Spanners and Hopsets

    We use exponential start time clustering to design faster and more work-efficient parallel graph algorithms involving distances. Previous algorithms usually rely on graph decomposition routines with strict restrictions on the diameters of the decomposed pieces. We weaken these bounds in favor of stronger local probabilistic guarantees. This allows more direct analyses of the overall process, giving: * Linear work parallel algorithms that construct spanners with O(k) stretch and size O(n^{1+1/k}) in unweighted graphs, and size O(n^{1+1/k} \log k) in weighted graphs. * Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with O(m\,\mathrm{polylog}\,n) work.
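    For contrast with the parallel construction above, the classical sequential greedy (2k-1)-spanner, which attains size O(n^{1+1/k}), fits in a short function; it is not the exponential start time clustering algorithm of the paper, only a baseline illustrating the stretch/size trade-off.

```python
import heapq

def greedy_spanner(weighted_edges, k):
    """Classical greedy (2k-1)-spanner: scan edges by increasing weight and
    keep (u, v, w) only if the spanner built so far has no u-v path of length
    at most (2k-1)*w.  A sequential baseline, not the paper's parallel algorithm."""
    def dist(adj, s, t, cutoff):
        # Dijkstra from s, abandoning partial paths longer than cutoff.
        dd, heap = {s: 0.0}, [(0.0, s)]
        while heap:
            d, x = heapq.heappop(heap)
            if x == t:
                return d
            if d > dd.get(x, float("inf")):
                continue
            for y, w in adj.get(x, []):
                nd = d + w
                if nd < dd.get(y, float("inf")) and nd <= cutoff:
                    dd[y] = nd
                    heapq.heappush(heap, (nd, y))
        return float("inf")

    adj, spanner = {}, []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        if dist(adj, u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
            spanner.append((u, v, w))
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return spanner

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.4)]
print(greedy_spanner(edges, k=2))   # edges already covered within stretch 3 are dropped
```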