
    Constructions of Large Graphs on Surfaces

    We consider the degree/diameter problem for graphs embedded in a surface, namely, given a surface $\Sigma$ and integers $\Delta$ and $k$, determine the maximum order $N(\Delta,k,\Sigma)$ of a graph embeddable in $\Sigma$ with maximum degree $\Delta$ and diameter $k$. We introduce a number of constructions which produce many new largest known planar and toroidal graphs. We record all these graphs in the available tables of largest known graphs. Given a surface $\Sigma$ of Euler genus $g$ and an odd diameter $k$, the current best asymptotic lower bound for $N(\Delta,k,\Sigma)$ is given by $\sqrt{\frac{3}{8}g}\,\Delta^{\lfloor k/2\rfloor}$. Our constructions produce new graphs of order $$\begin{cases}6\Delta^{\lfloor k/2\rfloor}& \text{if $\Sigma$ is the Klein bottle,}\\ \left(\frac{7}{2}+\sqrt{6g+\frac{1}{4}}\right)\Delta^{\lfloor k/2\rfloor}& \text{otherwise,}\end{cases}$$ thus improving the former value by a factor of 4.
    Comment: 15 pages, 7 figures
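    As a rough consistency check of the stated factor-of-4 improvement (a worked step added here, not taken from the paper): for large Euler genus $g$ the constant terms become negligible, and the ratio of the new order to the previous lower bound tends to

```latex
\[
\frac{\frac{7}{2}+\sqrt{6g+\frac{1}{4}}}{\sqrt{\frac{3}{8}g}}
\;\xrightarrow{\;g\to\infty\;}\;
\frac{\sqrt{6g}}{\sqrt{\frac{3}{8}g}}
=\sqrt{\frac{6}{3/8}}
=\sqrt{16}
=4.
\]
```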

    Space-Efficient Routing Tables for Almost All Networks and the Incompressibility Method

    We use the incompressibility method based on Kolmogorov complexity to determine the total number of bits of routing information for almost all network topologies. In most models for routing, for almost all labeled graphs $\Theta(n^2)$ bits are necessary and sufficient for shortest path routing. By "almost all graphs" we mean the Kolmogorov random graphs, which constitute a fraction $1-1/n^c$ of all graphs on $n$ nodes, where $c>0$ is an arbitrary fixed constant. There is a model for which the average case lower bound rises to $\Omega(n^2 \log n)$ and another model where the average case upper bound drops to $O(n \log^2 n)$. This clearly exposes the sensitivity of such bounds to the model under consideration. If paths have to be short, but need not be shortest (if the stretch factor may be larger than 1), then much less space is needed on average, even in the more demanding models. Full-information routing requires $\Theta(n^3)$ bits on average. For worst-case static networks we prove an $\Omega(n^2 \log n)$ lower bound for shortest path routing and all stretch factors $<2$ in some networks where free relabeling is not allowed.
    Comment: 19 pages, LaTeX, 1 table, 1 figure; SIAM J. Comput., to appear
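    To make the space accounting concrete, the following Python sketch (an illustration under assumptions, not the paper's incompressibility argument) builds naive next-hop routing tables by BFS on a random graph and counts their size; the naive encoding costs roughly $n^2 \log n$ bits, whereas the paper shows that for Kolmogorov-random graphs $\Theta(n^2)$ bits are necessary and sufficient for shortest path routing.

```python
# Illustrative sketch only: naive next-hop routing tables for shortest-path
# routing, and their total size in bits. Each node stores a next hop for each
# of the n destinations, i.e. about n * log2(n) bits per node.
import math
import random
from collections import deque

def bfs_next_hops(adj, src):
    """Return next_hop[dst] on some shortest path from src to dst."""
    n = len(adj)
    parent = [-1] * n
    dist = [-1] * n
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                parent[v] = u
                queue.append(v)
    next_hop = {}
    for dst in range(n):
        if dst == src or dist[dst] == -1:
            continue
        v = dst
        while parent[v] != src:   # walk back to the neighbour of src
            v = parent[v]
        next_hop[dst] = v
    return next_hop

def naive_table_bits(n, p=0.5, seed=0):
    """Total bits used by naive next-hop tables on a random G(n, p) graph."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    bits_per_entry = math.ceil(math.log2(n))
    return sum(len(bfs_next_hops(adj, s)) * bits_per_entry for s in range(n))

if __name__ == "__main__":
    n = 128
    print(f"naive tables: {naive_table_bits(n)} bits "
          f"(compare n^2 log n = {n * n * math.ceil(math.log2(n))})")
```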

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work that does target the Hyperlink graph uses distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
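    For readers unfamiliar with the level-synchronous, frontier-based pattern that shared-memory graph frameworks of this kind are built around, the Python sketch below shows the idea on BFS. It is a generic illustration only: GBBS itself is a C++ library and its actual interface is not reproduced here, and CPython threads will not give a real speedup; the point is the frontier/expand/merge structure.

```python
# Illustrative frontier-based (level-synchronous) BFS, showing the pattern
# used by shared-memory graph frameworks; not the GBBS API.
from concurrent.futures import ThreadPoolExecutor

def parallel_bfs(adj, src, workers=4):
    """Level-synchronous BFS: expand each frontier's vertices in parallel."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier = [src]
    level = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            # Each task returns the still-unvisited neighbours of one vertex.
            chunks = pool.map(lambda u: [v for v in adj[u] if dist[v] == -1],
                              frontier)
            level += 1
            next_frontier = []
            for chunk in chunks:
                for v in chunk:
                    if dist[v] == -1:   # drop duplicates across chunks
                        dist[v] = level
                        next_frontier.append(v)
            frontier = next_frontier
    return dist

if __name__ == "__main__":
    # Small example graph: a path 0-1-2-3 plus an edge 1-3.
    adj = [[1], [0, 2, 3], [1, 3], [1, 2]]
    print(parallel_bfs(adj, 0))   # [0, 1, 2, 2]
```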

    Highly intensive data dissemination in complex networks

    This paper presents a study on data dissemination in unstructured Peer-to-Peer (P2P) network overlays. The absence of structure in unstructured overlays eases network management, at the cost of non-optimal mechanisms for spreading messages through the network. Thus, dissemination schemes must be employed that cover a large portion of the network with high probability (e.g., gossip-based approaches). We identify the principal metrics, provide a theoretical model and perform the evaluation using a high-performance simulator based on a parallel and distributed architecture. A main point of this study is that our simulation model considers implementation details, such as the use of caching and Time To Live (TTL) in message dissemination, that are usually neglected in simulations because of the additional overhead they cause. Outcomes confirm that these technical details have an important influence on the performance of dissemination schemes and that the studied schemes are quite effective at spreading information in P2P overlay networks, whatever their topology. Moreover, the practical usage of such dissemination mechanisms requires fine tuning of many parameters, the choice between different network topologies and the assessment of behaviors such as free riding. All this can be done only with efficient simulation tools that support both the network design phase and, in some cases, runtime operation.
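    A minimal Python sketch of the two mechanisms the abstract highlights, TTL and per-node caching of already-seen messages, in a push-gossip round (illustrative assumptions only: this is not the paper's simulator, and the overlay, fanout and TTL values are arbitrary):

```python
# Push-gossip dissemination of one message over an unstructured overlay,
# with a TTL per message copy and a per-node "already seen" cache.
import random

def gossip_coverage(adj, source, fanout=3, ttl=8, seed=0):
    """Spread one message by gossip; return the fraction of nodes reached."""
    rng = random.Random(seed)
    seen = {source}                       # cache: nodes that already hold the message
    frontier = [(source, ttl)]
    while frontier:
        next_frontier = []
        for node, hops_left in frontier:
            if hops_left == 0:            # TTL expired: stop forwarding this copy
                continue
            neighbours = adj[node]
            targets = rng.sample(neighbours, min(fanout, len(neighbours)))
            for t in targets:
                if t not in seen:         # cache hit -> message is not re-forwarded
                    seen.add(t)
                    next_frontier.append((t, hops_left - 1))
        frontier = next_frontier
    return len(seen) / len(adj)

if __name__ == "__main__":
    # Random 1000-node overlay where each node links to up to 8 random peers.
    n = 1000
    rng = random.Random(1)
    adj = [list({rng.randrange(n) for _ in range(8)} - {u}) for u in range(n)]
    print(f"coverage: {gossip_coverage(adj, source=0):.2%}")
```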