    Search in Power-Law Networks

    Many communication and social networks have power-law link distributions, containing a few nodes which have a very high degree and many with low degree. The high-connectivity nodes play the important role of hubs in communication and networking, a fact which can be exploited when designing efficient search algorithms. We introduce a number of local search strategies which utilize high-degree nodes in power-law graphs and whose costs scale sub-linearly with the size of the graph. We also demonstrate the utility of these strategies on the Gnutella peer-to-peer network. Comment: 17 pages, 14 figures
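
    As an illustration of the kind of strategy the abstract describes, here is a hedged sketch (not the authors' exact algorithm) of a high-degree-seeking walk on a synthetic scale-free graph; the graph generator, node labels, and step budget are assumptions introduced here.

        # Sketch: greedy high-degree-seeking local search on a power-law graph.
        import networkx as nx

        def high_degree_search(G, start, target, max_steps=10_000):
            """Hop to the highest-degree unvisited neighbor until the target is seen."""
            visited = {start}
            current, steps = start, 0
            while current != target and steps < max_steps:
                nbrs = list(G.neighbors(current))
                if target in nbrs:                         # query satisfied by a neighbor
                    return steps + 1
                unvisited = [v for v in nbrs if v not in visited] or nbrs
                current = max(unvisited, key=G.degree)     # follow the highest-degree node
                visited.add(current)
                steps += 1
            return steps

        G = nx.barabasi_albert_graph(10_000, 2, seed=0)    # power-law degree distribution
        print(high_degree_search(G, start=0, target=9_999))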

    Optimization of transport protocols with path-length constraints in complex networks

    We propose a protocol optimization technique that is applicable to both weighted and unweighted graphs. Our aim is to explore by how much a small variation around the Shortest Path or Optimal Path protocols can enhance protocol performance. Such an optimization strategy can be necessary because even though some protocols can achieve very high traffic tolerance levels, this is commonly done by enlarging the path lengths, which may jeopardize scalability. We use ideas borrowed from Extremal Optimization to guide our algorithm, which proves to be an effective technique. Our method exploits the degeneracy of the paths or their close-weight alternatives, which significantly improves the scalability of the protocols in comparison to the Shortest Path or Optimal Path protocols, while keeping the length or weight of the paths almost intact. This characteristic ensures that the optimized routing protocols are composed of paths that are quick to traverse, avoiding negative effects on data communication due to path-length increases, which can become especially relevant when information losses are present. Comment: 8 pages, 8 figures
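
    A minimal sketch of the path-degeneracy idea (an illustrative toy load model, not the authors' Extremal Optimization scheme; the graph, load counter, and routing loop are assumptions made here): among equal-length shortest paths, route each pair through the alternative whose most-loaded node carries the least traffic so far.

        import networkx as nx
        from collections import Counter
        from itertools import combinations

        def degeneracy_aware_routing(G):
            load, routes = Counter(), {}
            for s, t in combinations(G.nodes, 2):
                options = list(nx.all_shortest_paths(G, s, t))             # equal-length alternatives
                best = min(options, key=lambda p: max(load[v] for v in p))
                for v in best[1:-1]:
                    load[v] += 1                                           # count pass-through traffic
                routes[(s, t)] = best
            return routes, load

        G = nx.barabasi_albert_graph(60, 2, seed=1)                        # small connected test graph
        routes, load = degeneracy_aware_routing(G)
        print(max(load.values()))                                          # peak node load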

    Local Search in Unstructured Networks

    We review a number of message-passing algorithms that can be used to search through power-law networks. Most of these algorithms are meant to be improvements for peer-to-peer file sharing systems, and some may also shed some light on how unstructured social networks with certain topologies might function relatively efficiently with local information. Like the networks that they are designed for, these algorithms are completely decentralized, and they exploit the power-law link distribution in the node degree. We demonstrate that some of these search algorithms can work well on real Gnutella networks, scale sub-linearly with the number of nodes, and may help reduce the network search traffic that tends to cripple such networks. Comment: v2 includes minor revisions: corrections to Fig. 8's caption and references. 23 pages, 10 figures, a review of local search strategies in unstructured networks, a contribution to `Handbook of Graphs and Networks: From the Genome to the Internet', eds. S. Bornholdt and H.G. Schuster (Wiley-VCH, Berlin, 2002), to be published
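
    To make the degree-exploiting idea concrete, here is a hedged comparison sketch (illustrative only, not the review's experiments): a plain random walk versus a degree-biased walk searching for a randomly placed item on a synthetic scale-free graph; all parameters below are assumptions.

        # Compare hop counts of an unbiased random walk and a hub-biased walk.
        import random
        import networkx as nx

        def walk_until_found(G, start, holder, biased, max_steps=50_000):
            current, steps = start, 0
            while current != holder and steps < max_steps:
                nbrs = list(G.neighbors(current))
                if biased:
                    current = random.choices(nbrs, weights=[G.degree(v) for v in nbrs])[0]
                else:
                    current = random.choice(nbrs)
                steps += 1
            return steps

        G = nx.barabasi_albert_graph(5_000, 2, seed=0)
        start, holder = 0, random.randrange(G.number_of_nodes())
        print(walk_until_found(G, start, holder, biased=False),
              walk_until_found(G, start, holder, biased=True))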

    Transforming fixed-length self-avoiding walks into radial SLE_8/3

    We conjecture a relationship between the scaling limit of the fixed-length ensemble of self-avoiding walks in the upper half plane and radial SLE with kappa=8/3 in this half plane from 0 to i. The relationship is that if we take a curve from the fixed-length scaling limit of the SAW, weight it by a suitable power of the distance to the endpoint of the curve and then apply the conformal map of the half plane that takes the endpoint to i, then we get the same probability measure on curves as radial SLE. In addition to a non-rigorous derivation of this conjecture, we support it with Monte Carlo simulations of the SAW. Using the conjectured relationship between the SAW and radial SLE, our simulations give estimates for both the interior and boundary scaling exponents. The values we obtain are within a few hundredths of a percent of the conjectured values.
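
    One schematic way to write the conjectured correspondence (the exponent b, the normalization Z, the map notation, and the reading of |z(gamma)| as the distance from the origin to the endpoint are all placeholders introduced here, not symbols or values taken from the paper):

        % Reweight the fixed-length SAW scaling limit by a power of the endpoint
        % distance, then map the endpoint z(gamma) to i.
        d\mu_{\mathrm{SLE}_{8/3}}\bigl(\varphi_\gamma \circ \gamma\bigr)
            \;=\; \frac{1}{Z}\,\bigl|z(\gamma)\bigr|^{\,b}\, d\mu_{\mathrm{SAW}}(\gamma),
        \qquad \varphi_\gamma : \mathbb{H} \to \mathbb{H}, \quad \varphi_\gamma\bigl(z(\gamma)\bigr) = i .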

    Critical Exponents, Hyperscaling and Universal Amplitude Ratios for Two- and Three-Dimensional Self-Avoiding Walks

    We make a high-precision Monte Carlo study of two- and three-dimensional self-avoiding walks (SAWs) of length up to 80000 steps, using the pivot algorithm and the Karp-Luby algorithm. We study the critical exponents $\nu$ and $2\Delta_4 - \gamma$ as well as several universal amplitude ratios; in particular, we make an extremely sensitive test of the hyperscaling relation $d\nu = 2\Delta_4 - \gamma$. In two dimensions, we confirm the predicted exponent $\nu = 3/4$ and the hyperscaling relation; we estimate the universal ratios $\langle R_g^2 \rangle / \langle R_e^2 \rangle = 0.14026 \pm 0.00007$, $\langle R_m^2 \rangle / \langle R_e^2 \rangle = 0.43961 \pm 0.00034$ and $\Psi^* = 0.66296 \pm 0.00043$ (68% confidence limits). In three dimensions, we estimate $\nu = 0.5877 \pm 0.0006$ with a correction-to-scaling exponent $\Delta_1 = 0.56 \pm 0.03$ (subjective 68% confidence limits). This value for $\nu$ agrees excellently with the field-theoretic renormalization-group prediction, but there is some discrepancy for $\Delta_1$. Earlier Monte Carlo estimates of $\nu$, which were $\approx 0.592$, are now seen to be biased by corrections to scaling. We estimate the universal ratios $\langle R_g^2 \rangle / \langle R_e^2 \rangle = 0.1599 \pm 0.0002$ and $\Psi^* = 0.2471 \pm 0.0003$; since $\Psi^* > 0$, hyperscaling holds. The approach to $\Psi^*$ is from above, contrary to the prediction of the two-parameter renormalization-group theory. We critically reexamine this theory, and explain where the error lies. Comment: 87 pages including 12 figures, 1029558 bytes Postscript (NYU-TH-94/09/01)
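
    As a quick worked check of the hyperscaling relation using the exactly known two-dimensional exponents (the value $\gamma = 43/32$ is quoted here as a standard outside result, not taken from the abstract):

        % d = 2, \nu = 3/4  =>  d\nu = 3/2, so hyperscaling fixes 2\Delta_4 - \gamma:
        d\nu = 2 \cdot \tfrac{3}{4} = \tfrac{3}{2}
        \quad\Longrightarrow\quad
        2\Delta_4 - \gamma = \tfrac{3}{2}
        \quad\Longrightarrow\quad
        \Delta_4 = \tfrac{1}{2}\!\left(\tfrac{3}{2} + \tfrac{43}{32}\right) = \tfrac{91}{64} \approx 1.42 .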

    A parallel butterfly algorithm

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform \int K(x,y) g(y) dy at large numbers of target points when the kernel, K(x,y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of \alpha and per-process inverse bandwidth of \beta, executes in at most O(r^2 N^d/p log N + \beta r N^d/p + \alpha log p) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x,y)=exp(i \Phi(x,y)), where \Phi(x,y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a 3D generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. Comment: To appear in SIAM Journal on Scientific Computing
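
    The stated complexity invites a back-of-the-envelope scaling estimate. The sketch below evaluates the quoted parallel cost model with placeholder constants (the numeric values of r, N, d, alpha, and beta, and the choice of cost units, are assumptions for illustration, not the paper's measured parameters):

        import math

        def butterfly_time(N, d, r, p, alpha, beta):
            # Costs are expressed in arbitrary per-element work units.
            work = r**2 * (N**d / p) * math.log2(N)    # local low-rank interpolation work
            comm = beta * r * (N**d / p)               # bandwidth (message volume) term
            sync = alpha * math.log2(p)                # latency term, one per butterfly stage
            return work + comm + sync

        base_p = 16
        base = butterfly_time(N=1024, d=2, r=8, p=base_p, alpha=1e5, beta=4.0)
        for p in (16, 256, 4096, 16_384):
            t = butterfly_time(N=1024, d=2, r=8, p=p, alpha=1e5, beta=4.0)
            print(p, round(base * base_p / (t * p), 3))    # strong-scaling efficiency vs. p = 16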

    Sparse Allreduce: Efficient Scalable Communication for Power-Law Data

    Many large datasets exhibit power-law statistics: the web graph, social networks, text data, click-through data, etc. Their adjacency graphs are termed natural graphs, and are known to be difficult to partition. As a consequence, most distributed algorithms on these graphs are communication intensive. Many algorithms on natural graphs involve an Allreduce: a sum or average of partitioned data which is then shared back to the cluster nodes. Examples include PageRank, spectral partitioning, and many machine learning algorithms including regression, factor (topic) models, and clustering. In this paper we describe an efficient and scalable Allreduce primitive for power-law data. We point out scaling problems with existing butterfly and round-robin networks for Sparse Allreduce, and show that a hybrid approach improves on both. Furthermore, we show that Sparse Allreduce stages should be nested instead of cascaded (as in the dense case), and that the optimum-throughput Allreduce network should be a butterfly of heterogeneous degree, where degree decreases with depth into the network. Finally, a simple replication scheme is introduced to deal with node failures. We present experiments showing significant improvements over existing systems such as PowerGraph and Hadoop.
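
    The butterfly structure can be illustrated with a minimal single-process simulation (a sketch of the plain recursive-doubling exchange pattern only; it does not model the paper's heterogeneous-degree optimization, nesting scheme, or PowerGraph/Hadoop comparisons):

        def merge(a, b):
            """Sum two sparse vectors stored as {index: value} dicts."""
            out = dict(a)
            for k, v in b.items():
                out[k] = out.get(k, 0.0) + v
            return out

        def sparse_allreduce(vectors):
            """Butterfly exchange over p = 2^k nodes; every node ends with the full sum."""
            p = len(vectors)
            assert p & (p - 1) == 0, "the plain butterfly assumes a power-of-two node count"
            vecs, stride = list(vectors), 1
            while stride < p:
                vecs = [merge(vecs[i], vecs[i ^ stride]) for i in range(p)]  # pairwise stage
                stride *= 2
            return vecs

        parts = [{0: 1.0, 7: 2.0}, {0: 3.0}, {5: 1.0}, {7: -2.0}]
        print(sparse_allreduce(parts)[0])   # every node holds {0: 4.0, 7: 0.0, 5: 1.0}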