
    Supremum-Norm Convergence for Step-Asynchronous Successive Overrelaxation on M-matrices

    Step-asynchronous successive overrelaxation updates the values contained in a single vector using the usual Gauss-Seidel-like weighted rule, but arbitrarily mixing old and new values, the only constraint being temporal coherence: you cannot use a value before it has been computed. We show that given a nonnegative real matrix $A$, a $\sigma \geq \rho(A)$, and a vector $\boldsymbol w > 0$ such that $A\boldsymbol w \leq \sigma\boldsymbol w$, every iteration of step-asynchronous successive overrelaxation for the problem $(sI - A)\boldsymbol x = \boldsymbol b$, with $s > \sigma$, reduces the $\boldsymbol w$-norm of the current error geometrically, by a factor that we can compute explicitly. Then, we show that given a $\sigma > \rho(A)$ it is in principle always possible to compute such a $\boldsymbol w$. This property makes it possible to estimate the supremum norm of the absolute error at each iteration without any additional hypothesis on $A$, even when $A$ is so large that computing the product $A\boldsymbol x$ is feasible but estimating the supremum norm of $(sI - A)^{-1}$ is not.
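    The update rule is easy to sketch. The following is a minimal serial illustration, not the paper's implementation: plain in-place SOR sweeps are the simplest instance of step-asynchronous mixing of old and new values, and the function names, the choice $\boldsymbol w = \boldsymbol 1$, and taking $\sigma$ as the maximum row sum are assumptions made for the toy example.

```python
import numpy as np

def w_norm(v, w):
    # Weighted supremum norm: max_i |v_i| / w_i.
    return np.max(np.abs(v) / w)

def sor_step_async(A, b, s, w, omega=1.0, tol=1e-10, max_sweeps=1000):
    """SOR for (sI - A)x = b with in-place updates.

    Updating x in place means each step mixes old and new entries,
    the serial special case of step-asynchronous iteration.  Assumes
    A >= 0 elementwise and s > sigma >= rho(A) with A @ w <= sigma * w,
    so each sweep contracts the error in the w-norm.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        x_prev = x.copy()
        for i in range(n):
            # x[j] is already updated for j < i (temporal coherence).
            off_diag = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] + off_diag) / (s - A[i, i])
        if w_norm(x - x_prev, w) < tol:
            break
    return x

# Toy example: with w = ones, A @ w <= sigma * w holds for sigma = max row sum.
rng = np.random.default_rng(0)
A = rng.random((50, 50))
sigma = A.sum(axis=1).max()
w = np.ones(50)
b = rng.random(50)
x = sor_step_async(A, b, s=sigma + 1.0, w=w)
print(w_norm((sigma + 1.0) * x - A @ x - b, w))  # residual in the w-norm
```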

    Robustness of large-scale stochastic matrices to localized perturbations

    Upper bounds are derived on the total variation distance between the invariant distributions of two stochastic matrices differing on a subset W of rows. Such bounds depend on three parameters: the mixing time and the minimal expected hitting time on W for the Markov chain associated to one of the matrices; and the escape time from W for the Markov chain associated to the other matrix. These results, obtained through coupling techniques, prove particularly useful in scenarios where W is a small subset of the state space, even if the difference between the two matrices is not small in any norm. Several applications to large-scale network problems are discussed, including robustness of Google's PageRank algorithm, distributed averaging and consensus algorithms, and interacting particle systems.
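    The bounds themselves are derived analytically, but the phenomenon they capture is easy to reproduce numerically. The following is a small illustration under assumed names (stationary, tv_distance): a few rows of a random stochastic matrix are replaced arbitrarily, so the perturbation is large in norm, yet the invariant distributions stay close in total variation.

```python
import numpy as np

def stationary(P):
    # Invariant distribution: left Perron eigenvector of P, normalized.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def tv_distance(p, q):
    # Total variation distance between two probability vectors.
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(1)
n, W = 200, [0, 1, 2]                      # W: small subset of perturbed rows
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic
Q = P.copy()
for i in W:                                # rows in W replaced arbitrarily,
    row = rng.random(n)                    # so ||P - Q|| is not small
    Q[i] = row / row.sum()
print(tv_distance(stationary(P), stationary(Q)))  # still close to 0
```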

    Exploiting Web Matrix Permutations to Speedup PageRank Computation

    Recently, the research community has devoted increased attention to reducing the computational time needed by Web ranking algorithms. In particular, we have seen many proposals to speed up the well-known PageRank algorithm used by Google. This interest is motivated by two dominant factors: (1) the Web Graph has huge dimensions and is subject to dramatic updates in terms of nodes and links, so PageRank assignments tend to become obsolete very soon; (2) many PageRank vectors need to be computed, according to the different personalization vectors chosen. In the present paper, we address this problem from a numerical point of view. First, we show how to treat dangling nodes in a way which naturally adapts to the random surfer model and preserves the sparsity of the Web Graph. This result allows us to treat the PageRank computation as a sparse linear system, as an alternative to the commonly adopted eigenpair interpretation. Second, we exploit the reducibility of the Web matrix and suitably compose Web matrix permutations to speed up the PageRank computation. We tested our approaches on Web Graphs crawled from the net. The largest one accounts for about 24 million nodes and more than 100 million links. On this Web Graph, the cost of computing PageRank is reduced by 58% in terms of Mflops and by 89% in terms of time with respect to the commonly used Power method.
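    The sparse-linear-system view of PageRank is worth a sketch. The following is not the paper's implementation; it is a minimal illustration in which dangling nodes are kept as zero rows (preserving sparsity) and the solution of the sparse system is renormalized, which recovers the same vector as patching dangling rows with the personalization vector. The function name and the uniform personalization vector are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def pagerank_linear(adj, alpha=0.85):
    """PageRank as a sparse linear system instead of the power method.

    adj[i, j] != 0 means a link i -> j.  Dangling nodes are left as
    zero rows; their rank mass only rescales the solution, so solving
    with the sub-stochastic matrix and renormalizing yields PageRank.
    """
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    inv_deg = np.divide(1.0, out_deg, out=np.zeros(n), where=out_deg > 0)
    P = sp.diags(inv_deg) @ adj            # row-sub-stochastic link matrix
    v = np.full(n, 1.0 / n)                # uniform personalization vector
    y = spsolve(sp.eye(n, format="csc") - alpha * P.T, v)
    return y / y.sum()                     # renormalize to a probability vector

# Tiny example: node 2 is dangling (no out-links).
adj = sp.csr_matrix(np.array([[0, 1, 1],
                              [1, 0, 1],
                              [0, 0, 0]], dtype=float))
print(pagerank_linear(adj))
```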

    Tensor Spectral Clustering for Partitioning Higher-order Network Structures

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph, and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
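    The multilinear spectral method itself is beyond a short sketch, but the objective in the last sentence is easy to make concrete. The following is an assumed illustration (the function names are made up) of enumerating the directed 3-cycles of a small graph, which are exactly the nonzeros of the third-order tensor mentioned above, and of counting how many of them a given partition cuts.

```python
import numpy as np

def directed_3cycles(adj):
    """Enumerate directed 3-cycles i -> j -> k -> i in a small dense graph.

    These index triples are the nonzeros of the third-order tensor
    encoding the 3-cycle structure.  Requiring i to be the smallest
    index counts each cycle once (its rotations are skipped).
    """
    n = adj.shape[0]
    return [(i, j, k)
            for i in range(n)
            for j in range(i + 1, n)
            for k in range(i + 1, n)
            if j != k and adj[i, j] and adj[j, k] and adj[k, i]]

def cut_3cycles(cycles, labels):
    # A 3-cycle is cut when its three nodes do not share one cluster label.
    return sum(labels[i] != labels[j] or labels[j] != labels[k]
               for i, j, k in cycles)

# Tiny example: one directed 3-cycle 0 -> 1 -> 2 -> 0, plus extra edges.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0]])
cycles = directed_3cycles(adj)                     # [(0, 1, 2)]
print(cut_3cycles(cycles, labels=[0, 0, 0, 1]))    # 0: cycle kept together
print(cut_3cycles(cycles, labels=[0, 1, 0, 1]))    # 1: cycle cut
```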