
    A Novel Approach to Finding Near-Cliques: The Triangle-Densest Subgraph Problem

    Many graph mining applications rely on detecting subgraphs which are near-cliques. There exists a dichotomy between the results in the existing work related to this problem: on the one hand, the densest subgraph problem (DSP), which maximizes the average degree over all subgraphs, is solvable in polynomial time but for many networks fails to find subgraphs which are near-cliques. On the other hand, formulations that are geared towards finding near-cliques are NP-hard and frequently inapproximable due to connections with the Maximum Clique problem. In this work, we propose a formulation which combines the best of both worlds: it is solvable in polynomial time and finds near-cliques when the DSP fails. Surprisingly, our formulation is a simple variation of the DSP. Specifically, we define the triangle densest subgraph problem (TDSP): given $G(V,E)$, find a subset of vertices $S^*$ such that $\tau(S^*)=\max_{S \subseteq V} \frac{t(S)}{|S|}$, where $t(S)$ is the number of triangles induced by the set $S$. We provide various exact and approximation algorithms which solve the TDSP efficiently. Furthermore, we show how our algorithms adapt to the more general problem of maximizing the $k$-clique average density. Finally, we provide empirical evidence that the TDSP should be used whenever the DSP fails to output a near-clique. Comment: 42 page
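
    As a rough illustration of the objective (a sketch in the spirit of the approximation algorithms mentioned, not the paper's exact method), the Python snippet below computes the triangle density $\tau(S)=t(S)/|S|$ of a vertex subset and runs a simple greedy peeling heuristic that repeatedly removes the vertex contained in the fewest induced triangles, keeping the best intermediate subset seen. The function names and the adjacency-set input format are illustrative choices, not part of the paper.

    from itertools import combinations

    def induced_triangles(adj, S):
        # Count triangles induced by vertex set S; adj maps vertex -> set of neighbors.
        S = set(S)
        count = 0
        for u in S:
            for v, w in combinations(adj[u] & S, 2):
                if w in adj[v]:
                    count += 1
        return count // 3  # each triangle is seen once per vertex

    def triangle_density(adj, S):
        # tau(S) = t(S) / |S|, the TDSP objective.
        return induced_triangles(adj, S) / len(S) if S else 0.0

    def triangles_at(adj, S, x):
        # Number of induced triangles in S that contain vertex x.
        nbrs = adj[x] & S
        return sum(1 for v, w in combinations(nbrs, 2) if w in adj[v])

    def peel_triangle_densest(adj):
        # Greedy peeling: drop the vertex in the fewest triangles, keep the best subset seen.
        S = set(adj)
        best, best_tau = set(S), triangle_density(adj, S)
        while len(S) > 2:
            u = min(S, key=lambda x: triangles_at(adj, S, x))
            S.remove(u)
            tau = triangle_density(adj, S)
            if tau > best_tau:
                best, best_tau = set(S), tau
        return best, best_tau

    # Example: a 4-clique {1, 2, 3, 4} plus a pendant vertex 5.
    adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
    print(peel_triangle_densest(adj))  # -> ({1, 2, 3, 4}, 1.0)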

    Efficient External-Memory Algorithms for Graph Mining

    The explosion of big data in areas like the web and social networks has posed big challenges to research activities, including data mining, information retrieval, and security. This dissertation focuses on a particular area, graph mining, and specifically proposes several novel algorithms to solve the problems of triangle listing and computation of the neighborhood function in large-scale graphs. We first study the classic problem of triangle listing. We generalize the existing in-memory algorithms into a single framework of 18 triangle-search techniques. We then develop a novel external-memory approach, which we call Pruned Companion Files (PCF), that supports disk operation of all 18 algorithms. When compared to the state-of-the-art available implementations MGT and PDTL, PCF runs 5-10 times faster and exhibits orders of magnitude less I/O. We next focus on the I/O complexity of triangle listing. Recent work by Pagh et al. provides an appealing theoretical I/O complexity for triangle listing via graph partitioning by random coloring of nodes. Since no implementation of Pagh is available and little is known about how Pagh compares to PCF, we carefully implement Pagh, investigate the properties of both algorithms, model their I/O cost, understand their shortcomings, and shed light on the conditions under which each method outperforms the other. This insight leads us to develop a novel framework we call Trigon that surpasses the I/O performance of both techniques in all graphs and under all RAM conditions. We finally turn our attention to the neighborhood function. Exact computation of the neighborhood function is expensive in terms of CPU and I/O cost; previous work mostly focuses on approximations. We show that our novel techniques developed for triangle listing can also be applied to this problem. We next study an application of the neighborhood function to ranking of Internet hosts. Our method computes the neighborhood function for each host as an indication of its reputation. The evaluation shows that our method is robust to ranking manipulation and admits less spam into its top-ranked list than PageRank and TrustRank.
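
    For readers unfamiliar with the triangle listing primitive that the dissertation optimizes, the following Python sketch shows one classic in-memory strategy of the kind such frameworks generalize: rank vertices (here by degree), orient every edge toward its higher-ranked endpoint, and intersect out-neighborhoods so that each triangle is reported exactly once. This is only a toy illustration and deliberately ignores the external-memory machinery (PCF, Trigon) that is the dissertation's actual contribution.

    def list_triangles(edges):
        # Build undirected adjacency sets.
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

        # Rank vertices by (degree, id); keep only edges toward higher-ranked neighbors.
        rank = {v: r for r, v in enumerate(sorted(adj, key=lambda x: (len(adj[x]), x)))}
        out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}

        # A triangle (u, v, w) with rank[u] < rank[v] < rank[w] is found exactly once,
        # at its lowest-ranked vertex u, by intersecting out[u] and out[v].
        triangles = []
        for u in adj:
            for v in out[u]:
                for w in out[u] & out[v]:
                    triangles.append((u, v, w))
        return triangles

    print(list_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))  # -> [(1, 2, 3)]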

    Dynamic Set Intersection

    Consider the problem of maintaining a family $F$ of dynamic sets subject to insertions, deletions, and set-intersection reporting queries: given $S, S' \in F$, report every member of $S \cap S'$ in any order. We show that in the word RAM model, where $w$ is the word size, given a cap $d$ on the maximum size of any set, we can support set intersection queries in $O(\frac{d}{w/\log^2 w})$ expected time, and updates in $O(\log w)$ expected time. Using this algorithm we can list all $t$ triangles of a graph $G=(V,E)$ in $O(m+\frac{m\alpha}{w/\log^2 w}+t)$ expected time, where $m=|E|$ and $\alpha$ is the arboricity of $G$. This improves a 30-year-old triangle enumeration algorithm of Chiba and Nishizeki running in $O(m\alpha)$ time. We provide an incremental data structure on $F$ that supports intersection witness queries, where we only need to find one $e \in S \cap S'$. Both queries and insertions take $O\left(\sqrt{\frac{N}{w/\log^2 w}}\right)$ expected time, where $N=\sum_{S\in F}|S|$. Finally, we provide time/space tradeoffs for the fully dynamic set intersection reporting problem. Using $M$ words of space, each update costs $O(\sqrt{M \log N})$ expected time, each reporting query costs $O(\frac{N\sqrt{\log N}}{\sqrt{M}}\sqrt{op+1})$ expected time, where $op$ is the size of the output, and each witness query costs $O(\frac{N\sqrt{\log N}}{\sqrt{M}} + \log N)$ expected time. Comment: Accepted to WADS 201
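
    To make the stated connection between set-intersection reporting and triangle listing concrete, here is a toy Python sketch: plain Python sets stand in for the paper's packed word-RAM structures, so it only achieves the naive bounds, not the improved ones. It maintains a family of dynamic sets supporting insertions, deletions, intersection reporting, and witness queries, and enumerates triangles by reporting, for each edge (u, v), the intersection of the adjacency sets of u and v. The class and function names are illustrative, not the paper's.

    class DynamicSetFamily:
        # Toy family of dynamic sets; queries take O(min(|S|, |S'|)) time here.
        def __init__(self):
            self.sets = {}

        def insert(self, name, x):
            self.sets.setdefault(name, set()).add(x)

        def delete(self, name, x):
            self.sets.get(name, set()).discard(x)

        def report_intersection(self, a, b):
            # Report every element of S_a ∩ S_b, in any order.
            s, t = self.sets.get(a, set()), self.sets.get(b, set())
            if len(t) < len(s):
                s, t = t, s
            return [x for x in s if x in t]

        def witness(self, a, b):
            # Return one element of S_a ∩ S_b, or None if the sets are disjoint.
            s, t = self.sets.get(a, set()), self.sets.get(b, set())
            return next((x for x in s if x in t), None)

    def list_triangles_by_intersection(edges):
        # Each triangle containing edge (u, v) corresponds to a vertex in N(u) ∩ N(v).
        fam = DynamicSetFamily()
        for u, v in edges:
            fam.insert(u, v)
            fam.insert(v, u)
        seen = set()
        for u, v in edges:
            for w in fam.report_intersection(u, v):
                tri = frozenset((u, v, w))
                if len(tri) == 3 and tri not in seen:
                    seen.add(tri)
                    yield tuple(sorted(tri))

    print(list(list_triangles_by_intersection([(1, 2), (2, 3), (1, 3), (3, 4)])))  # -> [(1, 2, 3)]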