Influence Maximization: Near-Optimal Time Complexity Meets Practical Efficiency
Given a social network G and a constant k, the influence maximization problem
asks for k nodes in G that (directly and indirectly) influence the largest
number of nodes under a pre-defined diffusion model. This problem finds
important applications in viral marketing, and has been extensively studied in
the literature. Existing algorithms for influence maximization, however, either
trade approximation guarantees for practical efficiency, or vice versa. In
particular, among the algorithms that achieve constant factor approximations
under the prominent independent cascade (IC) model or linear threshold (LT)
model, none can handle a million-node graph without incurring prohibitive
overheads.
This paper presents TIM, an algorithm that aims to bridge the theory and
practice in influence maximization. On the theory side, we show that TIM runs
in O((k+\ell) (n+m) \log n / \epsilon^2) expected time and returns a
(1-1/e-\epsilon)-approximate solution with at least 1 - n^{-\ell} probability.
The time complexity of TIM is near-optimal under the IC model, as it is only a
\log n factor larger than the \Omega(m + n) lower bound established in previous
work (for fixed k, \ell, and \epsilon). Moreover, TIM supports the triggering
model, which is a general diffusion model that includes both IC and LT as
special cases. On the practice side, TIM incorporates novel heuristics that
significantly improve its empirical efficiency without compromising its
asymptotic performance. We experimentally evaluate TIM with the largest
datasets ever tested in the literature, and show that it outperforms the
state-of-the-art solutions (with approximation guarantees) by up to four orders
of magnitude in terms of running time. In particular, when k = 50, \epsilon =
0.2, and \ell = 1, TIM requires less than one hour on a commodity machine to
process a network with 41.6 million nodes and 1.4 billion edges.
Comment: Revised Sections 1, 2.3, and 5 to remove incorrect claims about
reference [3]. Updated experiments accordingly. A shorter version of the
paper will appear in SIGMOD 201
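As background for the scale of the challenge, here is a minimal Python sketch
of the classic Monte Carlo greedy baseline for influence maximization under
the IC model. It is not TIM's own algorithm; the function names and the
propagation probability p are illustrative assumptions. It makes the problem
definition concrete and shows why simulation-based greedy is prohibitively
slow on million-node graphs, which is the cost that sampling-based algorithms
such as TIM avoid.

    import random

    def simulate_ic(graph, seeds, p=0.1):
        # One Monte Carlo run of the independent cascade (IC) model.
        # graph: dict mapping node -> list of out-neighbors; each newly
        # activated node tries once to activate each out-neighbor with
        # probability p. Returns the number of activated nodes.
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def greedy_im(graph, k, runs=200):
        # Classic greedy seed selection: repeatedly add the node with
        # the largest estimated marginal gain in expected spread. This
        # requires on the order of k * n * runs cascade simulations,
        # which is exactly what makes it impractical at scale.
        seeds = []
        for _ in range(k):
            base = (sum(simulate_ic(graph, seeds) for _ in range(runs)) / runs
                    if seeds else 0.0)
            best, best_gain = None, float("-inf")
            for v in graph:
                if v in seeds:
                    continue
                est = sum(simulate_ic(graph, seeds + [v]) for _ in range(runs)) / runs
                if est - base > best_gain:
                    best, best_gain = v, est - base
            seeds.append(best)
        return seeds

    # Example: a tiny directed graph as adjacency lists.
    g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
    print(greedy_im(g, k=2))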
Keyword-aware Optimal Route Search
Identifying a preferable route is an important problem that finds
applications in map services. When a user plans a trip within a city, the user
may want to find "a most popular route such that it passes by a shopping
mall, a restaurant, and a pub, and the travel time to and from the hotel is
within 4 hours." However, none of the algorithms in the existing work on route planning
can be used to answer such queries. Motivated by this, we define the problem of
keyword-aware optimal route query, denoted by KOR, which is to find a route
that covers a set of user-specified keywords, satisfies a specified budget
constraint, and optimizes an objective score. The problem of answering KOR
queries is NP-hard. We devise an approximation algorithm, OSScaling, with
provable approximation bounds. Based on this algorithm, we propose a more
efficient approximation algorithm, BucketBound. We also design a greedy
approximation algorithm. Results of empirical studies show that all the
proposed algorithms answer KOR queries efficiently, with BucketBound and
Greedy running the fastest.
The empirical studies also offer insight into the accuracy of the proposed
algorithms.
Comment: VLDB201
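To make the query semantics concrete, below is a hedged Python sketch of a
greedy heuristic in the spirit of the Greedy algorithm mentioned above; the
actual OSScaling and BucketBound algorithms differ and are the ones that
carry provable bounds. All names (greedy_kor, dist, poi_keywords) and the
assumption of precomputed pairwise travel costs are illustrative.

    def greedy_kor(dist, poi_keywords, source, target, keywords, budget):
        # Hedged sketch of a greedy heuristic for KOR: starting from
        # source, repeatedly detour to the nearest point of interest
        # covering some uncovered keyword, then finish at target.
        # dist: dict of dicts with pairwise travel costs (assumed
        # precomputed); poi_keywords: node -> set of covered keywords.
        route, cost = [source], 0.0
        uncovered, cur = set(keywords), source
        while uncovered:
            cands = [(dist[cur][v], v) for v, kws in poi_keywords.items()
                     if v != cur and kws & uncovered]
            if not cands:
                return None  # some keyword cannot be covered at all
            d, v = min(cands)
            if cost + d + dist[v][target] > budget:
                return None  # the greedy detour would break the budget
            route.append(v)
            cost += d
            uncovered -= poi_keywords[v]
            cur = v
        cost += dist[cur][target]
        route.append(target)
        return route, cost

Because this sketch commits to the nearest covering node at each step, it
can miss feasible routes, which is one reason algorithms with explicit
approximation bounds are needed.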
GraphMP: An Efficient Semi-External-Memory Big Graph Processing System on a Single Machine
Recent studies have shown that single-machine graph processing systems can be
highly competitive with cluster-based approaches on large-scale problems. While
several out-of-core graph processing systems and computation models have been
proposed, the high disk I/O overhead could significantly reduce performance in
many practical cases. In this paper, we propose GraphMP to tackle big graph
analytics on a single machine. GraphMP achieves low disk I/O overhead with
three techniques. First, we design a vertex-centric sliding window (VSW)
computation model to avoid reading and writing vertices on disk. Second, we
propose a selective scheduling method to skip loading and processing
unnecessary edge shards on disk. Third, we use a compressed edge cache
mechanism to fully utilize the available memory of a machine and reduce the
number of disk accesses for edges. Extensive evaluations have shown that
GraphMP could outperform state-of-the-art systems such as GraphChi, X-Stream,
and GridGraph by 31.6x, 54.5x, and 23.1x, respectively, when running popular
graph applications on a billion-vertex graph.
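The Python sketch below illustrates the vertex-centric sliding window idea on
a PageRank-style computation, under simplifying assumptions of ours: all
vertex values fit in memory, edge shards are binary files of (src, dst)
uint32 pairs, and compression and selective scheduling are omitted. It is a
reading aid, not GraphMP's implementation.

    import glob
    import struct

    def vsw_pagerank(shard_dir, num_vertices, iters=10, d=0.85):
        # Vertex data stays entirely in memory (so vertices are never
        # read from or written to disk), while edge shards are streamed
        # sequentially from disk, one shard at a time.
        rank = [1.0 / num_vertices] * num_vertices
        degree = [0] * num_vertices
        shards = sorted(glob.glob(shard_dir + "/*.bin"))
        # One streaming pass over all shards to collect out-degrees.
        for shard in shards:
            with open(shard, "rb") as f:
                while (rec := f.read(8)):
                    src, _ = struct.unpack("II", rec)
                    degree[src] += 1
        for _ in range(iters):
            acc = [0.0] * num_vertices   # in-memory accumulators
            for shard in shards:         # slide the window over shards
                with open(shard, "rb") as f:
                    while (rec := f.read(8)):
                        src, dst = struct.unpack("II", rec)
                        if degree[src]:
                            acc[dst] += rank[src] / degree[src]
            rank = [(1 - d) / num_vertices + d * a for a in acc]
        return rank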
GraphH: High Performance Big Graph Analytics in Small Clusters
It is common for real-world applications to analyze big graphs using
distributed graph processing systems. Popular in-memory systems require an
enormous amount of resources to handle big graphs. While several out-of-core
approaches have been proposed for processing big graphs on disk, the high disk
I/O overhead could significantly reduce performance. In this paper, we propose
GraphH to enable high-performance big graph analytics in small clusters.
Specifically, we design a two-stage graph partition scheme to evenly divide the
input graph into partitions, and propose a GAB (Gather-Apply-Broadcast)
computation model to make each worker process a partition in memory at a time.
We use an edge cache mechanism to reduce the disk I/O overhead, and design a
hybrid strategy to improve the communication performance. GraphH can
efficiently process big graphs in small clusters or even on a single
commodity server. Extensive evaluations have shown that GraphH could be up to
7.8x faster than popular in-memory systems such as Pregel+ and PowerGraph
when processing generic graphs, and more than 100x faster than recently
proposed out-of-core systems such as GraphD and Chaos when processing big
graphs.
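As a reading aid for the GAB model, here is a minimal Python sketch of one
Gather-Apply-Broadcast superstep for PageRank, with the cluster simulated by
a loop over partitions. The partitioning scheme, network communication, and
edge caching that GraphH actually uses are not modeled, and all names are
illustrative assumptions.

    def gab_superstep(partitions, values, degree, d=0.85):
        # One simulated GAB superstep. partitions: list of edge lists,
        # one per worker; each worker holds exactly one partition in
        # memory at a time. values: globally replicated vertex values
        # (the state that the Broadcast phase keeps in sync). Assumes
        # degree[src] >= 1 for every edge source; dangling-node
        # handling is omitted for brevity.
        n = len(values)
        new_values = [(1 - d) / n] * n
        for edges in partitions:          # each worker, in turn
            acc = {}                      # Gather: accumulate inputs
            for src, dst in edges:
                acc[dst] = acc.get(dst, 0.0) + values[src] / degree[src]
            for v, s in acc.items():      # Apply: update vertex values
                new_values[v] += d * s
        return new_values                 # Broadcast: replicate result

    # Example: 4 vertices in a cycle, edges split across two "workers".
    parts = [[(0, 1), (1, 2)], [(2, 3), (3, 0)]]
    deg = [1, 1, 1, 1]
    vals = [0.25] * 4
    for _ in range(10):
        vals = gab_superstep(parts, vals, deg)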