Sketch-based Influence Maximization and Computation: Scaling up with Guarantees
Propagation of contagion through networks is a fundamental process. It is
used to model the spread of information, influence, or a viral infection.
Diffusion patterns can be specified by a probabilistic model, such as
Independent Cascade (IC), or captured by a set of representative traces.
Basic computational problems in the study of diffusion are influence queries
(determining the potency of a specified seed set of nodes) and Influence
Maximization (identifying the most influential seed set of a given size).
Answering each influence query involves many edge traversals, and does not
scale when there are many queries on very large graphs. The gold standard for
Influence Maximization is the greedy algorithm, which iteratively adds to the
seed set a node maximizing the marginal gain in influence. Greedy has a
guaranteed approximation ratio of at least (1-1/e) and actually produces a
sequence of nodes, with each prefix having approximation guarantee with respect
to the same-size optimum. Since Greedy does not scale well beyond a few million
edges, for larger inputs one must currently use either heuristics or
alternative algorithms designed for a pre-specified small seed set size.
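The greedy algorithm described above can be sketched in a few lines: estimate influence under the Independent Cascade model by Monte-Carlo simulation, then repeatedly add the node with the largest marginal gain. This is a minimal illustration of the standard method, not any paper's implementation; the function names and the simulation count are illustrative, and the quadratic cost per iteration is exactly why this does not scale.

```python
import random

def simulate_ic(graph, seeds, rng):
    """One Monte-Carlo run of the Independent Cascade model.
    graph: dict node -> list of (neighbor, propagation_probability)."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def influence(graph, seeds, runs=1000, seed=0):
    """Average spread of a seed set over `runs` simulations."""
    rng = random.Random(seed)
    return sum(simulate_ic(graph, seeds, rng) for _ in range(runs)) / runs

def greedy_im(graph, k, runs=1000):
    """Greedy: repeatedly add the node with the largest marginal gain.
    Each prefix of the returned list carries the (1-1/e) guarantee."""
    seeds = []
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    for _ in range(k):
        base = influence(graph, seeds, runs)
        best, best_gain = None, -1.0
        for u in nodes - set(seeds):
            gain = influence(graph, seeds + [u], runs) - base
            if gain > best_gain:
                best, best_gain = u, gain
        seeds.append(best)
    return seeds
```

Every marginal-gain evaluation runs a fresh batch of simulations, so each greedy iteration touches every edge many times per candidate node; this is the cost that the sketch-based approach below is designed to avoid.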
We develop a novel sketch-based design for influence computation. Our greedy
Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with
billions of edges, with one to two orders of magnitude speedup over the best
greedy methods. It still has a guaranteed approximation ratio, and in practice
its quality nearly matches that of exact greedy. We also present influence
oracles, which use linear-time preprocessing to generate a small sketch for
each node, allowing the influence of any seed set to be quickly answered from
the sketches of its nodes.
Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information
and Knowledge Management (CIKM 2014) in Shanghai, China.
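The oracle idea can be illustrated with a simplified bottom-k reachability sketch over a single deterministic trace: give every node a random rank, store for each node the k smallest ranks among the nodes it can reach, and estimate the influence of any seed set from the merged sketches alone. This is a toy sketch of the general technique, assuming deterministic reachability; the actual SKIM/oracle construction combines sketches across many probabilistic instances, and all names here are illustrative.

```python
import random

def bottom_k_sketches(graph, k, seed=0):
    """Per-node bottom-k sketches of the reachability set.
    graph: dict node -> list of successor nodes (one deterministic trace)."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    rank = {v: rng.random() for v in nodes}   # one random rank per node
    sketches = {}
    for u in nodes:
        # DFS over nodes reachable from u, keep the k smallest ranks.
        seen, stack, ranks = {u}, [u], []
        while stack:
            x = stack.pop()
            ranks.append(rank[x])
            for y in graph.get(x, []):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        sketches[u] = sorted(ranks)[:k]
    return sketches

def estimate_influence(sketches, seeds, k):
    """Estimate the size of the union of the seeds' reachability sets
    using only their sketches (no edge traversals at query time)."""
    merged = sorted(set().union(*(set(sketches[s]) for s in seeds)))[:k]
    if len(merged) < k:
        return float(len(merged))    # union smaller than k: count is exact
    return (k - 1) / merged[-1]      # standard bottom-k cardinality estimator
```

The key property is that an influence query touches only the k-entry sketches of the seed nodes, so query time is independent of graph size; accuracy grows with k.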
Greedy Maximization Framework for Graph-based Influence Functions
The study of graph-based submodular maximization problems was initiated in a
seminal work of Kempe, Kleinberg, and Tardos (2003): An {\em influence}
function of subsets of nodes is defined by the graph structure and the aim is
to find subsets of seed nodes with (approximately) optimal tradeoff of size and
influence. Applications include viral marketing, monitoring, and active
learning of node labels. This powerful formulation was studied for
(generalized) {\em coverage} functions, where the influence of a seed set on a
node is the maximum utility of a seed item to the node, and for pairwise {\em
utility} based on reachability, distances, or reverse ranks.
We define a rich class of influence functions which unifies and extends
previous work beyond coverage functions and specific utility functions. We
present a meta-algorithm for approximate greedy maximization with strong
approximation quality guarantees and worst-case near-linear computation for all
functions in our class. Our meta-algorithm generalizes a recent design by Cohen
et al. (2014) that was specific to distance-based coverage functions.
Comment: 8 pages, 1 figure.
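Greedy maximization of any monotone submodular function, influence functions included, can be accelerated with the classic lazy-greedy (CELF-style) trick: because marginal gains only shrink as the seed set grows, stale priority-queue entries are valid upper bounds and most candidates never need re-evaluation. This is a generic sketch of that standard idea, not the meta-algorithm of the abstract above; the objective is passed in as a black-box set function.

```python
import heapq

def lazy_greedy(f, universe, k):
    """Lazy (CELF-style) greedy for a monotone submodular set function f.
    f: callable taking a set and returning a float."""
    S, fS = set(), f(set())
    # Seed the heap with each singleton's marginal gain (negated: min-heap).
    heap = [(-(f({u}) - fS), u) for u in universe]
    heapq.heapify(heap)
    while len(S) < k and heap:
        neg_gain, u = heapq.heappop(heap)
        gain = f(S | {u}) - fS          # re-evaluate against the current S
        if not heap or gain >= -heap[0][0]:
            S.add(u)                    # still the best possible: take it
            fS += gain
        else:
            heapq.heappush(heap, (-gain, u))  # stale: push back, try next
    return S
```

With a coverage-style objective the loop reproduces plain greedy's output (and hence its (1-1/e) guarantee) while typically evaluating f far fewer times.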
From Competition to Complementarity: Comparative Influence Diffusion and Maximization
Influence maximization is a well-studied problem that asks for a small set of
influential users from a social network, such that by targeting them as early
adopters, the expected total adoption through influence cascades over the
network is maximized. However, almost all prior work focuses on cascades of a
single propagating entity or of purely competitive entities. In this work, we
propose the Comparative Independent Cascade (Com-IC) model that covers the full
spectrum of entity interactions from competition to complementarity. In Com-IC,
users' adoption decisions depend not only on edge-level information
propagation, but also on a node-level automaton whose behavior is governed by a
set of model parameters, enabling our model to capture not only competition,
but also complementarity, to any possible degree. We study two natural
optimization problems, Self Influence Maximization and Complementary Influence
Maximization, in a novel setting with complementary entities. Both problems are
NP-hard, and we devise efficient and effective approximation algorithms via
non-trivial techniques based on reverse-reachable sets and a novel "sandwich
approximation". The applicability of both techniques extends beyond our model
and problems. Our experiments show that the proposed algorithms consistently
outperform intuitive baselines in four real-world social networks, often by a
significant margin. In addition, we learn model parameters from real user
action logs.
Comment: An abridged version of this work is to appear in the Proceedings of
the VLDB Endowment (PVLDB), Vol. 9, No. 2, and will be presented at the VLDB
2016 conference in New Delhi, India. This update contains new theoretical and
experimental results, and the paper is now in single-column format (44 pages).
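The "sandwich approximation" mentioned above can be illustrated generically: when the true objective sigma is hard to maximize directly, greedily maximize a submodular lower bound, a submodular upper bound, and sigma itself, then keep whichever candidate set scores best under sigma. This yields a data-dependent approximation guarantee. The sketch below assumes all three functions are supplied as black boxes; it is an illustration of the general strategy, not the paper's Com-IC-specific construction.

```python
def greedy(f, universe, k):
    """Plain greedy for a set function f (callable: set -> float)."""
    S = set()
    for _ in range(k):
        u = max(universe - S, key=lambda x: f(S | {x}))
        S.add(u)
    return S

def sandwich(sigma, lower, upper, universe, k):
    """Sandwich approximation: run greedy on a submodular lower bound,
    a submodular upper bound, and sigma itself (as a heuristic), then
    return the candidate seed set that sigma likes best."""
    candidates = [greedy(f, universe, k) for f in (lower, upper, sigma)]
    return max(candidates, key=sigma)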
Influence Maximization: Near-Optimal Time Complexity Meets Practical Efficiency
Given a social network G and a constant k, the influence maximization problem
asks for k nodes in G that (directly and indirectly) influence the largest
number of nodes under a pre-defined diffusion model. This problem finds
important applications in viral marketing, and has been extensively studied in
the literature. Existing algorithms for influence maximization, however, either
trade approximation guarantees for practical efficiency, or vice versa. In
particular, among the algorithms that achieve constant factor approximations
under the prominent independent cascade (IC) model or linear threshold (LT)
model, none can handle a million-node graph without incurring prohibitive
overheads.
This paper presents TIM, an algorithm that aims to bridge the theory and
practice in influence maximization. On the theory side, we show that TIM runs
in O((k+\ell) (n+m) \log n / \epsilon^2) expected time and returns a
(1-1/e-\epsilon)-approximate solution with at least 1 - n^{-\ell} probability.
The time complexity of TIM is near-optimal under the IC model, as it is only a
\log n factor larger than the \Omega(m + n) lower-bound established in previous
work (for fixed k, \ell, and \epsilon). Moreover, TIM supports the triggering
model, which is a general diffusion model that includes both IC and LT as
special cases. On the practice side, TIM incorporates novel heuristics that
significantly improve its empirical efficiency without compromising its
asymptotic performance. We experimentally evaluate TIM with the largest
datasets ever tested in the literature, and show that it outperforms the
state-of-the-art solutions (with approximation guarantees) by up to four orders
of magnitude in terms of running time. In particular, when k = 50, \epsilon =
0.2, and \ell = 1, TIM requires less than one hour on a commodity machine to
process a network with 41.6 million nodes and 1.4 billion edges.
Comment: Revised Sections 1, 2.3, and 5 to remove incorrect claims about
reference [3]. Updated experiments accordingly. A shorter version of the
paper will appear in SIGMOD 2014.
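The reverse-reachable (RR) set machinery underlying TIM-style algorithms can be sketched compactly: sample many RR sets (traverse incoming edges backwards from a random target, keeping each edge with its propagation probability), then run greedy maximum coverage over them, since a node covering many RR sets is likely influential. This is a simplified illustration with a fixed number of samples, whereas TIM derives the sample count from k, epsilon, and ell to obtain its formal guarantee; all names here are illustrative.

```python
import random

def random_rr_set(rgraph, nodes, rng):
    """One reverse-reachable set under the IC model: pick a uniform random
    target and traverse incoming edges, keeping each with its probability.
    rgraph: dict node -> list of (in_neighbor, probability)."""
    target = rng.choice(nodes)
    seen, stack = {target}, [target]
    while stack:
        v = stack.pop()
        for u, p in rgraph.get(v, []):
            if u not in seen and rng.random() < p:
                seen.add(u)
                stack.append(u)
    return seen

def ris_max(rgraph, nodes, k, theta=10000, seed=0):
    """Greedy max-coverage over theta sampled RR sets."""
    rng = random.Random(seed)
    rr_sets = [random_rr_set(rgraph, nodes, rng) for _ in range(theta)]
    covered = [False] * len(rr_sets)
    seeds = []
    for _ in range(k):
        counts = {}
        for i, rr in enumerate(rr_sets):
            if not covered[i]:
                for v in rr:
                    counts[v] = counts.get(v, 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        seeds.append(best)
        for i, rr in enumerate(rr_sets):
            if best in rr:
                covered[i] = True
    return seeds
```

Because each RR set is generated by one backward traversal and the coverage step is near-linear in the total size of the samples, the overall cost avoids the per-candidate simulation batches of naive greedy.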