Explaining Snapshots of Network Diffusions: Structural and Hardness Results
Much research has been done on studying the diffusion of ideas or
technologies on social networks including the \textit{Influence Maximization}
problem and many of its variations. Here, we investigate a type of inverse
problem. Given a snapshot of the diffusion process, we seek to understand if
the snapshot is feasible for a given dynamic, i.e., whether there is a limited
number of nodes whose initial adoption can result in the snapshot in finite
time. While similar questions have been considered for epidemic dynamics, here
we study this problem for variations of the deterministic Linear Threshold
Model, which is more appropriate for modeling strategic agents. Specifically,
we consider both sequential and simultaneous dynamics when deactivations are
allowed and when they are not. Even though we show hardness results for all
variations we consider, we show that the case of sequential dynamics with
deactivations allowed is significantly harder than all others. In contrast,
sequential dynamics make the problem trivial on cliques, even though its
complexity for simultaneous dynamics is unknown. We complement our hardness
results with structural insights that can help better understand diffusions on
social networks under various dynamics.
Comment: 14 pages, 3 figures
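The deterministic Linear Threshold Model referenced above can be sketched concretely. The following is a minimal illustration of simultaneous (synchronous) dynamics without deactivations; the graph representation, weight dictionary, and function names are illustrative assumptions, not the paper's exact formulation.

```python
def lt_step(in_neighbors, weights, thresholds, active):
    """One synchronous round: a node activates when the total weight of its
    active in-neighbors meets its threshold (no deactivations in this sketch)."""
    new_active = set(active)
    for v in thresholds:
        incoming = sum(weights[(u, v)] for u in in_neighbors.get(v, ())
                       if u in active)
        if incoming >= thresholds[v]:
            new_active.add(v)
    return new_active

def diffuse(in_neighbors, weights, thresholds, seeds):
    """Iterate until a fixed point; with deterministic monotone dynamics this
    takes at most n rounds, so termination in finite time is guaranteed."""
    active = set(seeds)
    while True:
        nxt = lt_step(in_neighbors, weights, thresholds, active)
        if nxt == active:
            return active
        active = nxt
```

Under this sketch, the snapshot-feasibility question asks whether some small seed set exists whose `diffuse` fixed point (or some intermediate round) equals a given snapshot.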
Online Influence Maximization in Non-Stationary Social Networks
Social networks have been popular platforms for information propagation. An
important use case is viral marketing: given a promotion budget, an advertiser
can choose some influential users as the seed set and provide them free or
discounted sample products; in this way, the advertiser hopes to increase the
popularity of the product in the users' friend circles via the word-of-mouth
effect, and thus maximize the number of users that information about the
product can reach. There has been a body of literature studying the
influence maximization problem. Nevertheless, the existing studies mostly
investigate the problem on a one-off basis, assuming fixed known influence
probabilities among users, or the knowledge of the exact social network
topology. In practice, the social network topology and the influence
probabilities are typically unknown to the advertiser and may vary over time,
e.g., as social ties are newly established, strengthened, or weakened. In this
paper, we focus on a dynamic non-stationary social network and
design a randomized algorithm, RSB, based on multi-armed bandit optimization,
to maximize influence propagation over time. The algorithm produces a sequence
of online decisions and calibrates its explore-exploit strategy utilizing
outcomes of previous decisions. It is rigorously proven to achieve an
upper-bounded regret in reward and to be applicable to large-scale social
networks.
Practical effectiveness of the algorithm is evaluated using both synthetic and
real-world datasets; the results demonstrate that our algorithm outperforms
previous stationary methods under non-stationary conditions.
Comment: 10 pages. To appear in IEEE/ACM IWQoS 2016. Full version
Simultaneous influencing and mapping for health interventions
Influence Maximization is an active research topic, but prior work has always assumed full knowledge of the social network graph. However, the graph may actually be unknown beforehand. For example, when selecting a subset of a homeless population to attend health interventions, we deal with a network that is not fully known. Hence, we introduce the novel problem of simultaneously influencing and mapping (i.e., learning) the graph. We study a class of algorithms, where we show that: (i) traditional algorithms may have arbitrarily low performance; (ii) we can effectively influence and map when the independence of objectives hypothesis holds; (iii) when it does not hold, the upper bound for the influence loss converges to 0. We run extensive experiments over four real-life social networks, where we study two alternative models, and obtain significantly better results than traditional approaches in both.
A Survey on Influence Maximization: From an ML-Based Combinatorial Optimization
Influence Maximization (IM) is a classical combinatorial optimization
problem, which can be widely used in mobile networks, social computing, and
recommendation systems. It aims to select a small number of users so as to
maximize the influence spread across the online social network. Because of
its potential commercial and academic value, there are a lot of researchers
focusing on studying the IM problem from different perspectives. The main
challenge comes from the NP-hardness of the IM problem and \#P-hardness of
estimating the influence spread, thus traditional algorithms for overcoming
them can be categorized into two classes: heuristic algorithms and
approximation algorithms. However, heuristic algorithms offer no theoretical
guarantee, and the theoretical design of approximation algorithms is close to
its limit; therefore, it is almost impossible to further optimize and improve
their performance. With the rapid development of artificial intelligence,
techniques based on Machine Learning (ML) have achieved remarkable success
in many fields. In view of this, in recent years, a number of new methods have
emerged to solve combinatorial optimization problems by using ML-based
techniques. These methods have the advantages of fast solving speed and strong
generalization ability to unknown graphs, which provide a brand-new direction
for solving combinatorial optimization problems. Therefore, we abandon the
traditional algorithms based on iterative search and review the recent
development of ML-based methods, especially Deep Reinforcement Learning, to
solve the IM problem and other variants in social networks. We focus on
summarizing the relevant background knowledge, basic principles, common
methods, and applied research. Finally, the challenges that need to be urgently
addressed in future IM research are pointed out.
Comment: 45 pages
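The "traditional algorithms based on iterative search" that this survey contrasts with ML-based methods can be made concrete with the classical greedy baseline: hill-climbing seed selection with Monte Carlo estimation of spread under the Independent Cascade model. This is a simplified, hedged sketch (the function names, edge probability `p`, and trial count are illustrative); real implementations rely on optimizations such as CELF or reverse-influence sampling.

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=200):
    """Estimate expected spread under Independent Cascade: each edge out of a
    newly activated node fires independently with probability p."""
    rng = random.Random(0)  # fixed seed keeps the estimate deterministic here
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1):
    """Iteratively add the node with the best estimated marginal gain; this is
    the (1 - 1/e)-approximation strategy exploiting submodularity of spread."""
    seeds = set()
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best = max(nodes - seeds,
                   key=lambda v: ic_spread(graph, seeds | {v}, p))
        seeds.add(best)
    return seeds
```

The #P-hardness mentioned in the abstract is what forces the Monte Carlo estimation inside the greedy loop, and that per-candidate simulation cost is precisely what ML-based approaches aim to avoid.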