Finding the Graph of Epidemic Cascades
We consider the problem of finding the graph on which an epidemic cascade
spreads, given only the times when each node gets infected. While this is a
problem of importance in several contexts -- offline and online social
networks, e-commerce, epidemiology, vulnerabilities in infrastructure networks
-- there has been very little work, analytical or empirical, on finding the
graph. Clearly, it is impossible to do so from just one cascade; our interest
is in learning the graph from a small number of cascades.
For the classic and popular "independent cascade" SIR epidemics, we
analytically establish the number of cascades required by both the global
maximum-likelihood (ML) estimator, and a natural greedy algorithm. Both results
are based on a key observation: the global graph learning problem decouples
into local problems -- one for each node. For a node of degree d, we show
that its neighborhood can be reliably found once it has been infected
O(d^2 log n) times (for ML on general graphs) or O(d log n) times (for greedy
on trees). We also provide a corresponding information-theoretic lower bound
of Omega(d log n); thus our bounds are essentially tight. Furthermore, if we
are given side-information in the form of a super-graph of the actual graph (as
is often the case), then the number of cascade samples required -- in all cases
-- becomes independent of the network size n.
Finally, we show that for a very general SIR epidemic cascade model, the
Markov graph of infection times is obtained via the moralization of the network
graph.

Comment: To appear in Proc. ACM SIGMETRICS/Performance 2012
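The independent-cascade SIR dynamics described above can be sketched in a few lines: each newly infected node gets exactly one chance to infect each susceptible neighbor, then recovers. The function below is an illustrative simulation under assumed names (graph as an adjacency dict, a single infection probability p); the abstract's learning algorithms would take the resulting infection times as input.

```python
import random

def independent_cascade(graph, seed, p=0.3, rng=None):
    """Simulate one independent-cascade (SIR) epidemic on `graph`.

    graph: dict mapping node -> list of neighbors (assumed structure).
    Returns a dict of node -> infection time for nodes the cascade reaches.
    """
    rng = rng or random.Random(0)
    infection_time = {seed: 0}
    frontier = [seed]   # nodes infected at the current time step
    t = 0
    while frontier:
        t += 1
        next_frontier = []
        for u in frontier:
            for v in graph[u]:
                # One infection attempt per (infected, susceptible) edge,
                # succeeding independently with probability p.
                if v not in infection_time and rng.random() < p:
                    infection_time[v] = t
                    next_frontier.append(v)
        # Nodes in `frontier` recover after one step and stay removed (SIR).
        frontier = next_frontier
    return infection_time
```

Collecting the `infection_time` dictionaries from many cascades yields exactly the observations the ML and greedy estimators consume: infection times only, with the underlying graph hidden.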
Controlled Information Fusion with Risk-Averse CVaR Social Sensors
Consider a multi-agent network composed of risk-averse social sensors and a
controller that jointly seek to estimate an unknown state of nature, given
noisy measurements. The network of social sensors performs Bayesian social
learning - each sensor fuses the information revealed by previous social
sensors along with its private valuation using Bayes' rule - to optimize a
local cost function. The controller sequentially modifies the cost function of
the sensors by discriminatory pricing (control inputs) to realize long term
global objectives. We formulate the stochastic control problem faced by the
controller as a Partially Observed Markov Decision Process (POMDP) and derive
structural results for the optimal control policy as a function of the
risk-aversion factor in the Conditional Value-at-Risk (CVaR) cost function of
the sensors. We show that the optimal price sequence when the sensors are
risk-averse is a super-martingale; i.e., it decreases on average over time.

Comment: IEEE CDC 201
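The two ingredients of each sensor's local problem above -- fusing the public belief with a private observation via Bayes' rule, and evaluating a risk-averse CVaR cost -- can be sketched as follows. This is a minimal illustration with assumed names and a discrete state/observation space; the CVaR helper uses the standard Rockafellar-Uryasev representation CVaR_a(L) = min_s { s + E[(L - s)^+] / a }, not the paper's specific formulation.

```python
def social_learning_step(prior, private_obs, likelihood):
    """One Bayesian social-learning update (illustrative sketch).

    prior: public belief P(state x) over states 0..K-1.
    private_obs: index of this sensor's private observation.
    likelihood[x][y]: P(observation y | state x).  All names are assumptions.
    Returns the posterior belief after fusing the observation by Bayes' rule.
    """
    unnorm = [prior[x] * likelihood[x][private_obs] for x in range(len(prior))]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def cvar(losses, probs, alpha):
    """CVaR at level alpha of a discrete loss distribution, via
    CVaR_a(L) = min_s { s + E[(L - s)^+] / alpha }.

    For a discrete distribution an optimal s lies at a support point,
    so it suffices to scan the loss values themselves.
    """
    candidates = []
    for s in losses:
        tail = sum(p * max(l - s, 0.0) for l, p in zip(losses, probs))
        candidates.append(s + tail / alpha)
    return min(candidates)
```

A risk-averse sensor would pick the action minimizing the CVaR of its loss under the fused belief; smaller alpha weights the loss tail more heavily, and alpha = 1 recovers the risk-neutral expected loss.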