
    Finding Rumor Sources on Random Trees

    We consider the problem of detecting the source of a rumor which has spread in a network, using only observations of which set of nodes are infected with the rumor and no information as to \emph{when} these nodes became infected. This rumor source detection problem was introduced and studied in a recent work \citep{ref:rc}. The authors proposed the graph score function {\em rumor centrality} as an estimator for detecting the source, and established it to be the maximum likelihood estimator with respect to the popular Susceptible-Infected (SI) model with exponential spreading times for regular trees. They showed that as the size of the infected graph increases, for a path graph (2-regular tree) the probability of source detection goes to $0$, while for $d$-regular trees with $d \geq 3$ the probability of detection, say $\alpha_d$, remains bounded away from $0$ and is less than $1/2$. However, their results stop short of providing insights into the performance of the rumor centrality estimator in more general settings such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between a continuous-time branching process and the effectiveness of rumor centrality, through which the detection probability can be quantified precisely. As a consequence, we recover all previous results as a special case and obtain a variety of novel results, including the {\em universality} of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution. Comment: 38 pages, 6 figures
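
    For concreteness, the sketch below computes rumor centrality for every node of an infected tree via the closed-form count $R(v, T) = n! / \prod_{u} t_u^v$ (the number of infection orderings compatible with source $v$), where $t_u^v$ is the size of the subtree rooted at $u$ when the tree is rooted at $v$ and $t_v^v = n$. This is a minimal illustration, not the paper's implementation: the adjacency-dict representation and function name are assumptions, and the $O(n^2)$ re-rooting is for clarity only (a linear-time message-passing version exists).

    ```python
    import math

    def rumor_centrality(adj):
        """Rumor centrality of every node of an infected tree.

        adj: dict node -> list of neighbours describing the tree of infected nodes.
        Returns a dict node -> R(node); the source estimate is the argmax.
        """
        nodes = list(adj)
        n = len(nodes)
        scores = {}
        for root in nodes:
            # Compute subtree sizes when the tree is rooted at `root`.
            parent = {root: None}
            stack, order = [root], []
            while stack:                       # iterative DFS, records visiting order
                u = stack.pop()
                order.append(u)
                for w in adj[u]:
                    if w != parent[u]:
                        parent[w] = u
                        stack.append(w)
            subtree = {}
            for u in reversed(order):          # accumulate sizes bottom-up
                subtree[u] = 1 + sum(subtree[w] for w in adj[u] if w != parent[u])
            prod = 1
            for u in nodes:
                prod *= subtree[u]             # note: subtree[root] == n
            scores[root] = math.factorial(n) // prod
        return scores

    # Example: a star with centre 0 -- the centre is the most likely source.
    star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
    print(rumor_centrality(star))              # {0: 6, 1: 2, 2: 2, 3: 2}
    ```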

    Rumor Identification with Maximum Entropy in MicroNet


    Rank Centrality: Ranking from Pair-wise Comparisons

    The question of aggregating pair-wise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g. MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g. a player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pair-wise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) model for pair-wise comparisons) in which each object has an associated score which determines the probabilistic outcomes of pair-wise comparisons between objects. In terms of the pair-wise marginal probabilities, which are the main subject of this paper, the MNL model and the BTL model are identical. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the scores well with high probability depends on the structure of the comparison graph. When the Laplacian of the comparison graph has a strictly positive spectral gap, e.g. when each item is compared to a subset of randomly chosen items, the dependence on the number of samples is nearly order-optimal. Comment: 45 pages, 3 figures
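
    A minimal sketch of this random-walk construction, assuming a `wins[i][j]` = "number of comparisons in which item j was preferred over item i" convention: edges are weighted by the empirical fraction of losses, rows are made stochastic with lazy self-loops scaled by the maximum comparison degree, and the scores are read off as the stationary distribution via power iteration. The function name and the matrix layout are illustrative, not the authors' reference code.

    ```python
    import numpy as np

    def rank_centrality(wins):
        """Rank Centrality scores from pairwise comparison counts.

        wins[i][j] = number of comparisons in which item j was preferred over item i.
        Returns the stationary distribution of the induced random walk; higher = better.
        """
        wins = np.asarray(wins, dtype=float)
        n = wins.shape[0]
        totals = wins + wins.T                                 # comparisons per pair
        with np.errstate(divide="ignore", invalid="ignore"):
            frac = np.where(totals > 0, wins / totals, 0.0)    # empirical P(j beats i)
        d_max = max(int((totals > 0).sum(axis=1).max()), 1)    # max comparison degree
        P = frac / d_max
        np.fill_diagonal(P, 0.0)
        np.fill_diagonal(P, 1.0 - P.sum(axis=1))               # lazy self-loops keep rows stochastic
        pi = np.full(n, 1.0 / n)
        for _ in range(10_000):                                # power iteration: pi = pi P
            nxt = pi @ P
            if np.abs(nxt - pi).sum() < 1e-12:
                break
            pi = nxt
        return pi

    # Example: 3 items; item 2 tends to beat 1, item 1 tends to beat 0.
    wins = [[0, 6, 8],
            [4, 0, 7],
            [2, 3, 0]]
    print(rank_centrality(wins))   # scores increase from item 0 to item 2
    ```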

    Branch-and-Price for Prescriptive Contagion Analytics

    Predictive contagion models are ubiquitous in epidemiology, social sciences, engineering, and management. This paper formulates a prescriptive contagion analytics model in which a decision-maker allocates shared resources across multiple segments of a population, each governed by continuous-time dynamics. We define four real-world problems under this umbrella: vaccine distribution, deployment of vaccination centers, content promotion, and congestion mitigation. These problems feature a large-scale mixed-integer non-convex optimization structure with constraints governed by ordinary differential equations, combining the challenges of discrete optimization, non-linear optimization, and continuous-time system dynamics. This paper develops a branch-and-price methodology for prescriptive contagion analytics based on: (i) a set partitioning reformulation; (ii) a column generation decomposition; (iii) a state-clustering algorithm for discrete-decision continuous-state dynamic programming; and (iv) a tri-partite branching scheme to circumvent non-linearities. Extensive experiments show that the algorithm scales to very large and otherwise-intractable instances, outperforming state-of-the-art benchmarks. Our methodology provides practical benefits in contagion systems; in particular, it can increase the effectiveness of a vaccination campaign by an estimated 12-70%, resulting in 7,000 to 12,000 extra saved lives over a three-month horizon mirroring the COVID-19 pandemic. We provide an open-source implementation of the methodology in an online repository to enable replication.
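
    The building block beneath this formulation, stripped of the branch-and-price machinery, is a segment-level evaluation: each population segment evolves under its own continuous-time dynamics, and a candidate split of the shared resource is scored by integrating those dynamics. The toy sketch below uses Euler-discretized SIR dynamics with made-up parameters purely to illustrate that coupling; it is not the paper's model, algorithm, or open-source implementation.

    ```python
    def simulate_segment(S0, I0, beta, gamma, vacc_per_day, days, dt=0.1):
        """Euler-discretized SIR dynamics for one segment, with susceptibles removed
        at a fixed vaccination rate drawn from the shared budget (toy model)."""
        S, I, R = float(S0), float(I0), 0.0
        N = S0 + I0
        new_infections = 0.0
        for _ in range(int(days / dt)):
            inf = beta * S * I / N * dt      # new infections this step
            rec = gamma * I * dt             # recoveries this step
            vac = min(S - inf, vacc_per_day * dt)
            S -= inf + vac
            I += inf - rec
            R += rec + vac
            new_infections += inf
        return new_infections

    def total_infections(allocation, segments, days=90):
        """Score a candidate split of the daily vaccine budget across segments."""
        return sum(
            simulate_segment(seg["S0"], seg["I0"], seg["beta"], seg["gamma"], x, days)
            for x, seg in zip(allocation, segments)
        )

    # Example with two illustrative segments and a 1,000-doses/day budget split 600/400.
    segments = [
        {"S0": 90_000, "I0": 100, "beta": 0.30, "gamma": 0.10},
        {"S0": 50_000, "I0": 500, "beta": 0.45, "gamma": 0.10},
    ]
    print(total_infections([600.0, 400.0], segments))
    ```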