11,034 research outputs found

    Sketch-based Influence Maximization and Computation: Scaling up with Guarantees

    Full text link
    Propagation of contagion through networks is a fundamental process. It is used to model the spread of information, influence, or a viral infection. Diffusion patterns can be specified by a probabilistic model, such as Independent Cascade (IC), or captured by a set of representative traces. Basic computational problems in the study of diffusion are influence queries (determining the potency of a specified seed set of nodes) and Influence Maximization (identifying the most influential seed set of a given size). Answering each influence query involves many edge traversals and does not scale when there are many queries on very large graphs. The gold standard for Influence Maximization is the greedy algorithm, which iteratively adds to the seed set a node maximizing the marginal gain in influence. Greedy has a guaranteed approximation ratio of at least (1-1/e) and actually produces a sequence of nodes, with each prefix having an approximation guarantee with respect to the same-size optimum. Since Greedy does not scale well beyond a few million edges, for larger inputs one must currently use either heuristics or alternative algorithms designed for a pre-specified small seed set size. We develop a novel sketch-based design for influence computation. Our greedy Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with billions of edges, with one to two orders of magnitude speedup over the best greedy methods. It still has a guaranteed approximation ratio, and in practice its quality nearly matches that of exact greedy. We also present influence oracles, which use linear-time preprocessing to generate a small sketch for each node, allowing the influence of any seed set to be quickly answered from the sketches of its nodes. (Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information and Knowledge Management (CIKM 2014) in Shanghai, China.)
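    For reference, the exact-greedy baseline the abstract compares against can be sketched in a few lines. The following is a minimal, illustrative Python implementation of greedy Influence Maximization under the IC model with Monte Carlo influence estimation; it is not the paper's sketch-based SKIM algorithm, and the toy graph, edge probabilities, and simulation counts are assumptions for demonstration only.

```python
import random

def simulate_ic(graph, seeds, rng):
    """One Independent Cascade run: each newly active node u gets a single
    chance to activate each out-neighbor v with probability graph[u][v]."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, {}).items():
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def influence(graph, seeds, rng, n_sim=200):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_ic(graph, seeds, rng) for _ in range(n_sim)) / n_sim

def greedy_im(graph, k, n_sim=200, seed=0):
    """Exact greedy: repeatedly add the node with the largest estimated
    marginal gain; each prefix is then a (1 - 1/e)-approximation of the
    same-size optimum, up to Monte Carlo estimation error."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
    chosen, spread = [], 0.0
    for _ in range(k):
        best, best_spread = None, spread
        for u in nodes - set(chosen):
            s = influence(graph, chosen + [u], rng, n_sim)
            if s > best_spread:
                best, best_spread = u, s
        if best is None:
            break
        chosen.append(best)
        spread = best_spread
    return chosen, spread

# Toy directed graph: graph[u][v] = activation probability of edge (u, v).
graph = {0: {1: 0.4, 2: 0.4}, 1: {3: 0.3}, 2: {3: 0.3, 4: 0.2}, 3: {4: 0.5}}
seeds, est = greedy_im(graph, k=2)
print(seeds, round(est, 2))
```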

    Counterfactual Inference of Second Opinions

    Get PDF

    Functional advantages offered by many-body coherences in biochemical systems

    Full text link
    Quantum coherence phenomena driven by electronic-vibrational (vibronic) interactions are being reported in many pulse-driven (e.g. laser-driven) chemical and biophysical systems. But what systems-level advantage(s) do such many-body coherences offer to future technologies? We address this question for pulsed systems of general size N, akin to the LHCII aggregates found in green plants. We show that external pulses generate vibronic states containing particular multipartite entanglements, and that such collective vibronic states increase the excitonic transfer efficiency. The strength of these many-body coherences and their robustness to decoherence increase with aggregate size N and do not require strong electronic-vibrational coupling. The implications for energy and information transport are discussed. (Comment: arXiv admin note: text overlap with arXiv:1706.0776)

    Pulsed Generation of Quantum Coherences and Non-classicality in Light-Matter Systems

    Get PDF
    We show that a pulsed stimulus can be used to generate many-body quantum coherences in light-matter systems of general size. Specifically, we calculate the exact real-time evolution of a driven, generic out-of-equilibrium system comprising an arbitrary number N of qubits coupled to a global boson field. A novel form of dynamically driven quantum coherence emerges for general N, without having to access the empirically challenging strong-coupling regime. Its properties depend on the speed of the changes in the stimulus. Non-classicalities arise within each subsystem that have eluded previous analyses. Our findings show robustness to losses and noise, and have potential functional implications at the systems level for a variety of nanosystems, including collections of N atoms, molecules, spins, or superconducting qubits in cavities -- and possibly even vibration-enhanced light-harvesting processes in macromolecules. (Comment: 9 pages, 4 figures)
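    To make the setup concrete, here is a minimal, self-contained Python sketch of real-time evolution for N qubits collectively coupled to a single boson mode (a Tavis-Cummings-type toy model). Treating the pulsed stimulus as a smooth tanh ramp of the collective coupling g(t) is an assumption, as are the boson truncation, system sizes, and all parameter values; the paper's exact treatment for general N is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

N, n_max = 2, 8                    # illustrative sizes: 2 qubits, 8 boson levels
wc = wq = 1.0                      # resonant mode and qubit frequencies
g_max, tau = 0.2, 5.0              # assumed pulse height and ramp timescale

# Single-site operators; |g> = (1, 0), |e> = (0, 1).
sz = np.diag([-1.0, 1.0])
sp = np.array([[0.0, 0.0], [1.0, 0.0]])         # raising operator |g> -> |e>
sm = sp.T.conj()
a = np.diag(np.sqrt(np.arange(1.0, n_max)), 1)  # boson annihilation operator

def embed(op, site):
    """Lift a single-qubit operator into mode (x) qubit_1 (x) ... (x) qubit_N."""
    mats = [np.eye(n_max)] + [np.eye(2)] * N
    mats[1 + site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def mode(op):
    """Lift a boson-mode operator into the full Hilbert space."""
    return np.kron(op, np.eye(2 ** N))

H0 = wc * mode(a.T.conj() @ a) + 0.5 * wq * sum(embed(sz, i) for i in range(N))
V = sum(mode(a.T.conj()) @ embed(sm, i) + mode(a) @ embed(sp, i)
        for i in range(N))                      # collective qubit-field coupling

def g(t):
    """Pulsed stimulus modeled as a smooth switch-on of the coupling."""
    return g_max * 0.5 * (1.0 + np.tanh((t - 3.0 * tau) / tau))

# All qubits excited, field in vacuum; evolve step by step.
psi = np.zeros(n_max * 2 ** N, dtype=complex)
psi[2 ** N - 1] = 1.0                           # mode index 0, qubits |e...e>
dt, steps = 0.02, 2000
for k in range(steps):
    psi = expm(-1j * (H0 + g(k * dt) * V) * dt) @ psi
n_photons = (psi.conj() @ mode(a.T.conj() @ a) @ psi).real
print("mean photon number after the pulse:", round(n_photons, 3))
```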

    Provably Improving Expert Predictions with Prediction Sets

    Get PDF
    Automated decision support systems promise to help human experts solve tasks more efficiently and accurately. However, existing systems typically require experts to understand when to cede agency to the system and when to exercise their own agency. Moreover, if the experts develop misplaced trust in the system, their performance may worsen. In this work, we lift the above requirement and develop automated decision support systems that, by design, do not require experts to understand when to trust them in order to provably improve their performance. To this end, we focus on multiclass classification tasks and consider an automated decision support system that, for each data sample, uses a classifier to recommend a subset of labels to a human expert. We first show that, by looking at the design of such a system from the perspective of conformal prediction, we can ensure that the probability that the recommended subset of labels contains the true label almost exactly matches a target probability value. Then, we develop an efficient and near-optimal search method to find the target probability value under which the expert benefits the most from using our system. Experiments on synthetic and real data demonstrate that our system can help experts make more accurate predictions and is robust to the accuracy of the classifier it relies on.
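    The conformal step is easy to illustrate. Below is a minimal sketch of split conformal prediction sets in Python, assuming the standard 1 - (true-class probability) nonconformity score; the paper's near-optimal search over the target probability value is omitted, and the calibration/test data here are synthetic.

```python
import numpy as np

def conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Split conformal prediction: calibrate a score threshold so that the
    returned label set contains the true label with probability ~= 1 - alpha."""
    n = len(y_cal)
    scores = 1.0 - probs_cal[np.arange(n), y_cal]   # nonconformity scores
    level = np.ceil((n + 1) * (1 - alpha)) / n      # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.flatnonzero(1.0 - p <= q) for p in probs_test]

# Synthetic classifier outputs: rows are softmax probabilities over 3 classes.
rng = np.random.default_rng(0)
probs_cal = rng.dirichlet([2, 1, 1], size=500)
y_cal = np.array([rng.choice(3, p=p) for p in probs_cal])
probs_test = rng.dirichlet([2, 1, 1], size=5)
for labels in conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    print(labels)   # the label subset that would be shown to the expert
```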

    Scalable Influence Maximization for Multiple Products in Continuous-Time Diffusion Networks

    No full text
    A typical viral marketing model identifies influential users in a social network to maximize the adoption of a single product, assuming unlimited user attention, campaign budgets, and time. In reality, multiple products need campaigns, users have limited attention, convincing users incurs costs, and advertisers have limited budgets and expect adoptions to happen soon. Facing these user, monetary, and timing constraints, we formulate the problem as a submodular maximization task in a continuous-time diffusion model under the intersection of a matroid and multiple knapsack constraints. We propose a randomized algorithm that estimates the user influence in a network ($|\mathcal{V}|$ nodes, $|\mathcal{E}|$ edges) to an accuracy of $\epsilon$ using $n = \mathcal{O}(1/\epsilon^2)$ randomizations and $\tilde{\mathcal{O}}(n|\mathcal{E}| + n|\mathcal{V}|)$ computations. By exploiting the influence estimation algorithm as a subroutine, we develop an adaptive threshold greedy algorithm that achieves an approximation factor of $k_a/(2 + 2k)$ of the optimal when $k_a$ out of the $k$ knapsack constraints are active. Extensive experiments on networks of millions of nodes demonstrate that the proposed algorithms achieve the state of the art in terms of effectiveness and scalability.
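    For intuition about the budgeted selection, here is a much-simplified Python sketch: cost-benefit greedy under a single knapsack budget, with a plain IC-style Monte Carlo spread estimate standing in for the paper's continuous-time influence estimator. The adaptive threshold rule, matroid constraint, and multi-product formulation are not reproduced, and all inputs are illustrative.

```python
import random

def spread(graph, seeds, rng, n_sim=200):
    """Monte Carlo estimate of the expected cascade size from a seed set."""
    total = 0
    for _ in range(n_sim):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in graph.get(u, {}).items():
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / n_sim

def budgeted_greedy(graph, costs, budget, seed=0):
    """Cost-benefit greedy under one knapsack constraint: repeatedly add the
    affordable node with the best marginal gain per unit cost."""
    rng = random.Random(seed)
    chosen, spent, cur = [], 0.0, 0.0
    while True:
        cands = [u for u in costs if u not in chosen
                 and spent + costs[u] <= budget]
        if not cands:
            return chosen
        gain = {u: (spread(graph, chosen + [u], rng) - cur) / costs[u]
                for u in cands}
        u = max(gain, key=gain.get)
        if gain[u] <= 0:
            return chosen
        chosen.append(u)
        spent += costs[u]
        cur = spread(graph, chosen, rng)

graph = {0: {1: 0.5, 2: 0.5}, 1: {3: 0.4}, 2: {3: 0.4}}
costs = {0: 2.0, 1: 1.0, 2: 1.0, 3: 1.0}
print(budgeted_greedy(graph, costs, budget=3.0))
```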

    Aspects of the moduli space of instantons on C

    Full text link

    Distilling Information Reliability and Source Trustworthiness from Digital Traces

    Full text link
    Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy, often biased evaluations to distill robust, unbiased, and interpretable measures of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues about the reliability of the information and the trustworthiness of the sources. We then propose a temporal point process modeling framework that links these temporal traces to robust, unbiased, and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events. (Comment: Accepted at the 26th World Wide Web Conference (WWW-17).)
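    As a rough illustration of learning such parameters by convex optimization, the sketch below fits a log-linear Poisson model in which each source has a latent trustworthiness score and each item a latent reliability score. This is a deliberately simplified surrogate, not the paper's temporal point process framework; all variable names and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: counts[s, i] = number of negative evaluation events on
# item i contributed by source s, over exposure[s, i] units of observation.
rng = np.random.default_rng(1)
S, I = 4, 6
a_true = rng.normal(0, 1, S)   # latent per-source (un)trustworthiness
b_true = rng.normal(0, 1, I)   # latent per-item (un)reliability
exposure = rng.uniform(1, 5, (S, I))
counts = rng.poisson(exposure * np.exp(a_true[:, None] + b_true[None, :]))

def nll(theta):
    """Poisson negative log-likelihood with log-linear intensity
    lambda[s, i] = exposure[s, i] * exp(a[s] + b[i]); convex in (a, b)."""
    a, b = theta[:S], theta[S:]
    lam = exposure * np.exp(a[:, None] + b[None, :])
    return np.sum(lam - counts * np.log(lam))

res = minimize(nll, np.zeros(S + I), method="L-BFGS-B")
a_hat, b_hat = res.x[:S], res.x[S:]
# (a, b) are identified only up to a shared additive constant, which is
# enough to rank sources by trustworthiness and items by reliability.
print("recovered source scores:", np.round(a_hat - a_hat.mean(), 2))
```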