Exact Computation of Influence Spread by Binary Decision Diagrams
Evaluating influence spread in social networks is a fundamental procedure to
estimate the word-of-mouth effect in viral marketing. There have been numerous
studies on this topic; however, under the standard stochastic cascade
models, the exact computation of influence spread is known to be #P-hard. Thus,
the existing studies have used Monte-Carlo simulation-based approximations to
avoid exact computation.
We propose the first algorithm to compute influence spread exactly under the
independent cascade model. The algorithm first constructs binary decision
diagrams (BDDs) for all possible realizations of influence spread, then
computes influence spread by dynamic programming on the constructed BDDs. To
construct the BDDs efficiently, we design a new frontier-based search
procedure. The constructed BDDs can also be used to solve other
influence-spread related problems, such as random sampling without rejection,
conditional influence spread evaluation, dynamic probability update, and
gradient computation for probability optimization problems.
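To make the computed quantity concrete, the influence spread under the independent cascade model can be written as a brute-force enumeration over edge realizations. This is a toy sketch of the #P-hard naive computation that the BDD approach replaces, not the paper's algorithm; the example graph and probabilities are invented:

```python
from itertools import product

def exact_influence_spread(nodes, edges, seeds):
    """Exact expected spread under the independent cascade model by
    enumerating all 2^|E| edge realizations. edges: list of (u, v, p)."""
    total = 0.0
    for keep in product([False, True], repeat=len(edges)):
        prob = 1.0
        adj = {u: [] for u in nodes}
        for (u, v, p), live in zip(edges, keep):
            prob *= p if live else (1.0 - p)
            if live:
                adj[u].append(v)
        # Nodes reached from the seed set in this realization.
        seen = set(seeds)
        stack = list(seeds)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += prob * len(seen)
    return total

# Tiny line graph a -> b -> c, propagation probability 0.5 per edge:
spread = exact_influence_spread(
    ["a", "b", "c"], [("a", "b", 0.5), ("b", "c", 0.5)], ["a"]
)
# E[spread] = 1 + 0.5 + 0.25 = 1.75
```

The loop over `2^|E|` realizations is exactly the exponential blow-up that motivates compressing the realizations into a BDD and running dynamic programming over it.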
We conducted computational experiments to evaluate the proposed algorithm.
The algorithm successfully computed influence spread on real-world networks
with a hundred edges in a reasonable time, which is infeasible for the naive
algorithm. We also conducted an experiment evaluating the accuracy of the
Monte-Carlo simulation-based approximation by comparing it against the exact
influence spread obtained by the proposed algorithm.
Comment: WWW'1
Order-of-Magnitude Influence Diagrams
In this paper, we develop a qualitative theory of influence diagrams that can
be used to model and solve sequential decision making tasks when only
qualitative (or imprecise) information is available. Our approach is based on
an order-of-magnitude approximation of both probabilities and utilities and
allows for specifying partially ordered preferences via sets of utility values.
We also propose a dedicated variable elimination algorithm that can be applied
for solving order-of-magnitude influence diagrams.
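The order-of-magnitude idea can be sketched under one common formalization (Spohn-style ranking functions, where a probability of order ε^κ is represented by the integer κ): multiplying probabilities adds ranks, and summing probabilities takes the minimum rank. The variable names below are hypothetical and the paper's exact semantics may differ:

```python
INF = float("inf")  # rank infinity encodes "impossible"

def k_mult(a, b):
    # P(A) * P(B)  ~  eps**(kappa_A + kappa_B)
    return a + b

def k_add(a, b):
    # P(A) + P(B)  ~  eps**min(kappa_A, kappa_B)
    return min(a, b)

# Hypothetical two-variable model: joint rank = prior rank + conditional rank.
prior = {"rain": 0, "sun": 0}
cond = {("rain", "wet"): 0, ("rain", "dry"): 2,
        ("sun", "wet"): 1, ("sun", "dry"): 0}
joint = {(w, g): k_mult(prior[w], k) for (w, g), k in cond.items()}

# Eliminating (marginalizing out) the second variable, the elementary step
# a variable-elimination scheme would repeat:
marginal = {}
for (weather, ground), kappa in joint.items():
    marginal[weather] = k_add(marginal.get(weather, INF), kappa)
# marginal recovers the prior ranks: {"rain": 0, "sun": 0}
```

Replacing (×, +) on probabilities with (+, min) on ranks is what lets such an algorithm operate with purely qualitative information.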
International Competition on Graph Counting Algorithms 2023
This paper reports on the details of the International Competition on Graph
Counting Algorithms (ICGCA) held in 2023. The graph counting problem is to
count the subgraphs satisfying specified constraints on a given graph. The
problem is #P-complete, a computationally hard class. Since many
essential systems in modern society, e.g., infrastructure networks, are often
represented as graphs, graph counting algorithms are a key technology to
efficiently scan all the subgraphs representing the feasible states of the
system. In the ICGCA, contestants were asked to count the paths on a graph
under a length constraint. The benchmark set included 150 challenging
instances, emphasizing graphs resembling infrastructure networks. Eleven
solvers were submitted and ranked by the number of benchmarks correctly solved
within a time limit. The winning solver, TLDC, was designed based on three
fundamental approaches: backtracking search, dynamic programming, and model
counting or #SAT (a counting version of Boolean satisfiability). Detailed
analyses show that each approach has its own strengths, and one approach is
unlikely to dominate the others. The codes and papers of the participating
solvers are available: https://afsa.jp/icgca/.
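The competition's core task, counting length-constrained paths, can be sketched with the simplest of the three approaches, backtracking search (a minimal illustration on an invented graph, far from a competitive solver):

```python
def count_paths(adj, s, t, max_len):
    """Count simple s-t paths with at most max_len edges by backtracking,
    the naive form of one of the three ICGCA solver approaches."""
    def dfs(u, length, visited):
        if u == t:
            return 1
        if length == max_len:
            return 0  # length budget exhausted before reaching t
        total = 0
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                total += dfs(v, length + 1, visited)
                visited.remove(v)  # backtrack
        return total
    return dfs(s, 0, {s})

# 4-cycle 0-1-2-3-0: exactly two simple paths from 0 to 2 (via 1 or via 3),
# both of length 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

Dynamic-programming solvers (and the BDD-style techniques they build on) avoid re-exploring shared suffixes, which is why pure backtracking rarely dominates on the benchmark graphs.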
Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data
We consider learning, from strictly behavioral data, the structure and
parameters of linear influence games (LIGs), a class of parametric graphical
games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic
inference (CSI): making inferences from causal interventions on stable behavior
in strategic settings. Applications include the identification of the most
influential individuals in large (social) networks. Such tasks can also support
policy-making analysis. Motivated by the computational work on LIGs, we cast
the learning problem as maximum-likelihood estimation (MLE) of a generative
model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation
uncovers the fundamental interplay between goodness-of-fit and model
complexity: good models capture equilibrium behavior within the data while
controlling the true number of equilibria, including those unobserved. We
provide a generalization bound establishing the sample complexity for MLE in
our framework. We propose several algorithms including convex loss minimization
(CLM) and sigmoidal approximations. We prove that the number of exact PSNE in
LIGs is small, with high probability; thus, CLM is sound. We illustrate our
approach on synthetic data and real-world U.S. congressional voting records. We
briefly discuss our learning framework's generality and potential applicability
to general graphical games.
Comment: Journal of Machine Learning Research (accepted, pending
publication). Last conference version submitted March 30, 2012 to UAI 2012.
First conference version, entitled "Learning Influence Games," initially
submitted on June 1, 2010 to NIPS 201
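A minimal sketch of the PSNE condition that defines the generative model, assuming the commonly stated LIG payoff form x_i(Σ_j w_ij x_j − b_i) with actions in {−1, +1}; the two-player game below is invented for illustration:

```python
from itertools import product

def is_psne(x, W, b):
    """x is a pure-strategy Nash equilibrium of a linear influence game iff
    every player's action agrees in sign with its net incoming influence."""
    n = len(b)
    return all(
        x[i] * (sum(W[i][j] * x[j] for j in range(n) if j != i) - b[i]) >= 0
        for i in range(n)
    )

def enumerate_psne(W, b):
    """Brute-force enumeration of all PSNE (exponential in n; the paper's
    point is that the number of PSNE itself is small w.h.p.)."""
    n = len(b)
    return [x for x in product([-1, 1], repeat=n) if is_psne(x, W, b)]

# Two players with positive mutual influence and no bias: they coordinate,
# so the two unanimous profiles are the equilibria.
W = [[0, 1], [1, 0]]
b = [0, 0]
```

The MLE formulation in the paper places probability mass on exactly such equilibrium sets, which is why controlling the total number of PSNE (observed and unobserved) governs model complexity.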
A Computational Framework for Efficient Reliability Analysis of Complex Networks
With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks that far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena such as dependencies, common causes of failure, and imprecise probabilities without reevaluating the network structure.
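The separation of structure from probabilistic information can be illustrated by computing a survival signature by brute force on a toy system (the structure function below is hypothetical):

```python
from itertools import combinations
from fractions import Fraction

def survival_signature(structure, m):
    """Phi(l): probability the system works given that exactly l of its m
    exchangeable components work, uniform over which l components they are."""
    sig = {}
    for l in range(m + 1):
        states = list(combinations(range(m), l))
        working = sum(
            structure(tuple(1 if i in up else 0 for i in range(m)))
            for up in states
        )
        sig[l] = Fraction(working, len(states))
    return sig

# Hypothetical 3-component system: components 0 and 1 in parallel,
# in series with component 2.
def system(x):
    return (x[0] or x[1]) and x[2]

sig = survival_signature(system, 3)
# sig[2] == 2/3: of the three 2-working configurations, two keep the system up.
```

Once `sig` is tabulated, any component failure model (dependent, imprecise, time-varying) only changes the probability weights attached to each `l`; the combinatorial table itself never needs recomputing, which is the separation the survival signature provides.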
This cumulative dissertation presents several key improvements to the survival signature ecosystem focused on the structural evaluation of the system as well as the modelling of component failures.
A new method is presented in which (inter)-dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families it is possible to account for varying dependent effects. The graph-based design of vine copulas synergizes well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing the ability to represent complicated scenarios with a range of dependent failure modes.
The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing computational demand. Several simple test systems, as well as two real-world situations, are used to show the accuracy and performance.
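The Monte Carlo estimation step can be sketched as follows (a simplified stand-in for the dissertation's method, which additionally excludes low-interest configurations via percolation theory; the example system is hypothetical):

```python
import random

def mc_signature_entry(structure, m, l, samples=20000, seed=0):
    """Monte Carlo estimate of Phi(l): draw which l of the m components work
    uniformly at random and average the structure function."""
    rng = random.Random(seed)
    hits = 0
    comps = range(m)
    for _ in range(samples):
        up = set(rng.sample(comps, l))
        hits += structure(tuple(1 if i in up else 0 for i in range(m)))
    return hits / samples

# Hypothetical 4-component system: two parallel pairs in series.
def system(x):
    return (x[0] or x[1]) and (x[2] or x[3])

est = mc_signature_entry(system, 4, 2)
# Exact value is 4/6: four of the six 2-working states keep the system up.
```

Sampling replaces the binomial number of state evaluations per entry with a fixed budget, which is where the drastic reduction in computational demand comes from.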
However, with increasing network size and complexity this technique also reaches its limits. A second method is presented where the numerical demand is further reduced. Here, instead of approximating the whole survival signature only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore the network reliability. Because a few data points are sufficient to build the interval predictor model it allows for even larger systems to be analysed.
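A normalized radial-basis-function surrogate in its simplest form, to illustrate the interpolation idea (the Gaussian basis and width are chosen arbitrarily; the dissertation's data-point selection and interval predictor model are not reproduced here):

```python
import math

def nrbf_surrogate(centers, values, width=0.3):
    """Normalized RBF model: Gaussian kernel weights renormalized to sum to
    one, so the prediction is a convex combination of the data values."""
    def phi(r):
        return math.exp(-(r / width) ** 2)
    def predict(x):
        ws = [phi(abs(x - c)) for c in centers]
        s = sum(ws)
        return sum(w * v for w, v in zip(ws, values)) / s
    return predict

# A few survival-signature-like anchor values on [0, 1], blended smoothly:
f = nrbf_surrogate([0.0, 0.5, 1.0], [1.0, 0.6, 0.0])
```

Because only a handful of strategically chosen anchors are needed, the expensive Monte Carlo evaluations are confined to those points, and the surrogate (with its interval bounds) fills in the rest of the signature.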
With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach. A study is presented, where a previously developed framework for resilience decision-making is adapted to multidimensional scenarios in which the subsystems are represented as survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis due to the inherent separation of structural information. This enables efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure.
In addition to the advancements in the field of survival signature, this work also presents a new framework for uncertainty quantification developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development and its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
Development and Implementation of a Direct Evaluation Solution for Fault Tree Analyses Competing With Traditional Minimal Cut Sets Methods
Fault tree analysis (FTA) is a well-established technique for analyzing the safety risks of a system. Two prominent FTA methods, widely applied in the aerospace field, are the so-called minimal cut sets (MCS), which uses an approximate evaluation of the problem, and the direct evaluation (DE) of the fault tree, which uses a top-down recursive algorithm. The first approach is only valid for small basic event probabilities but has historically yielded results faster than exact solutions for complex fault trees; the second provides exact solutions at a higher computational cost. This article presents several improvements to both approaches in order to upgrade their computing performance. First, the MCS approach is improved, chiefly by optimizing the number of required permutations and by reusing the information available from previously solved subsets. Second, the DE approach is improved by reducing the number of recursive calls through a deep search for independent events in the fault tree, which can dramatically reduce the computation time for industrial fault trees with many repeated events. Additional implementation improvements have also been applied regarding hash tables and memory access and usage, as well as the so-called “virtual gates”, which allow an unlimited number of children per gate. The results presented hereafter are promising, not only because both approaches are highly optimized compared to the literature, but also because the resulting DE solution can compete in time resources (and, obviously, win in precision) with the MCS approach.
These improvements are relevant when considering the industrial, and more specifically the aeronautical, implementation and application of both techniques.
The author Jordi Pons‐Prats acknowledges the support from the Serra Hunter programme, Generalitat de Catalunya, as well as the support through the Severo Ochoa Centre of Excellence (2019‐2023) under the grant CEX2018‐000797‐S funded by MCIN/AEI/10.13039/501100011033.
Postprint (author's final draft
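The contrast between the two approaches can be grounded in the textbook top-down recursion for fault trees with independent, non-repeated basic events (a naive sketch on an invented tree; handling repeated events efficiently is precisely what the article's DE improvements target):

```python
def prob(node, p_basic):
    """Top-down exact evaluation of a fault tree. A node is ("event", name),
    ("and", children), or ("or", children). Exact only when no basic event
    appears in more than one branch (all inputs independent)."""
    kind = node[0]
    if kind == "event":
        return p_basic[node[1]]
    child_ps = [prob(c, p_basic) for c in node[1]]
    if kind == "and":                       # all children must fail
        out = 1.0
        for q in child_ps:
            out *= q
        return out
    if kind == "or":                        # at least one child fails
        out = 1.0
        for q in child_ps:
            out *= (1.0 - q)
        return 1.0 - out
    raise ValueError(kind)

# Hypothetical tree: TOP = (A and B) or C, with independent basic events.
tree = ("or", [("and", [("event", "A"), ("event", "B")]), ("event", "C")])
p = prob(tree, {"A": 0.1, "B": 0.2, "C": 0.05})
# Exact top-event probability: 1 - (1 - 0.02) * (1 - 0.05) = 0.069
```

An MCS-style approximation would instead sum the cut-set probabilities (0.02 + 0.05 = 0.07 here), which is accurate only while basic event probabilities stay small, matching the validity caveat stated above.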
The Solution Distribution of Influence Maximization: A High-level Experimental Study on Three Algorithmic Approaches
Influence maximization is among the most fundamental algorithmic problems in
social influence analysis. Over the last decade, a great effort has been
devoted to developing efficient algorithms for influence maximization, so that
identifying the "best" algorithm has become a demanding task. In SIGMOD'17,
Arora, Galhotra, and Ranu reported benchmark results on eleven existing
algorithms and demonstrated that there is no single state-of-the-art offering
the best trade-off between computational efficiency and solution quality.
In this paper, we report a high-level experimental study on three
well-established algorithmic approaches for influence maximization, referred to
as Oneshot, Snapshot, and Reverse Influence Sampling (RIS). Different from
Arora et al., our experimental methodology is so designed that we examine the
distribution of random solutions, characterize the relation between the sample
number and the actual solution quality, and avoid implementation dependencies.
Our main findings are as follows:
1. For a sufficiently large sample number, we obtain a unique solution
regardless of the algorithm.
2. The average solution quality of Oneshot, Snapshot, and RIS improves at the
same rate, up to a scaling of the sample number.
3. Oneshot requires more samples than Snapshot, and Snapshot requires fewer
but larger samples than RIS.
We discuss time efficiency when conditioning Oneshot, Snapshot, and RIS to be
of identical accuracy. Our conclusion is that Oneshot is suitable only if
available memory is limited; RIS is more efficient than Snapshot for large
networks; and Snapshot is preferable for small, low-probability networks.
Comment: To appear in SIGMOD 202
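The RIS approach discussed above can be sketched as reverse-reachable-set sampling followed by greedy maximum coverage (the sample budget and example graph below are illustrative, not tuned):

```python
import random

def reverse_reachable_set(nodes, in_edges, rng):
    """One RR set: pick a uniform target and collect every node that reaches
    it when each incoming edge (u, p) is live with probability p."""
    target = rng.choice(nodes)
    rrset, stack = {target}, [target]
    while stack:
        v = stack.pop()
        for u, p in in_edges.get(v, []):
            if u not in rrset and rng.random() < p:
                rrset.add(u)
                stack.append(u)
    return rrset

def ris_seeds(nodes, in_edges, k, n_samples=2000, seed=0):
    """Greedy max-coverage over RR sets, the core of Reverse Influence
    Sampling: a node covering many RR sets has high expected spread."""
    rng = random.Random(seed)
    rrsets = [reverse_reachable_set(nodes, in_edges, rng)
              for _ in range(n_samples)]
    chosen, covered = [], set()
    for _ in range(k):
        best = max(nodes, key=lambda u: sum(
            1 for i, s in enumerate(rrsets) if i not in covered and u in s))
        chosen.append(best)
        covered |= {i for i, s in enumerate(rrsets) if best in s}
    return chosen

# Star graph: hub "h" points to three leaves with probability 0.9 each,
# so "h" appears in most RR sets and is picked first.
nodes = ["h", "a", "b", "c"]
in_edges = {"a": [("h", 0.9)], "b": [("h", 0.9)], "c": [("h", 0.9)]}
```

Oneshot and Snapshot instead sample forward cascades (per evaluation or as reusable snapshots), which is the source of the memory/sample trade-offs summarized in the findings above.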