The Complexity of Finding Effectors
The NP-hard EFFECTORS problem on directed graphs is motivated by applications
in network mining, particularly concerning the analysis of probabilistic
information-propagation processes in social networks. In the corresponding
model the arcs carry probabilities and there is a probabilistic diffusion
process activating nodes by neighboring activated nodes with probabilities as
specified by the arcs. The point is to explain a given network activation state
as well as possible by using a minimum number of "effector nodes"; these are
selected before the activation process starts.
We correct, complement, and extend previous work from the data mining
community by a more thorough computational complexity analysis of EFFECTORS,
identifying both tractable and intractable cases. To this end, we also exploit
a parameterization measuring the "degree of randomness" (the number of "really"
probabilistic arcs) which might prove useful for analyzing other probabilistic
network diffusion problems as well. Comment: 28 pages.
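The diffusion process described in this abstract resembles the well-known independent cascade model. A minimal sketch, assuming each newly activated node gets a single activation attempt per out-arc (the exact EFFECTORS model may differ in details, and all names here are ours):

```python
import random

def independent_cascade(arcs, effectors, seed=0):
    """Simulate a probabilistic diffusion: each newly activated node gets
    one chance to activate each out-neighbor, succeeding with the arc's
    probability. `arcs` maps (u, v) -> probability; `effectors` is the
    set of nodes selected before the process starts."""
    rng = random.Random(seed)
    active = set(effectors)
    frontier = list(effectors)
    while frontier:
        nxt = []
        for u in frontier:
            for (a, b), p in arcs.items():
                if a == u and b not in active and rng.random() < p:
                    active.add(b)      # b becomes activated
                    nxt.append(b)      # and may activate its own neighbors
        frontier = nxt
    return active

# Deterministic arcs (p = 1.0 or 0.0) make the outcome certain.
arcs = {("s", "a"): 1.0, ("a", "b"): 1.0, ("b", "c"): 0.0}
print(sorted(independent_cascade(arcs, {"s"})))  # ['a', 'b', 's']
```

Arcs with probability exactly 0 or 1 behave deterministically, which is the intuition behind the "degree of randomness" parameter: only the "really" probabilistic arcs contribute uncertainty.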
Flow-based Influence Graph Visual Summarization
Visually mining a large influence graph is appealing yet challenging. People
are amazed by pictures of newscasting graphs on Twitter and engaged by hidden
citation networks in academia, yet they are often troubled by the poor
readability of the underlying visualization. Existing summarization methods
enhance the graph visualization with blocked views, but have an adverse effect on
the latent influence structure. How can we visually summarize a large graph to
maximize influence flows? In particular, how can we illustrate the impact of an
individual node through the summarization? Can we maintain the appealing graph
metaphor while preserving both the overall influence pattern and fine
readability?
To answer these questions, we first formally define the influence graph
summarization problem. Second, we propose an end-to-end framework to solve the
new problem. Our method can not only highlight the flow-based influence
patterns in the visual summarization, but also inherently support rich graph
attributes. Last, we present a theoretical analysis and report our experimental
results. Both demonstrate that our framework can effectively approximate the
proposed influence graph summarization objective while outperforming previous
methods in a typical scenario of visually mining academic citation
networks. Comment: to appear in IEEE International Conference on Data Mining (ICDM),
Shenzhen, China, December 201
Model-Based Method for Social Network Clustering
We propose a simple mixed membership model for social network clustering in
this note. A flexible function is adopted to measure affinities among a set of
entities in a social network. The model not only allows each entity in the
network to possess more than one membership, but also provides accurate
statistical inference about network structure. We estimate the membership
parameters using an MCMC algorithm. We evaluate the performance of the
proposed algorithm by applying our model to two empirical social network
datasets, the Zachary karate club data and the bottlenose dolphin network
data. We also conduct numerical studies on several types of simulated
networks to assess the effectiveness of our algorithm. We close with brief
concluding remarks and directions for future work.
Integrated Inference and Learning of Neural Factors in Structural Support Vector Machines
Tackling pattern recognition problems in areas such as computer vision,
bioinformatics, speech or text recognition is often done best by taking into
account task-specific statistical relations between output variables. In
structured prediction, this internal structure is used to predict multiple
outputs simultaneously, leading to more accurate and coherent predictions.
Structural support vector machines (SSVMs) are nonprobabilistic models that
optimize a joint input-output function through margin-based learning. Because
SSVMs generally disregard the interplay between unary and interaction factors
during the training phase, the final parameters are suboptimal. Moreover, their
factors are often restricted to linear combinations of input features, limiting
their generalization power. To improve prediction accuracy, this paper proposes:
(i) Joint inference and learning by integration of back-propagation and
loss-augmented inference in SSVM subgradient descent; (ii) Extending SSVM
factors to neural networks that form highly nonlinear functions of input
features. Image segmentation benchmark results demonstrate improvements over
conventional SSVM training methods in terms of accuracy, highlighting the
feasibility of end-to-end SSVM training with neural factors.
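To make the margin-based learning concrete, here is a minimal sketch of one SSVM subgradient step with a plain linear score factor and a toy label space (the function names and setup are ours, not the paper's). The paper's contribution, roughly, replaces the linear score with a neural network and backpropagates the same loss-augmented subgradient through it:

```python
import numpy as np

def structured_hinge_step(W, x, y_true, labels, lr=0.1):
    """One subgradient step of margin-based SSVM training with a linear
    factor: loss-augmented inference finds the most violating label, and
    the subgradient pushes scores toward y_true and away from it."""
    scores = W @ x                                           # one score per label
    # Add the (0/1) task loss before the argmax: loss-augmented inference.
    margins = scores + np.array([l != y_true for l in labels], float)
    y_hat = int(np.argmax(margins))
    if y_hat != y_true:                                      # margin violated
        W[y_true] += lr * x
        W[y_hat] -= lr * x
    return W, y_hat

# Toy run: a few steps drive the violating label away.
W = np.zeros((2, 2))
x = np.array([1.0, 0.0])
for _ in range(10):
    W, y_hat = structured_hinge_step(W, x, y_true=0, labels=[0, 1])
print(y_hat)  # 0 once the margin constraint is satisfied
```

In the neural-factor variant, `W @ x` becomes the output of a nonlinear network, and the same update signal is backpropagated through its weights instead of being applied to `W` directly.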
Towards Building Deep Networks with Bayesian Factor Graphs
We propose a Multi-Layer Network based on the Bayesian framework of the
Factor Graphs in Reduced Normal Form (FGrn) applied to a two-dimensional
lattice. The Latent Variable Model (LVM) is the basic building block of a
quadtree hierarchy built on top of a bottom layer of random variables that
represent pixels of an image, a feature map, or more generally a collection of
spatially distributed discrete variables. The multi-layer architecture
implements a hierarchical data representation that, via belief propagation, can
be used for learning and inference. Typical uses are pattern completion,
correction and classification. The FGrn paradigm provides great flexibility and
modularity and appears as a promising candidate for building deep networks: the
system can be easily extended by introducing new and different (in cardinality
and in type) variables. Prior knowledge, or supervised information, can be
introduced at different scales. The FGrn paradigm provides a handy way for
building all kinds of architectures by interconnecting only three types of
units: Single Input Single Output (SISO) blocks, Sources and Replicators. The
network is designed like a circuit diagram and the belief messages flow
bidirectionally in the whole system. The learning algorithms operate only
locally within each block. The framework is demonstrated in this paper in a
three-layer structure applied to images extracted from a standard data set. Comment: Submitted for journal publication.
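As a rough illustration of the bidirectional message flow through a SISO block, assuming discrete variables and a row-stochastic conditional matrix M (notation and names are ours, not necessarily the paper's FGrn conventions):

```python
import numpy as np

def siso_forward(M, f):
    """Forward belief through a SISO block, where M[i, j] = P(out=j | in=i):
    the input belief f over the block's input variable is mapped to a
    normalized belief over its output variable."""
    b = M.T @ f
    return b / b.sum()

def siso_backward(M, b):
    """Backward message: normalized relative likelihood of each input
    state given the belief b flowing back from the output side."""
    f = M @ b
    return f / f.sum()

# A noisy channel that flips a binary input with probability 0.1.
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(siso_forward(M, np.array([1.0, 0.0])))  # [0.9 0.1]
```

Because messages in both directions only involve the block's own matrix M, learning and inference can operate locally within each block, which is the modularity the abstract emphasizes.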
Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs (Extended Version)
Many exact and approximate solution methods for Markov Decision Processes
(MDPs) attempt to exploit structure in the problem and are based on
factorization of the value function. Multiagent settings in particular, however,
are known to suffer from an exponential increase in value component sizes as
interactions become denser, meaning that approximation architectures are
restricted in the problem sizes and types they can handle. We present an
approach to mitigate this limitation for certain types of multiagent systems,
exploiting a property that can be thought of as "anonymous influence" in the
factored MDP. Anonymous influence summarizes joint variable effects efficiently
whenever the explicit representation of variable identity in the problem can be
avoided. We show how representational benefits from anonymity translate into
computational efficiencies, both for general variable elimination in a factor
graph and, in particular, for the approximate linear programming solution to
factored MDPs. The latter allows us to scale linear programming to factored MDPs
that were previously unsolvable. Our results are shown for the control of a
stochastic disease process over a densely connected graph with 50 nodes and 25
agents. Comment: Extended version of an AAAI 2016 paper.
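The "anonymous influence" idea can be illustrated with a toy counting abstraction (function names are ours): when a factor depends only on how many binary variables are active, not on which ones, a table over counts replaces the exponential joint table:

```python
from itertools import product

def joint_table(f, n):
    """Explicit factor over all 2^n joint binary assignments."""
    return {bits: f(bits) for bits in product((0, 1), repeat=n)}

def count_table(g, n):
    """Anonymous-influence factor: when the value depends only on the
    number of active variables, n + 1 count entries suffice."""
    return {k: g(k) for k in range(n + 1)}

n = 10
full = joint_table(lambda bits: sum(bits) ** 2, n)   # 1024 entries
compact = count_table(lambda k: k ** 2, n)           # 11 entries
print(len(full), len(compact))  # 1024 11
```

Variable elimination over such count-based factors avoids enumerating variable identities, which is the representational saving that the paper translates into computational efficiency.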