An Order-based Algorithm for Minimum Dominating Set with Application in Graph Mining
A dominating set is a set of vertices of a graph such that every other vertex
has a neighbour in the dominating set. We propose a new order-based randomised
local search (RLS) algorithm to solve the minimum dominating set problem in
large graphs. Experimental evaluation is presented for multiple types of
problem instances. These instances include unit disk graphs, which represent a
model of wireless networks, random scale-free networks, as well as samples from
two social networks and real-world graphs studied in network science. Our
experiments indicate that RLS performs better than both a classical greedy
approximation algorithm and two metaheuristic algorithms based on ant colony
optimisation and local search. The order-based algorithm is able to find small
dominating sets for graphs with tens of thousands of vertices. In addition, we
propose a multi-start variant of RLS that is suitable for solving the
minimum weight dominating set problem. The application of RLS in graph
mining is also briefly demonstrated.
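The classical greedy baseline that the abstract compares RLS against can be sketched as follows; the function name and the graph representation (a dict mapping each vertex to its neighbour set) are illustrative assumptions, not taken from the paper:

```python
def greedy_dominating_set(graph):
    """Classical greedy baseline: repeatedly pick the vertex whose closed
    neighbourhood covers the most not-yet-dominated vertices.
    graph: dict mapping each vertex to its set of neighbours (undirected)."""
    undominated = set(graph)
    dominating = set()
    while undominated:
        # coverage of v = undominated vertices in N(v) ∪ {v}
        best = max(graph, key=lambda v: len(undominated & (graph[v] | {v})))
        dominating.add(best)
        undominated -= graph[best] | {best}
    return dominating

# Example: a path 0-1-2-3-4; {1, 3} dominates every vertex.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
ds = greedy_dominating_set(path)
```

This greedy rule achieves the well-known O(log n) approximation guarantee, which is the yardstick the order-based local search is measured against.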
Algorithms for the minimum sum coloring problem: a review
The Minimum Sum Coloring Problem (MSCP) is a variant of the well-known vertex
coloring problem which has a number of AI related applications. Due to its
theoretical and practical relevance, MSCP attracts increasing attention. The
only existing review on the problem dates back to 2004 and mainly covers the
history of MSCP and theoretical developments on specific graphs. In recent
years, the field has witnessed significant progresses on approximation
algorithms and practical solution algorithms. The purpose of this review is to
provide a comprehensive survey of the most recent and representative MSCP
algorithms. To be informative, we identify the general framework followed by
practical solution algorithms and the key ingredients that make them
successful. By classifying the main search strategies and putting forward the
critical elements of the reviewed methods, we wish to encourage future
development of more powerful methods and motivate new applications.
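To make the MSCP objective concrete (minimise the sum of the colour values over all vertices, rather than the number of colours), here is a sketch of a simple degree-ordered greedy heuristic; it is a generic baseline for illustration, not one of the reviewed algorithms:

```python
def greedy_sum_coloring(graph):
    """Colour vertices in non-increasing degree order, giving each the
    smallest positive colour unused by its neighbours.
    graph: dict mapping each vertex to its set of neighbours."""
    colour = {}
    for v in sorted(graph, key=lambda u: len(graph[u]), reverse=True):
        taken = {colour[u] for u in graph[v] if u in colour}
        c = 1
        while c in taken:
            c += 1
        colour[v] = c
    return colour

# A triangle {0,1,2} with a pendant vertex 3 attached to vertex 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
col = greedy_sum_coloring(g)
total = sum(col.values())   # the MSCP objective value of this colouring
```

Note that a colouring minimising the sum may use more colours than the chromatic number, which is what distinguishes MSCP from ordinary vertex colouring.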
Conditional Reliability in Uncertain Graphs
Network reliability is a well-studied problem that requires measuring the
probability that a target node is reachable from a source node in a
probabilistic (or uncertain) graph, i.e., a graph where every edge is assigned
a probability of existence. Many approaches and problem variants have been
considered in the literature, all assuming that edge-existence probabilities
are fixed. Nevertheless, in real-world graphs, edge probabilities typically
depend on external conditions. In metabolic networks a protein can be converted
into another protein with some probability depending on the presence of certain
enzymes. In social influence networks the probability that a tweet of some user
will be re-tweeted by her followers depends on whether the tweet contains
specific hashtags. In transportation networks the probability that a network
segment will work properly or not might depend on external conditions such as
weather or time of the day. In this paper we overcome this limitation and focus
on conditional reliability, that is, assessing reliability when edge-existence
probabilities depend on a set of conditions. In particular, we study the
problem of determining the k conditions that maximize the reliability between
two nodes. We deeply characterize our problem and show that, even employing
polynomial-time reliability-estimation methods, it is NP-hard, does not admit
any PTAS, and the underlying objective function is non-submodular. We then
devise a practical method that targets both accuracy and efficiency. We also
study natural generalizations of the problem with multiple source and target
nodes. An extensive empirical evaluation on several large, real-life graphs
demonstrates the effectiveness and scalability of the proposed methods. Comment: 14 pages, 13 figures
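A standard polynomial-time reliability-estimation method of the kind the abstract alludes to is Monte Carlo sampling over possible worlds. The sketch below is generic, not the paper's method, and the edge-list representation with independent edge probabilities is an assumption:

```python
import random

def reliability_mc(edges, s, t, samples=2000, seed=0):
    """Estimate P(t is reachable from s) in an uncertain graph.
    edges: list of (u, v, p) triples, where each undirected edge exists
    independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # sample one possible world by flipping a coin per edge
        adj = {}
        for u, v, p in edges:
            if rng.random() < p:
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
        # BFS from s in the sampled world
        frontier, seen = [s], {s}
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj.get(u, ()):
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        hits += t in seen
    return hits / samples

# Two parallel s-t edges, each present with probability 0.5:
# true reliability = 1 - 0.5 * 0.5 = 0.75.
est = reliability_mc([("s", "t", 0.5), ("s", "t", 0.5)], "s", "t")
```

Conditional reliability, as studied in the paper, would additionally make each edge probability a function of the chosen condition set; the sampling step stays the same once the conditions fix the probabilities.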
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the ones for
the Hyperlink graph use distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 201
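The frontier-based (level-synchronous) pattern that such parallel graph implementations build on can be sketched sequentially; in a parallel runtime the per-frontier loop becomes a parallel map with atomic updates. This is an illustrative sketch, not GBBS code:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: process one frontier per round.
    In parallel frameworks the inner loop over the frontier's out-edges is
    a parallel map, and the visited check uses compare-and-swap.
    adj: dict mapping each vertex to its set of neighbours."""
    level = {source: 0}
    frontier = [source]
    round_no = 0
    while frontier:
        round_no += 1
        nxt = []
        for u in frontier:              # parallelisable over the frontier
            for w in adj.get(u, ()):
                if w not in level:      # CAS in a parallel implementation
                    level[w] = round_no
                    nxt.append(w)
        frontier = nxt
    return level

chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
lv = bfs_levels(chain, 0)   # {0: 0, 1: 1, 2: 2, 3: 3}
```

The work done per round is proportional to the frontier's out-degree sum, which is the property that work-efficient parallel BFS analyses rely on.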
Stability of Influence Maximization
The present article serves as an erratum to our paper of the same title,
which was presented and published in the KDD 2014 conference. In that article,
we claimed falsely that the objective function defined in Section 1.4 is
non-monotone submodular. We are deeply indebted to Debmalya Mandal, Jean
Pouget-Abadie and Yaron Singer for bringing to our attention a counter-example
to that claim.
Subsequent to becoming aware of the counter-example, we have shown that the
objective function is in fact NP-hard to approximate to within a factor of
N^{1-ε} for any ε > 0.
In an attempt to fix the record, the present article combines the problem
motivation, models, and experimental results sections from the original
incorrect article with the new hardness result. We would like readers to only
cite and use this version (which will remain an unpublished note) instead of
the incorrect conference version. Comment: Erratum of the paper "Stability of
Influence Maximization", which was presented and published at the KDD 2014 conference
Construction of near-optimal vertex clique covering for real-world networks
We propose a method based on combining a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), where the aim is to partition the vertices of a graph into the smallest number of classes, which induce cliques. Searching for the solution to CCP is highly motivated by analysis of social and other real-world networks, applications in graph mining, as well as by the fact that CCP is one of the classical NP-hard problems. Combining the construction and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that in the domain of real-world networks, many of the obtained solutions are optimal, while the rest of them are near-optimal. In addition, the method has a polynomial time complexity and shows much promise for its practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, as well as coappearance networks of literary works' characters from the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős–Rényi model and Leighton's model.
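For contrast with the combined constructive/bounding method proposed above, a naive constructive heuristic for CCP can be sketched as follows; this is a generic baseline with illustrative names, not the paper's algorithm:

```python
def greedy_clique_cover(graph):
    """Naive CCP heuristic: repeatedly grow a maximal clique among the
    remaining vertices, remove it, and repeat until all are covered.
    graph: dict mapping each vertex to its set of neighbours."""
    remaining = set(graph)
    cover = []
    while remaining:
        # seed with a vertex of highest remaining degree
        v = max(remaining, key=lambda u: len(graph[u] & remaining))
        clique = {v}
        for u in sorted(remaining - {v}):
            if clique <= graph[u]:   # u is adjacent to every clique member
                clique.add(u)
        cover.append(clique)
        remaining -= clique
    return cover

# Two disjoint triangles: the heuristic covers them with 2 cliques.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
cliques = greedy_clique_cover(g)
```

Such a heuristic gives an upper bound on the cover size; a matching lower bound (for instance, from an independent set, whose vertices must lie in distinct cliques) is what certifies optimality in the manner the abstract describes.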