The Graph Motif problem parameterized by the structure of the input graph
The Graph Motif problem was introduced in 2006 in the context of biological
networks. It consists of deciding whether or not a multiset of colors occurs in
a connected subgraph of a vertex-colored graph. Graph Motif has been mostly
analyzed from the standpoint of parameterized complexity. The main parameters
which came into consideration were the size of the multiset and the number of
colors. However, in many applications of Graph Motif, the input graph
originates from real life and therefore has structure. Motivated by this
observation, we systematically study its complexity relative to graph
structural parameters. For a wide range of parameters, we give new or improved
FPT algorithms, or show that the problem remains intractable. For the FPT
cases, we also give some kernelization lower bounds as well as some ETH-based
lower bounds on the worst case running time. Interestingly, we establish that
Graph Motif is W[1]-hard (while in W[P]) for parameter max leaf number, which
is, to the best of our knowledge, the first problem known to behave this way.
Comment: 24 pages, accepted in DAM, conference version in IPEC 201
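As a concrete illustration of the decision problem, the following brute-force sketch (not one of the paper's FPT algorithms) checks whether a multiset of colors occurs on a connected subgraph; the graph representation and names are hypothetical:

```python
from itertools import combinations
from collections import Counter

def has_motif(adj, color, motif):
    """Brute-force Graph Motif check: does the multiset `motif` of colors
    occur on a connected subgraph of the vertex-colored graph?
    adj: dict vertex -> set of neighbors; color: dict vertex -> color."""
    k = len(motif)
    target = Counter(motif)
    for subset in combinations(adj, k):
        # the chosen vertices must carry exactly the motif's colors
        if Counter(color[v] for v in subset) != target:
            continue
        # DFS restricted to the chosen vertices tests connectivity
        chosen = set(subset)
        seen, stack = {subset[0]}, [subset[0]]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in chosen and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == k:
            return True
    return False
```

On the path a-b-c colored r, g, r, the motif {r, g} occurs (vertices a, b) but {r, r} does not, since a and c are not adjacent. The enumeration is exponential in the motif size, which is exactly why parameterized algorithms are studied for this problem.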
Generating realistic scaled complex networks
Research on generative models is a central project in the emerging field of
network science, and it studies how statistical patterns found in real networks
could be generated by formal rules. Output from these generative models is then
the basis for designing and evaluating computational methods on networks, and
for verification and simulation studies. During the last two decades, a variety
of models have been proposed with the ultimate goal of achieving comprehensive
realism for the generated networks. In this study, we (a) introduce a new
generator, termed ReCoN; (b) explore how ReCoN and some existing models can be
fitted to an original network to produce a structurally similar replica; (c)
use ReCoN to produce networks much larger than the original exemplar; and
finally (d) discuss open problems and promising research directions. In a
comparative experimental study, we find that ReCoN is often superior to many
other state-of-the-art network generation methods. We argue that ReCoN is a
scalable and effective tool for modeling a given network while preserving
important properties at both micro- and macroscopic scales, and for scaling the
exemplar data by orders of magnitude in size.
Comment: 26 pages, 13 figures, extended version; a preliminary version of the
paper was presented at the 5th International Workshop on Complex Networks and
their Application
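The abstract does not specify ReCoN's internals, but the fit-and-scale idea can be illustrated with a much simpler generator: a Chung-Lu model fitted to an exemplar's degree sequence and scaled by replicating that sequence. This is an assumption-laden stand-in for exposition only, not the ReCoN algorithm:

```python
import random

def chung_lu_scaled(degrees, scale=1, seed=0):
    """Generate a random graph whose expected degrees follow the input
    degree sequence replicated `scale` times -- a crude 'scaled replica'.
    Illustrates the fit-and-scale idea only; NOT the ReCoN generator."""
    rng = random.Random(seed)
    d = list(degrees) * scale               # scaled-up degree sequence
    two_m = sum(d)                          # twice the expected edge count
    n = len(d)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Chung-Lu edge probability, capped at 1
            p = min(1.0, d[i] * d[j] / two_m)
            if rng.random() < p:
                edges.add((i, j))
    return n, edges
```

A model like this preserves the degree distribution in expectation but little else; the point of more elaborate generators such as ReCoN is to also preserve meso- and macroscopic structure when scaling.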
Unifying Sparsest Cut, Cluster Deletion, and Modularity Clustering Objectives with Correlation Clustering
Graph clustering, or community detection, is the task of identifying groups
of closely related objects in a large network. In this paper we introduce a new
community-detection framework called LambdaCC that is based on a specially
weighted version of correlation clustering. A key component in our methodology
is a clustering resolution parameter, λ, which implicitly controls the
size and structure of clusters formed by our framework. We show that, by
increasing this parameter, our objective effectively interpolates between two
different strategies in graph clustering: finding a sparse cut and forming
dense subgraphs. Our methodology unifies and generalizes a number of other
important clustering quality functions including modularity, sparsest cut, and
cluster deletion, and places them all within the context of an optimization
problem that has been well studied from the perspective of approximation
algorithms. Our approach is particularly relevant in the regime of finding
dense clusters, as it leads to a 2-approximation for the cluster deletion
problem. We use our approach to cluster several graphs, including large
collaboration networks and social networks.
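A minimal sketch of the weighted correlation-clustering objective underlying such a framework, assuming the standard LambdaCC disagreement form (cutting an edge costs 1 − λ, placing a non-adjacent pair in the same cluster costs λ):

```python
from itertools import combinations

def lambda_cc_cost(n, edges, clusters, lam):
    """Disagreement cost of a clustering under the unweighted LambdaCC
    objective: each cut edge costs 1 - lam, each non-adjacent pair placed
    in the same cluster costs lam. Small lam favors few sparse cuts;
    large lam favors small dense clusters."""
    edge_set = {frozenset(e) for e in edges}
    label = {}
    for c, members in enumerate(clusters):
        for v in members:
            label[v] = c
    cost = 0.0
    for i, j in combinations(range(n), 2):
        same = label[i] == label[j]
        adjacent = frozenset((i, j)) in edge_set
        if same and not adjacent:
            cost += lam          # clustered a non-edge together
        elif not same and adjacent:
            cost += 1 - lam      # cut an edge
    return cost
```

For a triangle {0, 1, 2} with a pendant vertex 3 attached to 2 and λ = 0.5, keeping the triangle as one cluster and 3 alone costs 0.5 (one cut edge), while merging everything costs 1.0 (two non-adjacent pairs), so the objective prefers the dense cluster, matching the interpolation described above.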
Quantifying and minimizing risk of conflict in social networks
Controversy, disagreement, conflict, polarization and opinion divergence in social networks have been the subject of much recent research. In particular, researchers have addressed the question of how such concepts can be quantified given people’s prior opinions, and how they can be optimized by influencing the opinion of a small number of people or by editing the network’s connectivity.
Here, rather than optimizing such concepts given a specific set of prior opinions, we study whether they can be optimized in the average case and in the worst case over all sets of prior opinions. In particular, we derive the worst-case and average-case conflict risk of networks, and we propose algorithms for optimizing these.
For some measures of conflict, these are non-convex optimization problems with many local minima. We provide a theoretical and empirical analysis of the nature of some of these local minima, and show how they are related to existing organizational structures.
Empirical results show how a small number of edits quickly decreases a network's conflict risk, both average-case and worst-case. Furthermore, they show that minimizing average-case conflict risk often does not reduce worst-case conflict risk. Minimizing worst-case conflict risk, on the other hand, while computationally more challenging, is generally effective at minimizing both worst-case and average-case conflict risk.
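The abstract does not define its conflict measures, but a common disagreement measure in this line of work is the sum of squared opinion differences across edges (zᵀLz for the graph Laplacian L). The following is an illustrative stand-in, not the paper's exact quantity:

```python
def disagreement(edges, opinions):
    """Total disagreement on a network: the sum of squared opinion
    differences across edges, i.e. z^T L z where L is the graph
    Laplacian and z the opinion vector. An illustrative conflict
    measure, not necessarily the one optimized in the paper."""
    return sum((opinions[i] - opinions[j]) ** 2 for i, j in edges)
```

Under a measure like this, the average-case risk over random opinions relates to the trace of a network-dependent matrix, while the worst case over bounded opinions is governed by its largest eigenvalue, which hints at why the two minimization problems behave differently.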
Fast Quasi-Threshold Editing
We introduce Quasi-Threshold Mover (QTM), an algorithm to solve the
quasi-threshold (also called trivially perfect) graph editing problem with edge
insertion and deletion. Given a graph, it computes a quasi-threshold graph that
is close in terms of edit count. This editing problem is NP-hard. We present an
extensive experimental study, in which we show that QTM is the first algorithm
that is able to scale to large real-world graphs in practice. As a side result
we further present a simple linear-time algorithm for the quasi-threshold
recognition problem.
Comment: 26 pages, 4 figures, submitted to ESA 201
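Quasi-threshold (trivially perfect) graphs are exactly the graphs with no induced P4 or C4, which yields a simple, if slow, recognition check. This O(n⁴) sketch uses that characterization and is not the linear-time algorithm mentioned in the abstract:

```python
from itertools import combinations

def is_quasi_threshold(adj):
    """Check whether a graph is quasi-threshold (trivially perfect) by
    verifying that no 4 vertices induce a path P4 or a cycle C4.
    adj: dict vertex -> set of neighbors. O(n^4) characterization check,
    not a linear-time recognition algorithm."""
    for quad in combinations(list(adj), 4):
        # degrees inside the induced subgraph on these 4 vertices
        deg = {v: sum(1 for w in quad if w != v and w in adj[v])
               for v in quad}
        m = sum(deg.values()) // 2
        degs = sorted(deg.values())
        if m == 3 and degs == [1, 1, 2, 2]:   # induced P4
            return False
        if m == 4 and degs == [2, 2, 2, 2]:   # induced C4
            return False
    return True
```

On four vertices, three edges with degree sequence [1, 1, 2, 2] force an induced P4, and four edges with all degrees 2 force a C4, so the two tests are exact. A path on four vertices is rejected, while a star (a trivially perfect graph) is accepted.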