Assessing the Computational Complexity of Multi-Layer Subgraph Detection
Multi-layer graphs consist of several graphs (layers) over the same vertex
set. They are motivated by real-world problems where entities (vertices) are
associated via multiple types of relationships (edges in different layers). We
chart the border of computational (in)tractability for the class of subgraph
detection problems on multi-layer graphs, including fundamental problems such
as maximum matching, finding certain clique relaxations (motivated by community
detection), or path problems. While we mostly encounter hardness results, sometimes
even for two or three layers, we also identify some islands of tractability.
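To make the setting concrete, here is a minimal sketch of a multi-layer graph as several edge sets over one shared vertex set, together with a simple cross-layer subgraph constraint (edges present in every layer). The vertex names and layer semantics are invented for illustration and are not from the paper.

```python
# A multi-layer graph: several edge sets (layers) over the same vertex set.
vertices = {1, 2, 3, 4}
layers = [
    {(1, 2), (2, 3), (3, 4)},  # e.g. a "friendship" layer (illustrative)
    {(1, 2), (2, 3), (1, 4)},  # e.g. a "co-work" layer (illustrative)
]

# One simple multi-layer subgraph constraint: keep only edges that
# appear in every layer.
common_edges = set.intersection(*layers)
print(common_edges)  # {(1, 2), (2, 3)}
```

Many of the problems studied (matching, clique relaxations, paths) impose richer per-layer conditions than this intersection, which is what drives the hardness results.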
Core Decomposition in Multilayer Networks: Theory, Algorithms, and Applications
Multilayer networks are a powerful paradigm to model complex systems, where
multiple relations occur between the same entities. Despite the keen interest
in a variety of tasks, algorithms, and analyses in this type of network, the
problem of extracting dense subgraphs has remained largely unexplored so far.
In this work we study the problem of core decomposition of a multilayer
network. The multilayer context is much more challenging as no total order exists
among multilayer cores; rather, they form a lattice whose size is exponential
in the number of layers. In this setting we devise three algorithms which
differ in the way they visit the core lattice and in their pruning techniques.
We then move a step forward and study the problem of extracting the
inner-most (also known as maximal) cores, i.e., the cores that are not
dominated by any other core in terms of their core index in all the layers.
Inner-most cores are typically orders of magnitude fewer than the total number of cores.
Motivated by this, we devise an algorithm that effectively exploits the
maximality property and extracts inner-most cores directly, without first
computing a complete decomposition.
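The basic building block behind such decompositions can be sketched as follows: given one coreness threshold per layer, peel away vertices until every remaining vertex meets the threshold in every layer. This is a generic peeling sketch, not any of the paper's three lattice-visiting algorithms.

```python
from collections import defaultdict

def multilayer_core(vertices, layers, k):
    """Return the vertices whose degree is >= k[l] in every layer l,
    computed by iterative peeling. 'layers' is a list of edge sets over
    the same vertex set (illustrative representation, not the paper's)."""
    core = set(vertices)
    changed = True
    while changed:
        changed = False
        for l, edges in enumerate(layers):
            deg = defaultdict(int)
            for u, v in edges:
                if u in core and v in core:
                    deg[u] += 1
                    deg[v] += 1
            drop = {v for v in core if deg[v] < k[l]}
            if drop:
                core -= drop
                changed = True
    return core

core = multilayer_core(
    {1, 2, 3, 4},
    [{(1, 2), (2, 3), (1, 3), (3, 4)}, {(1, 2), (2, 3), (1, 3)}],
    k=(2, 2),
)
print(core)  # {1, 2, 3}: vertex 4 fails the threshold in both layers
```

The lattice mentioned above arises because each vector k of per-layer thresholds yields such a core, and distinct vectors are only partially ordered.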
Finally, we showcase the multilayer core-decomposition tool in a variety of
scenarios and problems. We start by considering the problem of densest-subgraph
extraction in multilayer networks. We introduce a definition of multilayer
densest subgraph that trades off high density against the number of layers in
which the high density holds, and exploit multilayer core decomposition to
approximate this problem with quality guarantees. As further applications, we
show how to utilize multilayer core decomposition to speed-up the extraction of
frequent cross-graph quasi-cliques and to generalize the community-search
problem to the multilayer setting.
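One way such a density-versus-layers trade-off can be written down is sketched below: for a candidate vertex set, score the best subset of layers by its worst per-layer density, scaled by the subset size raised to a trade-off exponent beta. Both the exact functional form and the name `multilayer_density` are assumptions made for illustration, not the paper's definition verbatim.

```python
from itertools import combinations

def multilayer_density(S, layers, beta=1.0):
    """A sketch of a density/layers trade-off objective (assumed form):
    max over non-empty layer subsets of
        (min per-layer average degree of S) * |subset| ** beta."""
    def dens(edges):
        e = sum(1 for u, v in edges if u in S and v in S)
        return e / len(S)

    best = 0.0
    for r in range(1, len(layers) + 1):
        for sub in combinations(range(len(layers)), r):
            best = max(best, min(dens(layers[l]) for l in sub) * r ** beta)
    return best

S = {1, 2, 3}
ls = [{(1, 2), (2, 3), (1, 3)}, {(1, 2)}]
print(multilayer_density(S, ls, beta=1.0))  # 1.0: layer 0 alone wins
print(multilayer_density(S, ls, beta=2.0))  # 4/3: both layers together win
```

Larger beta rewards density that holds across more layers, which is the trade-off the abstract describes; the core decomposition supplies candidate sets S with approximation guarantees.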
Mining Dense Subgraphs with Similar Edges
When searching for interesting structures in graphs, it is often important to
take into account not only the graph connectivity, but also the metadata
available, such as node and edge labels, or temporal information. In this paper
we are interested in settings where such metadata is used to define a
similarity between edges. We consider the problem of finding subgraphs that are
dense and whose edges are similar to each other with respect to a given
similarity function. Depending on the application, this function can be, for
example, the Jaccard similarity between the edge label sets, or the temporal
correlation of the edge occurrences in a temporal graph. We formulate a
Lagrangian relaxation-based optimization problem to search for dense subgraphs
with high pairwise edge similarity. We design a novel algorithm to solve the
problem through parametric MinCut, and provide an efficient search scheme to
iterate through the values of the Lagrangian multipliers. Our study is
complemented by an evaluation on real-world datasets, which demonstrates the
usefulness and efficiency of the proposed approach.
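The Jaccard similarity between edge label sets mentioned above is simple to state in code; the helper and the toy labels here are illustrative, not taken from the paper's implementation.

```python
def jaccard(a, b):
    """Jaccard similarity |a ∩ b| / |a ∪ b| between two label sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical edge labels on a small graph.
edge_labels = {
    ("u", "v"): {"likes", "follows"},
    ("v", "w"): {"follows", "replies"},
}
sim = jaccard(edge_labels[("u", "v")], edge_labels[("v", "w")])
print(sim)  # 1/3: one shared label out of three distinct labels
```

In the paper's formulation, such pairwise edge similarities enter the objective alongside density, and the Lagrangian multiplier controls how heavily they are weighted relative to density.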
Sparse Learning over Infinite Subgraph Features
We present a supervised-learning algorithm for graph data (a set of graphs)
for arbitrary twice-differentiable loss functions and sparse linear models over
all possible subgraph features. To date, it has been shown that under all
possible subgraph features, several types of sparse learning, such as Adaboost,
LPBoost, LARS/LASSO, and sparse PLS regression, can be performed. Particular
emphasis is placed on simultaneous learning of relevant features from an
infinite set of candidates. We first generalize techniques used in all these
preceding studies to derive a unifying bounding technique for arbitrary
separable functions. We then carefully use this bounding technique to make block
coordinate gradient descent feasible over infinite subgraph features, resulting
in a fast converging algorithm that can solve a wider class of sparse learning
problems over graph data. We also empirically study the differences from the
existing approaches in convergence property, selected subgraph features, and
search-space sizes. We further discuss several unnoticed issues in sparse
learning over all possible subgraph features.
Comment: 42 pages, 24 figures, 4 tables
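To illustrate the kind of sparse coordinate update involved, the sketch below runs plain LASSO coordinate descent over an explicit, finite feature matrix. The paper's contribution is doing such updates over the implicit, infinite space of subgraph features, where the bounding technique finds the coordinate to update without enumerating all features; this sketch enumerates them.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator used in L1-regularized coordinate descent."""
    return (abs(z) - t) * (1.0 if z > 0 else -1.0) if abs(z) > t else 0.0

def lasso_cd(X, y, lam, n_passes=50):
    """Coordinate descent for (1/2)||y - Xw||^2 + lam * ||w||_1 over an
    explicit feature matrix X (list of rows). Illustrative sketch only:
    the paper drives analogous updates over infinite subgraph features."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_passes):
        for j in range(d):
            # Residual with feature j's contribution excluded.
            r = [y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return w

w = lasso_cd([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], lam=0.1)
print(w)  # [0.9, 0.0]: the first coordinate is shrunk, the second zeroed
```

Replacing the explicit inner loop over j with a branch-and-bound search over subgraph patterns, guided by a gain bound, is what makes the infinite feature space tractable.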