Improving information centrality of a node in complex networks by adding edges
The problem of increasing the centrality of a network node arises in many
practical applications. In this paper, we study the optimization problem of
maximizing the information centrality I_v of a given node v in a network
with n nodes and m edges, by creating k new edges incident to v. Since I_v
is the reciprocal of the sum of resistance distances R_v
between v and all other nodes, we alternatively consider the problem of minimizing
R_v by adding k new edges linked to v. We show that the
objective function is monotone and supermodular. We provide a simple greedy
algorithm with a (1 − 1/e) approximation factor and O(n^3)
running time. To speed up the computation, we also present an
algorithm to compute a (1 + ε)-approximation of the
resistance distances after iteratively adding edges, the
running time of which is Õ(mk/ε^2) for any
ε > 0, where the Õ(·) notation suppresses the poly(log n) factors. We experimentally demonstrate the effectiveness and
efficiency of our proposed algorithms.
Comment: 7 pages, 2 figures, ijcai-201
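The greedy scheme described above can be sketched directly, if inefficiently: recompute the resistance-distance sum from the Laplacian pseudoinverse after each trial edge and keep the best candidate. The helper names below are our own, and the exact O(n^3) recomputation stands in for the paper's faster approximation algorithm.

```python
import numpy as np

def add_edge(adj, a, b):
    """Return a copy of the adjacency matrix with edge (a, b) added."""
    out = adj.copy()
    out[a, b] = out[b, a] = 1.0
    return out

def resistance_sum(adj, v):
    """R_v: sum of resistance distances between v and all other nodes."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)                      # Laplacian pseudoinverse
    n = adj.shape[0]
    return sum(Lp[v, v] + Lp[u, u] - 2 * Lp[v, u] for u in range(n) if u != v)

def greedy_add_edges(adj, v, k):
    """Greedily create k new edges incident to v so as to minimize R_v."""
    adj = adj.astype(float).copy()
    for _ in range(k):
        candidates = [u for u in range(adj.shape[0])
                      if u != v and adj[v, u] == 0]
        if not candidates:
            break
        best = min(candidates,
                   key=lambda u: resistance_sum(add_edge(adj, v, u), v))
        adj = add_edge(adj, v, best)
    return adj

# Path 0-1-2-3-4: one well-chosen edge sharply lowers R_0 (and hence
# raises the information centrality 1/R_0 of node 0).
P = np.zeros((5, 5))
for i in range(4):
    P[i, i + 1] = P[i + 1, i] = 1.0
before = resistance_sum(P, 0)                   # equals 10 on this tree
after = resistance_sum(greedy_add_edges(P, 0, 1), 0)
```

On a tree the resistance distance coincides with the shortest-path distance, which is why `before` is exactly 1 + 2 + 3 + 4 = 10 here.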
Kirchhoff Index As a Measure of Edge Centrality in Weighted Networks: Nearly Linear Time Algorithms
Most previous work on centrality focuses on metrics of vertex importance
and methods for identifying powerful vertices, while related work for edges is
much scarcer, especially for weighted networks, due to the computational
challenge. In this paper, we propose to use the well-known Kirchhoff index as
the measure of edge centrality in weighted networks, called θ-Kirchhoff
edge centrality. The Kirchhoff index of a network is defined as the sum of
effective resistances over all vertex pairs. The centrality of an edge e is
reflected in the increase of the Kirchhoff index of the network when the edge e
is partially deactivated, characterized by a parameter θ. We define two
equivalent measures for θ-Kirchhoff edge centrality. Both are global
metrics and have better discriminating power than commonly used measures
based on local or partial structural information of networks, e.g. edge
betweenness and spanning edge centrality.
Despite the strong advantages of the Kirchhoff index as a centrality measure and
its wide applications, computing the exact value of Kirchhoff edge centrality
for each edge in a graph is computationally demanding. To solve this problem,
for each of the θ-Kirchhoff edge centrality metrics, we present an
efficient algorithm to compute its (1 ± ε)-approximation for all the m
edges in nearly linear time in m. The proposed θ-Kirchhoff edge
centrality is the first global metric of edge importance that can be provably
approximated in nearly linear time. Moreover, based on the
θ-Kirchhoff edge centrality, we present a θ-Kirchhoff vertex
centrality measure, as well as a fast algorithm that can compute a
(1 ± ε)-approximate Kirchhoff vertex centrality for all the vertices in
nearly linear time in m.
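For intuition, the Kirchhoff index is easy to compute on small graphs from the Laplacian spectrum, as K(G) = n · Σ 1/λ_i over the nonzero eigenvalues. The sketch below models "partial deactivation" as scaling an edge weight by (1 − θ); that modelling choice, and the function names, are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def kirchhoff_index(adj):
    """K(G) = n * sum of 1/lambda over the nonzero Laplacian eigenvalues."""
    L = np.diag(adj.sum(axis=1)) - adj
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]           # discard the zero eigenvalue
    return adj.shape[0] * np.sum(1.0 / nonzero)

def edge_centrality(adj, edge, theta=0.5):
    """Increase of K(G) when `edge` is partially deactivated.

    Deactivation is modelled here by scaling the edge weight by (1 - theta),
    an illustrative assumption.
    """
    a, b = edge
    weakened = adj.copy()
    weakened[a, b] *= 1 - theta
    weakened[b, a] *= 1 - theta
    return kirchhoff_index(weakened) - kirchhoff_index(adj)

# Two triangles joined by a bridge: weakening the bridge raises the
# Kirchhoff index far more than weakening an edge inside a triangle.
A = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1.0
bridge = edge_centrality(A, (2, 3))
side = edge_centrality(A, (0, 1))
```

The bridge is a cut edge, so weakening it lengthens every cross-cluster resistance and dominates the ranking, which matches the intent of a global edge-importance measure.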
Scalable Algorithms for the Analysis of Massive Networks
Network analysis aims to unveil non-trivial insights from networked data by studying relationship patterns between the entities of a network. Among these insights, a popular one is to quantify the importance of an entity with respect to the others according to some criteria. Another one is to find the most suitable matching partner for each participant of a network knowing the pairwise preferences of the participants to be matched with each other - known as Maximum Weighted Matching (MWM).
Since the notion of importance is tied to the application under consideration, numerous centrality measures have been introduced. Many of these measures, however, were conceived in a time when computing power was very limited and networks were much smaller compared to today's, and thus scalability to large datasets was not considered. Today, massive networks with millions of edges are ubiquitous, and a complete exact computation of traditional centrality measures is often too time-consuming. This issue is amplified if our objective is to find the group of k vertices that is the most central as a group. Scalable algorithms to identify highly central (groups of) vertices on massive graphs are thus of pivotal importance for large-scale network analysis.
In addition to their size, today's networks often evolve over time, which poses the challenge of efficiently updating results after a change occurs. Hence, efficient dynamic algorithms are essential for modern network analysis pipelines.
In this work, we propose scalable algorithms for identifying important vertices in a network, and for efficiently updating them in evolving networks. In real-world graphs with hundreds of millions of edges, most of our algorithms require seconds to a few minutes to perform these tasks. Further, we extend a state-of-the-art algorithm for MWM to dynamic graphs. Experiments show that our dynamic MWM algorithm handles updates in graphs with billions of edges in milliseconds.
Scaling up Group Closeness Maximization
Closeness is a widely-used centrality measure in social network analysis. For a node, it indicates the inverse average shortest-path distance to the other nodes of the network. While the identification of the k nodes with highest closeness has received significant attention, many applications are actually interested in finding a group of nodes that is central as a whole. For this problem, a greedy algorithm with approximation ratio (1−1/e) has only recently been proposed [Chen et al., ADC 2016]. Since this algorithm's running time is still expensive for large networks, a heuristic without approximation guarantee was also proposed in the same paper.
In the present paper we develop new techniques to speed up the greedy algorithm without losing its theoretical guarantee. Compared to a straightforward implementation, our approach is orders of magnitude faster and, compared to the heuristic proposed by Chen et al., we always find a solution with better quality in a comparable running time in our experiments.
Our method Greedy++ allows us to approximate the group with maximum closeness on networks with up to hundreds of millions of edges in minutes or at most a few hours. To have the same theoretical guarantee, the greedy approach by [Chen et al., ADC 2016] would take several days already on networks with hundreds of thousands of edges.
In a comparison with the optimum, our experiments show that the solution found by Greedy++ is actually much better than the theoretical guarantee. Over all tested networks, the empirical approximation ratio is never lower than 0.97.
Finally, we study for the first time the correlation between the top-k nodes with highest closeness and an approximation of the most central group in large complex networks, and show that the overlap between the two is relatively small.
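The plain greedy baseline that Greedy++ accelerates can be sketched as follows: grow the group S by the vertex that most reduces Σ_u d(u, S), where d(u, S) is the hop distance from u to the nearest group member. None of Greedy++'s pruning techniques appear here; the all-pairs BFS makes this a small-graph illustration only, and the function names are our own.

```python
from collections import deque

def bfs_distances(graph, src):
    """Hop distances from src in an unweighted, connected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def greedy_group(graph, k):
    """Greedily grow a group S of size k minimizing sum_u d(u, S)."""
    nodes = list(graph)
    dist = {v: bfs_distances(graph, v) for v in nodes}  # all-pairs BFS
    cap = len(nodes)               # finite stand-in for d(u, empty set)
    d_to_group = {u: cap for u in nodes}
    group = []
    for _ in range(k):
        def gain(c):
            return sum(d_to_group[u] - min(d_to_group[u], dist[c][u])
                       for u in nodes)
        c = max((v for v in nodes if v not in group), key=gain)
        group.append(c)
        for u in nodes:
            d_to_group[u] = min(d_to_group[u], dist[c][u])
    return group

# Two star centres joined by an edge: the best group of size 2 is the
# pair of centres, one serving each star.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0],
         4: [0, 5, 6, 7], 5: [4], 6: [4], 7: [4]}
```

Because the objective is monotone and submodular in S, this greedy carries the (1 − 1/e) guarantee cited above; the engineering challenge the paper addresses is avoiding the all-pairs distance computation.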
A discrete graph Laplacian for signal processing
In this thesis we exploit diffusion processes on graphs to address two fundamental problems of image processing: denoising and segmentation. We treat these two low-level vision problems at the pixel level under a unified framework: a graph embedding. Using this framework opens up the possibility of exploiting recently introduced algorithms from the semi-supervised machine learning literature.
We contribute two novel edge-preserving smoothing algorithms to the literature. Furthermore, we apply these edge-preserving smoothing algorithms to some computational photography tasks. Many recent computational photography tasks require the decomposition of an image into a smooth base layer containing large-scale intensity variations and a residual layer capturing fine details. Edge-preserving smoothing is the main computational mechanism in producing these multi-scale image representations. We, in effect, introduce a new approach to edge-preserving multi-scale image decompositions. Whereas prior approaches such as the bilateral filter and weighted-least-squares methods require multiple parameters to tune the response of the filters, our method requires only one. This parameter can be interpreted as a scale parameter. We demonstrate the utility of our approach by applying the method to computational photography tasks that utilise multi-scale image decompositions.
With minimal modification to these edge-preserving smoothing algorithms, we show that we can extend them to perform interactive image segmentation. As a result, the operations of segmentation and denoising are conducted under a unified framework. Moreover, we discuss how our method is related to region-based active contours. We benchmark our proposed interactive segmentation algorithms against those based upon energy minimisation, specifically graph-cut methods, and demonstrate that we achieve competitive performance.
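As a rough illustration of diffusion-based, edge-preserving smoothing (a generic sketch in one dimension, not the thesis's exact operator): build a pixel graph whose edge weights fall off with intensity difference, so diffusion is slow across strong edges, then run a few explicit steps of the graph heat equation x ← x − τ L x. The parameters tau, sigma, and steps below are illustrative choices.

```python
import numpy as np

def diffuse_1d(signal, steps=50, tau=0.2, sigma=0.3):
    """Edge-preserving graph diffusion on a 1D signal (path graph)."""
    x = signal.astype(float)
    n = len(x)
    for _ in range(steps):
        # Edge weights between neighbouring pixels: large where the signal
        # is flat, near zero across strong intensity edges.
        diff = x[1:] - x[:-1]
        w = np.exp(-(diff ** 2) / sigma ** 2)
        # Weighted graph Laplacian applied to x: (Lx)_i = sum_j w_ij (x_i - x_j).
        lap = np.zeros(n)
        lap[:-1] -= w * diff
        lap[1:] += w * diff
        # One explicit Euler step of the heat equation.
        x = x - tau * lap
    return x

# Noisy step edge: diffusion flattens the noise but keeps the step,
# because the weight across the step is essentially zero.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.05 * rng.standard_normal(100)
smooth = diffuse_1d(noisy)
```

The pair (tau, steps) acts like the single scale parameter described above: more diffusion time means smoothing at a coarser scale, while the intensity-dependent weights keep the step edge intact.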
Graph Sparsification, Spectral Sketches, and Faster Resistance Computation, via Short Cycle Decompositions
We develop a framework for graph sparsification and sketching, based on a new
tool, short cycle decomposition -- a decomposition of an unweighted graph into
an edge-disjoint collection of short cycles, plus a few extra edges. A simple
observation gives that every graph G on n vertices with m edges can be
decomposed in O(mn) time into cycles of length at most 2 log n, and at most
2n extra edges. We give an m^(1+o(1)) time algorithm for constructing a
short cycle decomposition, with cycles of length n^(o(1)) and n^(1+o(1))
extra edges. These decompositions enable us to make progress on several open
questions:
* We give an algorithm to find (1 + ε)-approximations to the effective
resistances of all edges in time m^(1+o(1)) ε^(-1.5), improving over
the previous best of Õ(min{m ε^(-2), n^2 ε^(-1)}).
This gives an algorithm to approximate the determinant of a Laplacian up to
(1 ± δ) in m^(1+o(1)) + n^(15/8+o(1)) δ^(-7/4) time.
* We show the existence of, and give efficient algorithms for constructing,
graphical spectral sketches -- a distribution over sparse graphs H such that for a fixed
vector x, we have x^T L_H x = (1 ± ε) x^T L_G x and
x^T L_H^+ x = (1 ± ε) x^T L_G^+ x w.h.p. This implies the existence of
resistance sparsifiers with about n/ε edges that preserve the
effective resistances between every pair of vertices up to (1 ± ε).
* By combining short cycle decompositions with known tools in graph
sparsification, we show the existence of nearly-linear sized degree-preserving
spectral sparsifiers, as well as significantly sparser approximations of
directed graphs. The latter is critical to recent breakthroughs on faster
algorithms for solving linear systems in directed Laplacians.
Improved algorithms for constructing short cycle decompositions will lead to
improvements for each of the above results.
Comment: 80 pages
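For reference, the quantity being approximated in the first bullet, the effective resistance of an edge, can be computed exactly from the Laplacian pseudoinverse as R(u, v) = L⁺[u,u] + L⁺[v,v] − 2 L⁺[u,v]. The dense O(n^3) baseline below is what the almost-linear-time algorithm above improves on; it is a sanity-check sketch, not the paper's method.

```python
import numpy as np

def all_edge_resistances(adj):
    """Exact effective resistances of all edges via the pseudoinverse."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)              # dense O(n^3) baseline
    n = adj.shape[0]
    res = {}
    for u in range(n):
        for v in range(u + 1, n):
            if adj[u, v]:
                res[(u, v)] = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    return res

# Sanity checks on a triangle: each edge sees 1 ohm in parallel with a
# 2-ohm path, so R = 2/3, and by Foster's theorem the edge resistances
# of any connected graph sum to n - 1.
T = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
R = all_edge_resistances(T)
```

Foster's theorem (the edge resistances summing to n − 1) is a handy correctness check for any approximate all-edges resistance routine like the one the paper constructs.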