Fast approximation of centrality and distances in hyperbolic graphs
We show that the eccentricities (and thus the centrality indices) of all
vertices of a $\delta$-hyperbolic graph $G=(V,E)$ can be computed in linear
time with an additive one-sided error of at most $c\delta$, i.e., after a
linear time preprocessing, for every vertex $v$ of $G$ one can compute in
$O(1)$ time an estimate $\hat{e}(v)$ of its eccentricity $ecc_G(v)$ such that
$ecc_G(v) \leq \hat{e}(v) \leq ecc_G(v) + c\delta$ for a small constant $c$. We
prove that every $\delta$-hyperbolic graph $G$ has a shortest path tree $T$,
constructible in linear time, such that for every vertex $v$ of $G$,
$ecc_G(v) \leq ecc_T(v) \leq ecc_G(v) + c\delta$. These results are based on an
interesting monotonicity property of the eccentricity function of hyperbolic
graphs: the closer a vertex is to the center of $G$, the smaller its
eccentricity is. We also show that the distance matrix of $G$ with an additive
one-sided error of at most $c'\delta$ can be computed in $O(|V|^2 \log^2 |V|)$
time, where $c'$ is a small constant. Recent empirical studies show that
many real-world graphs (including Internet application networks, web networks,
collaboration networks, social networks, biological networks, and others) have
small hyperbolicity. So, we analyze the performance of our algorithms for
approximating centrality and distance matrix on a number of real-world
networks. Our experimental results show that the obtained estimates are even
better than the theoretical bounds.
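To make the flavor of this approach concrete, here is a minimal Python sketch (an illustration under stated assumptions, not the paper's exact construction): a double BFS sweep picks a vertex c near the middle of an approximately diametral path; in a δ-hyperbolic graph such a vertex lies close to the center, and the single BFS from c yields one-sided eccentricity estimates ecc(v) ≤ ecc(c) + dist(c, v) for all vertices. The graph representation and helper names are illustrative.

```python
from collections import deque

def bfs_dist(adj, src):
    """BFS distances from src in an unweighted graph given as {vertex: neighbours}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def eccentricity_upper_estimates(adj, start):
    """Illustrative sketch (assumes a connected graph), not the paper's construction.

    A double BFS sweep finds a vertex c near the middle of an approximately
    diametral path; in a delta-hyperbolic graph c is close to the center, and
    ecc(c) + dist(c, v) is an upper bound on ecc(v) that is tight up to O(delta).
    """
    d_start = bfs_dist(adj, start)
    x = max(d_start, key=d_start.get)   # sweep 1: farthest vertex from start
    d_x = bfs_dist(adj, x)
    y = max(d_x, key=d_x.get)           # sweep 2: farthest vertex from x
    d_y = bfs_dist(adj, y)
    # Vertices on some shortest (x, y)-path; pick the most "central" one.
    on_path = [v for v in adj if d_x[v] + d_y[v] == d_x[y]]
    c = min(on_path, key=lambda v: abs(d_x[v] - d_y[v]))
    d_c = bfs_dist(adj, c)
    ecc_c = max(d_c.values())
    # One-sided estimates: ecc(v) <= ecc_c + d_c[v] by the triangle inequality.
    return {v: ecc_c + d_c[v] for v in adj}
```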
Approximation Algorithms for Min-Distance Problems in DAGs
Graph parameters such as the diameter, radius, and vertex eccentricities are not defined in a useful way in Directed Acyclic Graphs (DAGs) using the standard measure of distance, since for any two nodes, there is no path between them in at least one of the two directions. So it is natural to take the distance between two nodes to be the length of the shortest path in the direction in which such a path exists, motivating the definition of the min-distance. The min-distance between two nodes u and v is the minimum of the shortest path distances from u to v and from v to u.
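A minimal Python sketch of the definition, assuming an unweighted digraph given as a dictionary {node: list of out-neighbours}; the brute-force min-diameter below runs one BFS per node, i.e. the O(mn) baseline that the approximation algorithms aim to beat. All names here are illustrative.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in a digraph given as {node: list of out-neighbours}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def min_distance(adj, u, v):
    """min-dist(u, v) = min(d(u, v), d(v, u)); infinite only if neither path exists."""
    d_uv = bfs_dist(adj, u).get(v, float("inf"))
    d_vu = bfs_dist(adj, v).get(u, float("inf"))
    return min(d_uv, d_vu)

def min_diameter(adj):
    """Exact min-diameter by brute force: one BFS per node, O(mn) overall."""
    nodes = list(adj)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    return max(
        min(dist[u].get(v, float("inf")), dist[v].get(u, float("inf")))
        for i, u in enumerate(nodes)
        for v in nodes[i + 1:]
    )
```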
As with the standard distance problems, the Strong Exponential Time Hypothesis [Impagliazzo-Paturi-Zane 2001, Calabro-Impagliazzo-Paturi 2009] leaves little hope for computing min-distance problems faster than computing All Pairs Shortest Paths, which can be solved in Õ(mn) time. So it is natural to resort to approximation algorithms running in Õ(mn^{1-ε}) time for some positive ε. Abboud, Vassilevska W., and Wang [SODA 2016] first studied min-distance problems, achieving constant factor approximation algorithms on DAGs, and Dalirrooyfard et al. [ICALP 2019] gave the first constant factor approximation algorithms on general graphs for min-diameter, min-radius and min-eccentricities. Abboud et al. obtained a 3-approximation algorithm for min-radius on DAGs which works in Õ(m√n) time, and showed that any (2-δ)-approximation requires n^{2-o(1)} time for any δ > 0, under the Hitting Set Conjecture. We close the gap, obtaining a 2-approximation algorithm which runs in Õ(m√n) time. As the lower bound of Abboud et al. only works for sparse DAGs, we further show that our algorithm is conditionally tight for dense DAGs using a reduction from Boolean matrix multiplication. Moreover, Abboud et al. obtained a linear time 2-approximation algorithm for min-diameter along with a lower bound stating that any (3/2-δ)-approximation algorithm for sparse DAGs requires n^{2-o(1)} time under SETH. We close this gap for dense DAGs by obtaining a 3/2-approximation algorithm which works in O(n^{2.350}) time and showing that the approximation factor is unlikely to be improved within O(n^{ω - o(1)}) time under the high dimensional Orthogonal Vectors Conjecture, where ω is the matrix multiplication exponent.
Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)
Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, utilizing polynomial time reductions to base their hardness on the famous conjecture P != NP. For problems that already have polynomial time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem with a running time O(n^k) that has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.
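As a concrete illustration (the problem is not named in this abstract, but it is one of the standard key problems in this line of work): the Orthogonal Vectors problem asks, given n Boolean vectors of small dimension d, whether some pair is orthogonal. The quadratic brute force below is essentially the best known; under the Strong Exponential Time Hypothesis no n^{2-ε} poly(d) algorithm exists, and fine-grained reductions transfer this conjectured barrier to many problems in P.

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Brute-force Orthogonal Vectors: O(n^2 * d) time over 0/1 vectors.

    Under SETH, no n^(2 - eps) * poly(d) algorithm exists; fine-grained
    reductions propagate this conjectured barrier to other problems in P.
    """
    return any(
        all(a * b == 0 for a, b in zip(u, v))
        for u, v in combinations(vectors, 2)
    )

# The first and third vectors are orthogonal, so this prints True.
print(has_orthogonal_pair([(1, 0, 1), (1, 1, 0), (0, 1, 0)]))
```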
Hierarchical Time-Dependent Oracles
We study networks obeying \emph{time-dependent} min-cost path metrics, and
present novel oracles for them which \emph{provably} achieve two unique
features: (i) \emph{subquadratic} preprocessing time and space,
\emph{independent} of the metric's amount of disconcavity; (ii)
\emph{sublinear} query time, in either the network size or the actual
Dijkstra-Rank of the query at hand.
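The basic query primitive in this setting is a time-dependent variant of Dijkstra's algorithm, where each arc's travel time is a function of the departure time; under the usual FIFO assumption the label-setting argument still goes through. A minimal Python sketch, with an illustrative graph representation that is not the paper's interface:

```python
import heapq

def td_dijkstra(adj, source, t0):
    """Earliest-arrival times from `source` when departing at time t0.

    `adj[u]` is a list of (v, cost_fn) pairs, where cost_fn(t) gives the travel
    time of arc (u, v) when leaving u at time t.  Under the FIFO assumption
    (t + cost_fn(t) is non-decreasing in t), label-setting Dijkstra is correct.
    """
    arrival = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > arrival.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost_fn in adj.get(u, ()):
            t_arr = t + cost_fn(t)
            if t_arr < arrival.get(v, float("inf")):
                arrival[v] = t_arr
                heapq.heappush(heap, (t_arr, v))
    return arrival
```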
- …