236 research outputs found
Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees
The average distance from a node to all other nodes in a graph, or from a
query point in a metric space to a set of points, is a fundamental quantity in
data analysis. The inverse of the average distance, known as the (classic)
closeness centrality of a node, is a popular importance measure in the study of
social networks. We develop novel structural insights on the sparsifiability of
the distance relation via weighted sampling. Based on that, we present highly
practical algorithms with strong statistical guarantees for fundamental
problems. We show that the average distance (and hence the centrality) of every node in a graph can be estimated using single-source distance computations. For a set of points in a metric space, we show that after a preprocessing step consisting of distance computations, we can compute a small weighted sample S such that the average distance from any query point q to the point set can be estimated from the distances from q to S. Finally, we show that for a set of points in a metric space, the average pairwise distance can be estimated from a weighted sample of pairs of points, itself computed using distance computations. Our estimates are unbiased with normalized root mean square error (NRMSE) bounded by a prescribed error parameter. Increasing the sample size by a modest factor ensures that the probability that the relative error exceeds this parameter is polynomially small.
Comment: 21 pages, will appear in the Proceedings of RANDOM 201
Consequences of APSP, triangle detection, and 3SUM hardness for separation between determinism and non-determinism
We present implications of known conjectures such as APSP, 3SUM, and ETH, in the form of a negated containment of a linear-time class equipped with a non-deterministic logarithmic-bit oracle in a corresponding deterministic bounded-time class. The implications differ between conjectures and exhibit, in particular, a dependency on the input range parameters.
Comment: The section on range reduction in the previous version contained a flaw in a proof and has therefore been removed
On the Fine-Grained Complexity of Parity Problems
We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e. if the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack.
A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
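To make the "parity version" concrete, here is a brute-force Min-Plus Convolution together with its parity variant: the hardness results say that, conditionally, deciding the parities is as hard as computing the exact values. The quadratic-time code is a definitional sketch only, and the function names are illustrative.

```python
def min_plus_convolution(a, b):
    # c[k] = min over i + j = k of a[i] + b[j]; brute force in O(n * m) time.
    n, m = len(a), len(b)
    return [
        min(a[i] + b[k - i] for i in range(max(0, k - m + 1), min(n, k + 1)))
        for k in range(n + m - 1)
    ]

def min_plus_parity(a, b):
    # The parity variant only reveals whether each output entry is even or odd.
    return [c % 2 for c in min_plus_convolution(a, b)]

# Example: a = [0, 2, 5], b = [1, 3]
# min_plus_convolution gives [1, 3, 5, 8]; its parity is [1, 1, 1, 0].
```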
A Combinatorial Algorithm for All-Pairs Shortest Paths in Directed Vertex-Weighted Graphs with Applications to Disc Graphs
We consider the problem of computing all-pairs shortest paths in a directed
graph with real weights assigned to vertices.
For a 0-1 matrix A, let G(A) be the complete weighted graph on the rows of A, where the weight of an edge between two rows is equal to their Hamming distance. Let MWT(A) be the weight of a minimum weight spanning tree of G(A).
We show that the all-pairs shortest path problem for a directed graph on n vertices with nonnegative real weights and adjacency matrix A can be solved by a combinatorial randomized algorithm whose running time depends on MWT(A).
As a corollary, we conclude that the transitive closure of a directed graph can be computed by a combinatorial randomized algorithm within the same time bound.
We also conclude that the all-pairs shortest path problem for uniform disk graphs, with nonnegative real vertex weights, induced by point sets of bounded density within a unit square can be solved within this time bound.
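The quantity driving the stated running time, the weight of a minimum spanning tree under Hamming distances on the matrix rows, can be computed directly with Prim's algorithm. This naive version is cubic in the number of rows and exists only to make the definition concrete; the function name is an illustrative choice.

```python
def hamming_mst_weight(rows):
    # Weight of a minimum spanning tree of the complete graph whose vertices
    # are the rows of a 0-1 matrix and whose edge weights are the pairwise
    # Hamming distances; naive Prim's algorithm, cubic in the number of rows.
    n = len(rows)
    if n == 0:
        return 0
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    in_tree = [False] * n
    best = [float('inf')] * n   # cheapest known connection to the tree
    best[0] = 0
    total = 0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], hamming(rows[u], rows[v]))
    return total
```

For example, the rows (0,0), (0,1), (1,1) are connected by a spanning path of two Hamming-distance-1 edges, so the MST weight is 2.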
Tight Hardness Results for Maximum Weight Rectangles
Given n weighted points (positive or negative) in d dimensions, what is the axis-aligned box which maximizes the total weight of the points it contains?
The best known algorithm for this problem is based on a reduction to a related problem, the Weighted Depth problem [T. M. Chan, FOCS'13]. It was conjectured [Barbay et al., CCCG'13] that this algorithm's runtime is tight up to subpolynomial factors. We answer this conjecture affirmatively by
providing a matching conditional lower bound. We also provide conditional lower
bounds for the special case when points are arranged in a grid (a well studied
problem known as Maximum Subarray problem) as well as for other related
problems.
All our lower bounds are based on assumptions that the best known algorithms for the All-Pairs Shortest Paths problem (APSP) and for the Max-Weight k-Clique problem in edge-weighted graphs are essentially optimal.
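The grid special case (Maximum Subarray) has a classical algorithm: fix a pair of rows, collapse the columns between them, and run Kadane's one-dimensional scan. The sketch below is this textbook baseline, cubic in the grid dimensions, which is the kind of running time the conditional lower bounds address.

```python
def max_subarray_2d(grid):
    # Maximum-weight axis-aligned subrectangle of a grid of numbers.
    # For each pair of rows (top, bottom), accumulate column sums and run
    # Kadane's scan over them; overall O(n^2 * m) for an n x m grid.
    n, m = len(grid), len(grid[0])
    best = grid[0][0]
    for top in range(n):
        col = [0] * m
        for bottom in range(top, n):
            for j in range(m):
                col[j] += grid[bottom][j]
            cur = 0
            for x in col:
                cur = max(x, cur + x)   # best suffix sum ending here
                best = max(best, cur)
    return best
```

The scan treats the collapsed column sums exactly like the one-dimensional maximum-subarray problem, so the rectangle is recovered implicitly from the best row pair and column interval.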
Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths
Computing all-pairs shortest paths is a fundamental and much-studied problem with many applications. Unfortunately, despite intense study, there are still no significantly faster algorithms for it than the cubic-time algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist for the vertex-weighted version if fast matrix multiplication may be used. Yuster (SODA 2009) gave such an algorithm, but no combinatorial, truly subcubic algorithm is known.
Motivated by the recent framework of efficient parameterized algorithms (or "FPT in P"), we investigate the influence of the graph parameters clique-width (cw) and modular-width (mw) on the running times of algorithms for solving All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized algorithms on non-negative vertex-weighted graphs whose running times depend polynomially on cw, resp. mw. If fast matrix multiplication is allowed, then the latter can be improved using the algorithm of Yuster as a black box.
The algorithm relative to modular-width is adaptive, meaning that its running time matches the best unparameterized algorithm when the parameter value is as large as the number of vertices, and it outperforms the unparameterized algorithms already for moderately smaller parameter values.
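For reference, the unparameterized baseline the abstract compares against is the Floyd-Warshall dynamic program. Below is a minimal sketch of the classical cubic-time recurrence on an edge-weight matrix; the vertex-weighted setting of the paper is analogous, differing only in how weights enter the recurrence.

```python
def floyd_warshall(dist):
    # Classic O(n^3) all-pairs shortest paths.  `dist` is an n x n matrix of
    # direct edge weights, with float('inf') for missing edges and 0 on the
    # diagonal.  Returns the matrix of shortest-path distances.
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The triple loop relaxes every pair (i, j) through every intermediate vertex k, which is exactly the cubic barrier that the clique-width and modular-width parameterizations aim to beat on structured graphs.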
On the tractability of some natural packing, covering and partitioning problems
In this paper we fix 7 types of undirected graphs: paths, paths with prescribed endvertices, circuits, forests, spanning trees, (not necessarily spanning) trees, and cuts. Given an undirected graph G and two "object types" chosen from the alternatives above, we consider the following questions. Packing problem: can we find one object of each chosen type in the edge set of G, so that they are edge-disjoint? Partitioning problem: can we partition the edge set of G into an object of the first type and one of the second type? Covering problem: can we cover the edge set of G with an object of the first type and an object of the second type? This framework includes 44 natural graph-theoretic questions. Some of these problems were well known before, for example covering the edge set of a graph with two spanning trees, or finding two edge-disjoint paths between prescribed endvertices. However, many others were not, for example: can we find a path between prescribed endvertices and a spanning tree that are edge-disjoint? Most of these previously unknown problems turned out to be NP-complete, many of them even in planar graphs. This paper determines the status of these 44 problems. For the NP-complete problems we also investigate the planar version, while for the polynomial problems we consider the matroidal generalization (wherever this makes sense).
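For tiny instances, one of these questions, partitioning the edge set into two spanning trees, can be decided by brute force, which makes the problem statement concrete. The exponential enumeration below is obviously not the paper's method, and all function names are illustrative.

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    # Vertices are labeled 0..n-1.  A spanning tree has exactly n-1 edges
    # and no cycle (checked with union-find), which forces connectivity.
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # cycle detected
        parent[ru] = rv
    return True

def partitions_into_two_spanning_trees(n, edges):
    # Brute-force the partitioning problem for the pair (spanning tree,
    # spanning tree): try every way to pick the first tree's edges.
    if len(edges) != 2 * (n - 1):
        return False
    for first in combinations(range(len(edges)), n - 1):
        chosen = set(first)
        t1 = [edges[i] for i in chosen]
        t2 = [edges[i] for i in range(len(edges)) if i not in chosen]
        if is_spanning_tree(n, t1) and is_spanning_tree(n, t2):
            return True
    return False
```

For example, the complete graph K4 partitions into the path 0-1-2-3 and the spanning tree {0-2, 0-3, 1-3}, whereas the 4-cycle has too few edges to contain two spanning trees at all.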
- …