Bicriteria Network Design Problems
We study a general class of bicriteria network design problems. A generic
problem in this class is as follows: Given an undirected graph and two
minimization objectives (under different cost functions), with a budget
specified on the first, find a subgraph from a given subgraph-class that
minimizes the second objective subject to the budget on the first. We consider
three different criteria: the total edge cost, the diameter, and the maximum
degree of the network. Here, we present the first polynomial-time approximation
algorithms for a large class of bicriteria network design problems for the
above-mentioned criteria. The following general types of results are presented.
First, we develop a framework for bicriteria problems and their
approximations. Second, when the two criteria are the same (note that the cost
functions continue to be different), we present a "black box" parametric
search technique. This black box takes in as input an (approximation) algorithm
for the unicriterion situation and generates an approximation algorithm for the
bicriteria case with only a constant factor loss in the performance guarantee.
Third, when the two criteria are the diameter and the total edge cost, we use a
cluster-based approach to devise approximation algorithms; the solutions
output violate both criteria by a logarithmic factor. Finally, for the
class of treewidth-bounded graphs, we provide pseudopolynomial-time algorithms
for a number of bicriteria problems using dynamic programming. We show how
these pseudopolynomial-time algorithms can be converted to fully
polynomial-time approximation schemes using a scaling technique.
Comment: 24 pages, 1 figure
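To make the parametric-search idea concrete, here is a minimal sketch, assuming a unicriterion approximation oracle `unicriterion_approx(edges, cost)` is available; the function names and the simple feasibility-driven binary search are illustrative stand-ins, not the paper's exact procedure or its performance guarantee.

```python
def bicriteria_search(edges, c1, c2, budget, unicriterion_approx, tol=1e-3):
    """Sketch: binary-search a multiplier lam and hand the blended cost
    c2[e] + lam * c1[e] to a unicriterion (approximation) algorithm.
    Larger lam penalizes the budgeted criterion c1 more heavily."""
    def solve(lam):
        return unicriterion_approx(
            edges, {e: c2[e] + lam * c1[e] for e in edges})

    def c1_cost(sol):
        return sum(c1[e] for e in sol)

    lo, hi = 0.0, 1.0
    best = solve(hi)
    for _ in range(60):          # doubling search for a feasible multiplier
        if c1_cost(best) <= budget:
            break
        hi *= 2.0
        best = solve(hi)
    else:
        raise ValueError("no solution within budget found by the oracle")

    while hi - lo > tol:         # shrink the bracket around the threshold
        lam = (lo + hi) / 2.0
        sol = solve(lam)
        if c1_cost(sol) <= budget:
            best, hi = sol, lam  # feasible: try penalizing c1 less
        else:
            lo = lam             # infeasible: penalize c1 more
    return best
```

The point of the blended cost is that any unicriterion routine can be reused unchanged; the paper's analysis is what turns this reuse into a bicriteria guarantee with only constant-factor loss.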
Using Graph Properties to Speed-up GPU-based Graph Traversal: A Model-driven Approach
While it is well known that the performance of graph algorithms depends
heavily on the input data, there has been surprisingly little research into
quantifying and predicting the impact of graph structure on
performance. Parallel graph algorithms, running on many-core systems such as
GPUs, are no exception: most research has focused on how to efficiently
implement and tune different graph operations on a specific GPU. However, the
performance impact of the input graph has only been taken into account
indirectly as a result of the graphs used to benchmark the system.
In this work, we present a case study investigating how to use the properties
of the input graph to improve the performance of the breadth-first search (BFS)
graph traversal. To do so, we first study the performance variation of 15
different BFS implementations across 248 graphs. Using this performance data,
we show that significant speed-up can be achieved by combining the best
implementation for each level of the traversal. To make use of this
data-dependent optimization, we must correctly predict the relative performance
of algorithms per graph level, and enable dynamic switching to the optimal
algorithm for each level at runtime.
We use the collected performance data to train a binary decision tree, to
enable high-accuracy predictions and fast switching. We demonstrate empirically
that our decision tree is both fast enough to allow dynamic switching between
implementations, without noticeable overhead, and accurate enough in its
prediction to enable significant BFS speedup. We conclude that our model-driven
approach (1) enables BFS to outperform state-of-the-art GPU algorithms, and (2)
can be adapted for other BFS variants, other algorithms, or more specific
datasets.
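As a rough illustration of the model-driven approach, the sketch below trains a decision tree on per-level frontier statistics and consults it before expanding each level; the features, labels, and toy training rows are hypothetical stand-ins for the paper's measured performance data.

```python
from sklearn.tree import DecisionTreeClassifier

# One training row per (graph, level): [frontier/n, frontier out-edges/m, level]
X = [[0.001, 0.002, 0], [0.30, 0.45, 2], [0.002, 0.001, 5], [0.25, 0.40, 3]]
y = ["top_down", "bottom_up", "top_down", "bottom_up"]  # fastest variant seen
model = DecisionTreeClassifier(max_depth=4).fit(X, y)

def bfs_hybrid(graph, source):
    """Level-synchronous BFS that consults the tree before each level."""
    n = len(graph)
    m = sum(len(adj) for adj in graph.values()) or 1
    visited, frontier, level = {source}, [source], 0
    while frontier:
        feats = [[len(frontier) / n,
                  sum(len(graph[v]) for v in frontier) / m, level]]
        variant = model.predict(feats)[0]  # "top_down" or "bottom_up"
        # A real GPU implementation would dispatch to a different kernel per
        # variant; this toy expands the frontier the same way either way.
        nxt = []
        for v in frontier:
            for w in graph[v]:
                if w not in visited:
                    visited.add(w)
                    nxt.append(w)
        frontier, level = nxt, level + 1
    return visited
```

Because the prediction happens once per level rather than per vertex, its cost is negligible next to the traversal itself, which is what makes dynamic switching viable.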
Parallel Batch-Dynamic Graph Connectivity
In this paper, we study batch parallel algorithms for the dynamic
connectivity problem, a fundamental problem that has received considerable
attention in the sequential setting. The most well known sequential algorithm
for dynamic connectivity is the elegant level-set algorithm of Holm, de
Lichtenberg and Thorup (HDT), which achieves amortized time per
edge insertion or deletion, and time per query. We
design a parallel batch-dynamic connectivity algorithm that is work-efficient
with respect to the HDT algorithm for small batch sizes, and is asymptotically
faster when the average batch size is sufficiently large. Given a sequence of
batched updates, where $\Delta$ is the average batch size of all deletions, our
algorithm achieves $O(\log n \log(1 + n/\Delta))$ expected amortized work per
edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm
answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected
work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm
is the first parallel batch-dynamic algorithm for connectivity.
Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 2019
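The paper's parallel level-structure algorithm is involved; as a point of reference, here is a minimal sequential sketch of answering a batch of connectivity queries against a current edge set with union-find. It illustrates the query semantics only, not the paper's work and depth bounds.

```python
def batch_connectivity(n, edges, queries):
    """Answer a batch of connectivity queries over the current edge set."""
    parent = list(range(n))

    def find(x):                      # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:                # union all current edges
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    return [find(u) == find(v) for u, v in queries]

# e.g. batch_connectivity(4, [(0, 1), (2, 3)], [(0, 1), (1, 2)])
# -> [True, False]
```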
Parallel Graph Decompositions Using Random Shifts
We show an improved parallel algorithm for decomposing an undirected
unweighted graph into small diameter pieces with a small fraction of the edges
in between. These decompositions form critical subroutines in a number of graph
algorithms. Our algorithm builds upon the shifted shortest path approach
introduced in [Blelloch, Gupta, Koutis, Miller, Peng, Tangwongsan, SPAA 2011].
By combining various stages of the previous algorithm, we obtain a
significantly simpler algorithm with the same asymptotic guarantees as the best
sequential algorithm.
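A minimal sequential sketch of the shifted shortest path idea on an unweighted graph: draw an exponential shift $\delta_v$ for every vertex and assign each vertex $u$ to the center $v$ minimizing $\mathrm{dist}(u, v) - \delta_v$. The adjacency-dict representation and the choice of beta below are illustrative, and this omits the parallel machinery that is the paper's contribution.

```python
import heapq
import random

def random_shift_decompose(graph, beta=0.2, seed=0):
    """Sketch: cluster each vertex u with the center v minimizing
    dist(u, v) - delta_v, where delta_v ~ Exp(beta)."""
    rng = random.Random(seed)
    shift = {v: rng.expovariate(beta) for v in graph}
    # Multi-source search: every vertex starts as a center with key
    # -delta_v, so a vertex is claimed by the center minimizing
    # dist - delta (processed in increasing key order).
    heap = [(-shift[v], v, v) for v in graph]   # (key, center, vertex)
    heapq.heapify(heap)
    cluster = {}
    while heap:
        key, center, v = heapq.heappop(heap)
        if v in cluster:
            continue                            # already claimed earlier
        cluster[v] = center
        for w in graph[v]:                      # unit-weight edges
            if w not in cluster:
                heapq.heappush(heap, (key + 1, center, w))
    return cluster

# e.g. random_shift_decompose({0: [1], 1: [0, 2], 2: [1]}) maps each
# vertex to its cluster center; larger beta gives smaller-diameter
# clusters at the price of cutting more edges.
```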
Balancing Minimum Spanning and Shortest Path Trees
This paper gives a simple linear-time algorithm that, given a weighted
digraph, finds a spanning tree that simultaneously approximates a shortest-path
tree and a minimum spanning tree. The algorithm provides a continuous
trade-off: given the two trees and epsilon > 0, the algorithm returns a
spanning tree in which the distance between any vertex and the root of the
shortest-path tree is at most 1+epsilon times the shortest-path distance, and
yet the total weight of the tree is at most 1+2/epsilon times the weight of a
minimum spanning tree. This is the best trade-off possible. The paper also
describes a fast parallel implementation.
Comment: conference version: ACM-SIAM Symposium on Discrete Algorithms (1993)
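The claimed trade-off is easy to state as a concrete check. The sketch below verifies both bounds for a candidate tree, assuming precomputed inputs: d[v] is the shortest-path distance from the root, tree_dist[v] the root distance in the returned tree, and w_tree, w_mst the total weights. All names are hypothetical; this verifies the guarantees, it does not construct the tree.

```python
def check_tradeoff(d, tree_dist, w_tree, w_mst, eps):
    """Check: every root distance is within (1 + eps) of the shortest-path
    distance, and tree weight is within (1 + 2/eps) of the MST weight."""
    stretch_ok = all(tree_dist[v] <= (1 + eps) * d[v] for v in d if d[v] > 0)
    weight_ok = w_tree <= (1 + 2.0 / eps) * w_mst
    return stretch_ok and weight_ok
```

Sliding eps between small and large values moves the tree continuously from near-shortest-path (tight stretch, heavy weight) to near-MST (light weight, loose stretch).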
Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning
Resolution refinements called w-resolution trees with lemmas (WRTL) and with
input lemmas (WRTI) are introduced. Dag-like resolution is equivalent to both
WRTL and WRTI when there is no regularity condition. For regular proofs, an
exponential separation between regular dag-like resolution and both regular
WRTL and regular WRTI is given.
It is proved that DLL proof search algorithms that use clause learning based
on unit propagation can be polynomially simulated by regular WRTI. More
generally, non-greedy DLL algorithms with learning by unit propagation are
equivalent to regular WRTI. A general form of clause learning, called
DLL-Learn, is defined that is equivalent to regular WRTL.
A variable extension method is used to give simulations of resolution by
regular WRTI, using a simplified form of proof trace extensions. DLL-Learn and
non-greedy DLL algorithms with learning by unit propagation can use variable
extensions to simulate general resolution without doing restarts.
Finally, an exponential lower bound for WRTL where the lemmas are restricted
to short clauses is shown.
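Since clause learning by unit propagation is central to the simulation results, a minimal sketch of unit propagation on a CNF formula may help fix intuition. The representation (clauses as lists of nonzero integers, negation by sign) follows the usual DIMACS-style convention; the routine itself is a generic textbook loop, not the paper's proof-theoretic machinery.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly satisfy unit clauses until fixpoint or conflict.

    clauses: list of clauses, each a list of nonzero ints (negation = sign).
    assignment: dict mapping each literal currently set true to True.
    Returns (assignment, conflict_clause or None).
    """
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return assignment, clause     # all literals false: conflict
            if len(unassigned) == 1:          # unit clause: force the literal
                assignment[unassigned[0]] = True
                changed = True
    return assignment, None

# e.g. unit_propagate([[1], [-1, 2], [-2, 3]], {}) forces 1, then 2, then 3.
```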