Braess's Paradox in Wireless Networks: The Danger of Improved Technology
When comparing new wireless technologies, it is common to consider the effect
that they have on the capacity of the network (defined as the maximum number of
simultaneously satisfiable links). For example, it has been shown that giving
receivers the ability to do interference cancellation, or allowing transmitters
to use power control, never decreases the capacity and can in certain cases
increase it by Omega(log(Delta * P)), where Delta is the
ratio of the longest link length to the smallest transmitter-receiver distance
and P is the maximum transmission power. But there is no reason to
expect the optimal capacity to be realized in practice, particularly since
maximizing the capacity is known to be NP-hard. In reality, we would expect
links to behave as self-interested agents, and thus when introducing a new
technology it makes more sense to compare the values reached at game-theoretic
equilibria than the optimum values.
In this paper we initiate this line of work by comparing various notions of
equilibria (particularly Nash equilibria and no-regret behavior) when using a
supposedly "better" technology. We show a version of Braess's Paradox for all
of them: in certain networks, upgrading technology can actually make the
equilibria worse, despite an increase in the capacity. We construct
instances where this decrease is a constant factor for power control,
interference cancellation, and improvements in the SINR threshold (beta),
and is Omega(log Delta) when power control is combined with interference
cancellation. However, we show that these examples are basically tight: the
decrease is at most O(1) for power control, interference cancellation, and
improved beta, and is at most O(log Delta) when power control is
combined with interference cancellation.
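The capacity notion above can be made concrete with a small brute-force computation. The sketch below assumes the standard geometric SINR model with uniform transmit power, path-loss exponent alpha, ambient noise, and threshold beta; the function name and all parameter values are illustrative, not taken from the paper.

```python
import math
from itertools import combinations

def sinr_capacity(links, power, alpha=3.0, beta=1.0, noise=0.1):
    """Size of the largest set of links that are simultaneously satisfiable
    under the geometric SINR model: link i is satisfied if its received
    signal divided by (noise + interference from the other active senders)
    is at least beta. Brute force over subsets, so only for tiny instances
    (the general maximization problem is NP-hard, as the abstract notes)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def feasible(S):
        for i in S:
            sender_i, receiver_i = links[i]
            signal = power * dist(sender_i, receiver_i) ** -alpha
            interference = sum(power * dist(links[j][0], receiver_i) ** -alpha
                               for j in S if j != i)
            if signal / (noise + interference) < beta:
                return False
        return True

    # Try subset sizes from largest to smallest; the first feasible size wins.
    for size in range(len(links), 0, -1):
        if any(feasible(S) for S in combinations(range(len(links)), size)):
            return size
    return 0
```

Two well-separated links can both be satisfied, while two adjacent ones interfere and only one survives, which is exactly the kind of gap the capacity measures.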
The Densest k-Subhypergraph Problem
The Densest k-Subgraph (DkS) problem, and its corresponding minimization
problem Smallest p-Edge Subgraph (SpES), have come to play a central role
in approximation algorithms. This is due both to their practical importance,
and their usefulness as a tool for solving and establishing approximation
bounds for other problems. These two problems are not well understood, and it
is widely believed that they do not admit a subpolynomial approximation
ratio (although the best known hardness results do not rule this out).
In this paper we generalize both DkS and SpES from graphs to hypergraphs.
We consider the Densest k-Subhypergraph problem (given a hypergraph G,
find a subset S of k vertices so as to maximize the number of
hyperedges contained in S) and define the Minimum p-Union problem (given a
hypergraph, choose p of the hyperedges so as to minimize the number of
vertices in their union). We focus in particular on the case where all
hyperedges have size 3, as this is the simplest non-graph setting. For this
case we provide an O(n^{4(4-sqrt{3})/13 + epsilon})-approximation (for arbitrary
constant epsilon > 0) for Densest k-Subhypergraph and an O~(n^{2/5})-approximation for
Minimum p-Union. We also give an O(sqrt{m})-approximation for Minimum
p-Union in general hypergraphs. Finally, we examine the interesting special
case of interval hypergraphs (instances where the vertices are a subset of the
natural numbers and the hyperedges are intervals of the line) and prove that
both problems admit an exact polynomial time solution on these instances.
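For concreteness, the two hypergraph problems can be pinned down with a tiny exhaustive-search reference implementation. This is exponential time and exists only to state the definitions; the believed hardness of these problems is precisely why the paper develops approximation algorithms. The function names are mine, not the paper's.

```python
from itertools import combinations

def densest_k_subhypergraph(vertices, hyperedges, k):
    """Max number of hyperedges fully contained in some k-vertex subset."""
    return max(sum(1 for e in hyperedges if e <= set(S))
               for S in combinations(vertices, k))

def minimum_p_union(hyperedges, p):
    """Smallest possible union size over all choices of p hyperedges."""
    return min(len(set().union(*choice))
               for choice in combinations(hyperedges, p))
```

On 3-uniform hyperedges [{1,2,3}, {2,3,4}, {3,4,5}, {7,8,9}], choosing the two overlapping edges {1,2,3} and {2,3,4} gives a union of 4 vertices, while the densest 4-vertex subset contains 2 hyperedges; the duality between the two objectives is visible even at this scale.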
The Complexity of Planning Problems With Simple Causal Graphs
We present three new complexity results for classes of planning problems with
simple causal graphs. First, we describe a polynomial-time algorithm that uses
macros to generate plans for the class 3S of planning problems with binary
state variables and acyclic causal graphs. This implies that plan generation
may be tractable even when a planning problem has an exponentially long minimal
solution. We also prove that the problem of plan existence for planning
problems with multi-valued variables and chain causal graphs is NP-hard.
Finally, we show that plan existence for planning problems with binary state
variables and polytree causal graphs is NP-complete.
Small Cuts and Connectivity Certificates: A Fault Tolerant Approach
We revisit classical connectivity problems in the CONGEST model of distributed computing. By using techniques from fault tolerant network design, we show improved constructions, some of which are even "local" (i.e., with O~(1) rounds) for problems that are closely related to hard global problems (i.e., with a lower bound of Omega(Diam+sqrt{n}) rounds).
- Distributed Minimum Cut: Nanongkai and Su presented a randomized algorithm for computing a (1+epsilon)-approximation of the minimum cut using O~(D+sqrt{n}) rounds, where D is the diameter of the graph. For a sufficiently large minimum cut lambda=Omega(sqrt{n}), this is tight due to Das Sarma et al. [FOCS '11] and Ghaffari and Kuhn [DISC '13].
- Small Cuts: A special setting that remains open is where the graph connectivity lambda is small (i.e., constant). The only lower bound for this case is Omega(D), with a matching bound known only for lambda <= 2 due to Pritchard and Thurimella [TALG '11]. Recently, Daga, Henzinger, Nanongkai and Saranurak [STOC '19] raised the open problem of computing the minimum cut in poly(D) rounds for any lambda=O(1). In this paper, we resolve this problem by presenting a surprisingly simple algorithm that takes a completely different approach than the existing algorithms. Our algorithm also has the benefit that it computes all minimum cuts in the graph, and naturally extends to vertex cuts as well. At the heart of the algorithm is a graph sampling approach usually used in the context of fault tolerant (FT) design.
- Deterministic Algorithms: While the existing distributed minimum cut algorithms are randomized, our algorithm can be made deterministic within the same round complexity. To obtain this, we introduce a novel definition of universal sets along with their efficient computation. This allows us to derandomize the FT graph sampling technique, which might be of independent interest.
- Computation of all Edge Connectivities: We also consider the more general task of computing the edge connectivity of all the edges in the graph. In the output format, it is required that the endpoints u,v of every edge (u,v) learn the cardinality of the minimum u-v cut in the graph. We provide the first sublinear algorithm for this problem for the case of constant connectivity values. Specifically, by using the recent notion of low-congestion cycle cover, combined with the sampling technique, we compute all edge connectivities in poly(D) * 2^{O(sqrt{log n log log n})} rounds.
- Sparse Certificates: For an n-vertex graph G and an integer lambda, a lambda-sparse certificate H is a subgraph H subseteq G with O(lambda n) edges which is lambda-connected iff G is lambda-connected. For D-diameter graphs, constructions of sparse certificates for lambda in {2,3} have been provided by Thurimella [J. Alg. '97] and Dory [PODC '18] respectively using O~(D) rounds. The problem of devising such certificates with o(D+sqrt{n}) rounds was left open by Dory [PODC '18] for any lambda >= 4. Using connections to fault tolerant spanners, we considerably improve the round complexity for any lambda in [1,n] and epsilon in (0,1), by showing a construction of (1-epsilon)lambda-sparse certificates with O(lambda n) edges using only O(1/epsilon^2 * log^{2+o(1)} n) rounds.
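The fault-tolerant sampling idea behind the minimum-cut result can be illustrated centrally: if each edge survives independently with constant probability, then with constant probability all lambda edges of a small cut are simultaneously deleted while both sides remain internally connected, and the resulting component split certifies a cut of the original graph. Below is a minimal sequential sketch of this principle (not the paper's CONGEST algorithm; the helper names and parameters are invented for the demo).

```python
import random
from collections import defaultdict, deque

def component_of(n, edges, s=0):
    """Return the set of vertices reachable from s using the given edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen

def sampled_min_cut(n, edges, keep_prob=0.5, trials=400, seed=1):
    """Estimate a small edge cut by repeated subsampling. Whenever the
    sampled subgraph is disconnected, the edges of the ORIGINAL graph
    crossing the resulting vertex split form a cut; we keep the smallest
    one seen. Succeeds with high probability when the min cut is small."""
    rng = random.Random(seed)
    best = [e for e in edges if 0 in e]  # trivial cut: isolate vertex 0
    for _ in range(trials):
        sample = [e for e in edges if rng.random() < keep_prob]
        side = component_of(n, sample)
        if len(side) < n:  # sampled graph is disconnected
            crossing = [(u, v) for u, v in edges
                        if (u in side) != (v in side)]
            if len(crossing) < len(best):
                best = crossing
    return best
```

On two triangles joined by a single bridge, the bridge is dropped in about half the trials while each triangle stays connected often enough that the size-1 cut is found essentially always.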
Approximating Approximate Distance Oracles
Given a finite metric space (V,d), an approximate distance oracle is a data structure which, when queried on two points u,v in V, returns an approximation to the actual distance between u and v which is within some bounded stretch factor of the true distance. There has been significant work on the tradeoff between the important parameters of approximate distance oracles (in particular the size, stretch, and query time), but in this paper we take a different point of view: that of per-instance optimization. If we are given a particular input metric space and stretch bound, can we find the smallest possible approximate distance oracle for that particular input? Since this question is not even well-defined, we restrict our attention to well-known classes of approximate distance oracles, and study whether we can optimize over those classes.
In particular, we give an O(log n)-approximation to the problem of finding the smallest stretch-3 Thorup-Zwick distance oracle, as well as to the problem of finding the smallest Pătraşcu-Roditty distance oracle. We also prove a matching Omega(log n) lower bound for both problems, and an Omega(n^{1/k - 1/2^{k-1}}) integrality gap for the more general stretch-(2k-1) Thorup-Zwick distance oracle. We also consider the problem of approximating the best TZ or PR approximate distance oracle with outliers, and show that more advanced techniques (SDP relaxations in particular) allow us to optimize even in the presence of outliers.
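As a reference point for the object being optimized, here is a minimal version of the stretch-3 Thorup-Zwick scheme: sample a landmark set A of expected size about sqrt{n}, store exact distances from each landmark, and for each vertex store its "bunch" (the vertices strictly closer to it than its nearest landmark). Queries answer exactly inside a bunch and otherwise detour through the nearest landmark, which gives stretch at most 3. The preprocessing below runs Dijkstra from every vertex for simplicity; that is a demo shortcut, not the efficient TZ construction, and the graph is assumed connected.

```python
import heapq
import random

def dijkstra(adj, sources):
    """Distances from the nearest of the given source vertices."""
    dist = {s: 0 for s in sources}
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

class StretchThreeOracle:
    def __init__(self, n, edges, seed=0):
        rng = random.Random(seed)
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w))
            adj[v].append((u, w))
        # Landmarks: each vertex sampled with probability ~ n^{-1/2}.
        A = [v for v in range(n) if rng.random() < n ** -0.5] or [0]
        self.land_dist = {a: dijkstra(adj, [a]) for a in A}
        # Nearest landmark p(v), and bunch B(v) = {w : d(v,w) < d(v, A)}.
        self.p, self.dp, self.bunch = {}, {}, {}
        for v in range(n):
            self.p[v] = min(A, key=lambda a: self.land_dist[a][v])
            self.dp[v] = self.land_dist[self.p[v]][v]
            dv = dijkstra(adj, [v])  # demo-only: full Dijkstra per vertex
            self.bunch[v] = {w: d for w, d in dv.items() if d < self.dp[v]}

    def query(self, u, v):
        if v in self.bunch[u]:
            return self.bunch[u][v]  # exact distance
        # v is at least as far from u as u's landmark, so the detour
        # d(u, p(u)) + d(p(u), v) is at most 3 * d(u, v).
        return self.dp[u] + self.land_dist[self.p[u]][v]
```

The space used is exactly the landmark tables plus the bunches, which is the quantity the per-instance optimization in the abstract would try to minimize over valid landmark choices.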
Efficiently computing maximum flows in scale-free networks
We study the maximum-flow/minimum-cut problem on scale-free networks, i.e., graphs whose degree distribution follows a power law. We propose a simple algorithm that capitalizes on the fact that often only a small fraction of such a network is relevant for the flow. At its core, our algorithm augments Dinitz's algorithm with a balanced bidirectional search. Our experiments on a scale-free random network model indicate sublinear run time. On scale-free real-world networks, we outperform the commonly used highest-label Push-Relabel implementation by up to two orders of magnitude. Compared to Dinitz's original algorithm, our modifications reduce the search space, e.g., by a factor of 275 on an autonomous systems graph.
Beyond these good run times, our algorithm has an additional advantage over Push-Relabel: the latter computes a preflow, which makes the extraction of a minimum cut potentially more difficult. This is relevant, for example, for the computation of Gomory-Hu trees. On a social network with 70000 nodes, our algorithm computes the Gomory-Hu tree in 3 seconds, compared to 12 minutes when using Push-Relabel.
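For reference, the base algorithm the paper modifies looks as follows: a standard textbook Dinitz implementation (BFS level graph plus DFS blocking flow), without the balanced bidirectional search that the paper adds on top.

```python
from collections import deque

class Dinitz:
    """Textbook Dinitz max-flow: BFS builds a level graph, DFS finds a
    blocking flow in it; repeat until t is unreachable in the residual graph."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # entries: [to, residual_cap, rev_idx]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])

    def _bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, cap))
                if d > 0:
                    e[1] -= d                 # push flow along the edge
                    self.adj[v][rev][1] += d  # open up the reverse edge
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n  # per-phase edge pointers
            f = self._dfs(s, t, float('inf'))
            while f:
                flow += f
                f = self._dfs(s, t, float('inf'))
        return flow
```

The unidirectional BFS in `_bfs` is exactly the step the paper replaces: on a scale-free network, growing the search from both endpoints and alternating to the smaller frontier keeps the explored region far smaller than a full forward sweep.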