Computing Approximate Equilibria in Weighted Congestion Games via Best-Responses
We present a deterministic polynomial-time algorithm for computing
$d^{d+o(d)}$-approximate (pure) Nash equilibria in weighted congestion games
with polynomial cost functions of degree at most $d$. This is an exponential
improvement of the approximation factor with respect to the previously best
deterministic algorithm. An appealing additional feature of our algorithm is
that it uses only best-improvement steps in the actual game, as opposed to
earlier approaches that first had to transform the game itself. Our algorithm
is an adaptation of the seminal algorithm by Caragiannis et al. [FOCS'11, TEAC
2015], but we utilize an approximate potential function directly on the
original game instead of an exact one on a modified game.
A critical component of our analysis, which is of independent interest, is
the derivation of a novel bound of $\left(d/\mathcal{W}(d/\rho)\right)^{d+1}$ for the
Price of Anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion
games, where $\mathcal{W}$ is the Lambert-W function. More specifically, we
show that this PoA is exactly equal to $x^{d+1}$, where $x$
is the unique positive solution of the equation $x e^{-d/x} = \rho$. Our upper bound is derived via a smoothness-like argument,
and thus holds even for mixed Nash and correlated equilibria, while our lower
bound is simple enough to apply even to singleton congestion games.
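To illustrate the kind of dynamics such an algorithm runs, here is a minimal sketch (not the paper's algorithm; the two-player instance is made up) of $\rho$-best-improvement steps in a weighted congestion game with polynomial resource costs: a player deviates only when the switch lowers its cost by more than a factor $\rho$, and the process stops at a $\rho$-approximate equilibrium.

```python
# Sketch only: rho-best-improvement dynamics in a tiny weighted congestion
# game. Instance, names, and parameters are illustrative assumptions.

def player_cost(i, strategy, profile, weights, cost):
    """Cost of player i if it unilaterally plays `strategy` (a tuple of resources)."""
    total = 0.0
    for r in strategy:
        others = sum(weights[j] for j, s in enumerate(profile) if j != i and r in s)
        total += cost[r](others + weights[i])  # load-dependent resource cost
    return total

def best_response_dynamics(strategies, weights, cost, rho=1.01, max_rounds=1000):
    """Run rho-best-improvement steps until a rho-approximate equilibrium."""
    profile = [s[0] for s in strategies]  # arbitrary starting profile
    for _ in range(max_rounds):
        improved = False
        for i, options in enumerate(strategies):
            current = player_cost(i, profile[i], profile, weights, cost)
            best = min(options, key=lambda s: player_cost(i, s, profile, weights, cost))
            if player_cost(i, best, profile, weights, cost) * rho < current:
                profile[i] = best  # improvement by more than a factor rho
                improved = True
        if not improved:
            break  # no player can improve by more than a factor rho
    return profile

# Two resources with degree-2 polynomial costs, two weighted players.
cost = {"a": lambda x: x ** 2, "b": lambda x: 2 * x ** 2}
strategies = [[("a",), ("b",)], [("a",), ("b",)]]  # each player picks one resource
weights = [1.0, 2.0]
print(best_response_dynamics(strategies, weights, cost))  # → [('b',), ('a',)]
```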
Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time
In the decremental single-source shortest paths (SSSP) problem we want to
maintain the distances between a given source node $s$ and every other node in
an $n$-node $m$-edge graph undergoing edge deletions. While its static
counterpart can be solved in near-linear time, this decremental problem is much
more challenging even in the undirected unweighted case. In this case, the
classic $O(mn)$ total update time of Even and Shiloach [JACM 1981] has been the
fastest known algorithm for three decades. At the cost of a
$(1+\epsilon)$-approximation factor, the running time was recently improved to
$n^{2+o(1)}$ by Bernstein and Roditty [SODA 2011]. In this paper, we bring the
running time down to near-linear: We give a $(1+\epsilon)$-approximation
algorithm with $m^{1+o(1)}$ expected total update time, thus obtaining
near-linear time. Moreover, we obtain $m^{1+o(1)} \log W$ time for the weighted
case, where the edge weights are integers from $1$ to $W$. The only prior work
on weighted graphs in $o(mn)$ time is the $\tilde{O}(mn^{0.9+o(1)})$-time algorithm by
Henzinger et al. [STOC 2014, ICALP 2015], which works for directed graphs with
quasi-polynomial edge weights. The expected running time bound of our algorithm
holds against an oblivious adversary.
In contrast to the previous results, which rely on maintaining a sparse
emulator, our algorithm relies on maintaining a so-called sparse $(h, \epsilon)$-hop set introduced by Cohen [JACM 2000] in the PRAM literature. An
$(h, \epsilon)$-hop set of a graph $G = (V, E)$ is a set $E'$ of weighted edges
such that the distance between any pair of nodes in $G$ can be
$(1+\epsilon)$-approximated by their $h$-hop distance (given by a path
containing at most $h$ edges) on $G' = (V, E \cup E')$. Our algorithm can maintain
an $(n^{o(1)}, \epsilon)$-hop set of near-linear size in near-linear time under
edge deletions.
Comment: Accepted to Journal of the ACM. A preliminary version of this paper
was presented at the 55th IEEE Symposium on Foundations of Computer Science
(FOCS 2014). Abstract shortened to respect the arXiv limit of 1920 characters.
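The hop-limited distance in the hop-set definition (the shortest path using at most h edges) can be computed by capping Bellman-Ford at h relaxation rounds. A small self-contained sketch on a made-up toy graph:

```python
# Sketch of h-hop distances: shortest source->v paths restricted to at most
# h edges, via h rounds of Bellman-Ford relaxation. Toy instance only.

def h_hop_distances(n, edges, source, h):
    """dist[v] = length of the shortest source->v path with at most h edges."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(h):  # one relaxation round per allowed hop
        new = dist[:]
        for u, v, w in edges:  # undirected: relax both directions
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
            if dist[v] + w < new[u]:
                new[u] = dist[v] + w
        dist = new
    return dist

# Path 0-1-2-3 with unit weights, plus a heavy one-hop shortcut 0-3.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 4.0)]
print(h_hop_distances(4, edges, 0, h=1))  # → [0.0, 1.0, inf, 4.0]
print(h_hop_distances(4, edges, 0, h=3))  # → [0.0, 1.0, 2.0, 3.0]
```

Adding a sparse set of well-chosen shortcut edges (the hop set) is what lets a small h already approximate true distances.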
Efficient Algorithms and Hardness Results for the Weighted k-Server Problem
In this paper, we study the weighted $k$-server problem on the uniform metric
in both the offline and online settings. We start with the offline setting. In
contrast to the (unweighted) $k$-server problem, which has a polynomial-time
solution using min-cost flows, there are strong computational lower bounds for
the weighted $k$-server problem, even on the uniform metric. Specifically, we
show that, assuming the unique games conjecture, there are no polynomial-time
algorithms with a sub-polynomial approximation factor, even if we allow
resource augmentation. Furthermore, if we consider the natural
LP relaxation of the problem, then obtaining a bounded integrality gap requires
us to use at least $\ell$-resource augmentation, where $\ell$ is the number of
distinct server weights. We complement these results by obtaining a
constant-approximation algorithm via LP rounding, with a resource augmentation
of $(1+\epsilon)\ell$ for any constant $\epsilon > 0$.
In the online setting, a doubly exponential lower bound is known for the competitive
ratio of any randomized algorithm for the weighted $k$-server problem on the
uniform metric. In contrast, we show that $O(\ell)$-resource augmentation can
bring the competitive ratio down by an exponential factor. Our online algorithm uses the two-stage approach of first
obtaining a fractional solution using the online primal-dual framework, and
then rounding it online.
Comment: This paper will appear in the proceedings of APPROX 202
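To make the offline objective concrete, here is a brute-force sketch (purely illustrative, exponential in the number of points, and not one of the paper's algorithms): exact dynamic programming over server configurations on a uniform metric, where moving a server costs its weight.

```python
# Illustration of the weighted k-server objective on a uniform metric:
# exact offline optimum by DP over configurations. Toy instances only --
# the state space is |points|^k. Names and the instance are assumptions.

from itertools import product

def offline_weighted_k_server(points, weights, start, requests):
    """Minimum total movement cost; moving server i costs weights[i]."""
    INF = float("inf")
    states = {c: INF for c in product(points, repeat=len(weights))}
    states[tuple(start)] = 0.0
    for r in requests:
        nxt = {c: INF for c in states}
        for conf, cost in states.items():
            if cost == INF:
                continue
            for i, w in enumerate(weights):  # send server i to the request
                moved = conf[:i] + (r,) + conf[i + 1:]
                extra = 0.0 if conf[i] == r else w  # uniform metric: flat cost w
                if cost + extra < nxt[moved]:
                    nxt[moved] = cost + extra
        states = nxt
    return min(states.values())

# A light (weight 1) and a heavy (weight 10) server on three points:
# the optimum shuttles the light server and never moves the heavy one.
print(offline_weighted_k_server([0, 1, 2], [1.0, 10.0], [0, 1], [2, 0, 2, 0]))  # → 4.0
```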
A Satisfiability Algorithm for Sparse Depth Two Threshold Circuits
We give a nontrivial algorithm for the satisfiability problem for cn-wire
threshold circuits of depth two which is better than exhaustive search by a
factor 2^{sn} where s= 1/c^{O(c^2)}. We believe that this is the first
nontrivial satisfiability algorithm for cn-wire threshold circuits of depth
two. The independently interesting problem of the feasibility of sparse 0-1
integer linear programs is a special case. To our knowledge, our algorithm is
the first to achieve constant savings even for the special case of Integer
Linear Programming. The key idea is to reduce the satisfiability problem to the
Vector Domination Problem, the problem of checking whether there are two
vectors in a given collection of vectors such that one dominates the other
component-wise.
We also provide a satisfiability algorithm with constant savings for depth
two circuits with symmetric gates where the total weighted fan-in is at most
cn.
One of our motivations is proving strong lower bounds for TC^0 circuits,
exploiting the connection (established by Williams) between satisfiability
algorithms and lower bounds. Our second motivation is to explore the connection
between the expressive power of the circuits and the complexity of the
corresponding circuit satisfiability problem.
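The reduction target can be made concrete with a naive quadratic-time check for the Vector Domination Problem; the paper's point is beating this baseline for the structured instances that arise. The instances below are made up.

```python
# Naive baseline for the Vector Domination Problem: does some vector in the
# collection dominate another component-wise? O(n^2 * d) scan, sketch only.

def find_dominating_pair(vectors):
    """Return (i, j) with vectors[i] >= vectors[j] in every coordinate, else None."""
    for i, u in enumerate(vectors):
        for j, v in enumerate(vectors):
            if i != j and all(a >= b for a, b in zip(u, v)):
                return i, j
    return None

print(find_dominating_pair([(1, 5), (2, 3), (4, 0)]))  # → None (no pair dominates)
print(find_dominating_pair([(1, 5), (2, 3), (1, 4)]))  # → (0, 2): (1,5) dominates (1,4)
```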
Constant Factor Approximation for Capacitated k-Center with Outliers
The $k$-center problem is a classic facility location problem: given an
edge-weighted graph $G = (V, E)$, one is to find a subset of vertices $S \subseteq V$
of size at most $k$, such that each vertex in $V$ is "close" to some vertex in $S$. The
approximation status of this basic problem is well understood, as a simple
2-approximation algorithm is known to be tight. Consequently, different
extensions were studied.
In the capacitated version of the problem each vertex is assigned a capacity,
which is a strict upper bound on the number of clients a facility can serve
when located at this vertex. A constant factor approximation for the
capacitated $k$-center was obtained last year by Cygan, Hajiaghayi and Khuller
[FOCS'12], which was recently improved to a 9-approximation by An, Bhaskara and
Svensson [arXiv'13].
In a different generalization of the problem some clients (denoted as
outliers) may be disregarded. Here we are additionally given an integer $p$, and
the goal is to serve exactly $p$ clients, which the algorithm is free to
choose. In 2001 Charikar et al. [SODA'01] presented a 3-approximation for the
$k$-center problem with outliers.
In this paper we consider a common generalization of the two extensions
previously studied separately, i.e. we work with the capacitated $k$-center
with outliers. We present the first constant factor approximation algorithm,
with an approximation ratio of 25, even for the case of non-uniform hard
capacities.
Comment: 15 pages, 3 figures, accepted to STACS 201
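The simple tight 2-approximation for the basic (uncapacitated, outlier-free) problem mentioned above is farthest-point greedy: repeatedly open a center at the point farthest from the centers chosen so far. A minimal sketch on a made-up line-metric instance:

```python
# Farthest-point greedy for plain k-center: a 2-approximation sketch.
# Distances are given as a plain matrix; the instance is illustrative.

def k_center_greedy(dist, k, first=0):
    """Return k center indices and the covering radius (within 2x of optimal)."""
    n = len(dist)
    centers = [first]
    d = list(dist[first])  # d[v] = distance from v to its nearest center
    for _ in range(k - 1):
        far = max(range(n), key=lambda v: d[v])  # farthest uncovered point
        centers.append(far)
        d = [min(d[v], dist[far][v]) for v in range(n)]
    return centers, max(d)

# Four points on a line at positions 0, 1, 8, 9 (distance = absolute difference).
pos = [0, 1, 8, 9]
dist = [[abs(a - b) for b in pos] for a in pos]
print(k_center_greedy(dist, k=2))  # → ([0, 3], 1): one center per cluster, radius 1
```

The capacitated and outlier variants studied in the paper break exactly this greedy argument, which is why LP-based techniques are needed there.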
QoS Constrained Optimal Sink and Relay Placement in Planned Wireless Sensor Networks
We are given a set of sensors at given locations, a set of potential
locations for placing base stations (BSs, or sinks), and another set of
potential locations for placing wireless relay nodes. There is a cost for
placing a BS and a cost for placing a relay. The problem we consider is to
select a set of BS locations, a set of relay locations, and an association of
sensor nodes with the selected BS locations, so that the number of hops in the path
from each sensor to its BS is bounded by hmax and, among all such feasible
networks, the cost of the selected network is minimized. The hop count bound
suffices to ensure a certain probability of the data being delivered to the BS
within a given maximum delay under a light traffic model. We observe that the
problem is NP-Hard, and is hard to even approximate within a constant factor.
For this problem, we propose a polynomial time approximation algorithm
(SmartSelect) based on a relay placement algorithm proposed in our earlier
work, along with a modification of the greedy algorithm for weighted set cover.
We have analyzed the worst case approximation guarantee for this algorithm. We
have also proposed a polynomial time heuristic to improve upon the solution
provided by SmartSelect. Our numerical results demonstrate that the algorithms
provide good quality solutions using very little computation time in various
randomly generated network scenarios.
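For reference, the classic greedy rule for weighted set cover that SmartSelect adapts picks, in each step, the set with the smallest cost per newly covered element. A minimal sketch on a made-up instance (not the sensor-network formulation itself):

```python
# Greedy weighted set cover sketch: pick the set minimizing cost per newly
# covered element until the universe is covered. Illustrative instance only.

def greedy_weighted_set_cover(universe, sets, costs):
    """Return indices of chosen sets covering `universe` (H_n-approximate)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
costs = [3.0, 1.0, 4.0, 1.0]
print(greedy_weighted_set_cover(universe, sets, costs))  # → [1, 3, 0]
```

In the paper's setting the "elements" are sensors and the "sets" correspond to selected BS/relay structures, with the hop bound constraining which sets are feasible.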