Simpler and Better Algorithms for Minimum-Norm Load Balancing
Recently, Chakrabarty and Swamy (STOC 2019) introduced the minimum-norm load-balancing problem on unrelated machines, wherein we are given a set J of jobs that need to be scheduled on a set of m unrelated machines, and a monotone, symmetric norm. We seek an assignment sigma: J -> [m] that minimizes the norm of the resulting load vector load_{sigma} in R_+^m, where load_{sigma}(i) is the load on machine i under the assignment sigma. Besides capturing all l_p norms, symmetric norms also capture other norms of interest, including top-l norms and ordered norms. Chakrabarty and Swamy give a (38+epsilon)-approximation algorithm for this problem via a general framework they develop for minimum-norm optimization, which proceeds by first carefully reducing the problem (in a series of steps) to a problem called min-max ordered load balancing, and then devising a so-called deterministic oblivious LP-rounding algorithm for ordered load balancing.
We give a direct and simple (4+epsilon)-approximation algorithm for minimum-norm load balancing based on rounding a (near-optimal) solution to a novel convex-programming relaxation of the problem. Whereas the natural convex program encoding the minimum-norm load-balancing problem has a large, non-constant integrality gap, we show that this issue can be remedied by including a key constraint that bounds the "norm of the job-cost vector." Our techniques also yield an (essentially) 4-approximation for: (a) multi-norm load balancing, wherein we are given multiple monotone symmetric norms and seek an assignment respecting a given budget for each norm; and (b) the best simultaneous approximation factor achievable for all symmetric norms for a given instance.
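To make the norms mentioned above concrete, here is a small illustrative sketch (not from the paper): the top-l norm of a load vector is the sum of its l largest coordinates, and an ordered norm is a weighted sum of the coordinates sorted in non-increasing order, with non-increasing weights.

```python
def top_l_norm(load, l):
    """Top-l norm: sum of the l largest entries of the load vector."""
    return sum(sorted(load, reverse=True)[:l])

def ordered_norm(load, w):
    """Ordered norm: dot product of non-increasing weights w with the
    load vector sorted in non-increasing order."""
    s = sorted(load, reverse=True)
    return sum(wi * si for wi, si in zip(w, s))

load = [5, 2, 7, 1]
print(top_l_norm(load, 2))               # 12 (= 7 + 5)
print(ordered_norm(load, [3, 2, 1, 1]))  # 34 (= 3*7 + 2*5 + 1*2 + 1*1)
```

Note that l_1 and l_infinity arise as the special cases l = m and l = 1 of the top-l norm.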
Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences
We study mechanism design problems in the {\em ordinal setting} wherein the
preferences of agents are described by orderings over outcomes, as opposed to
specific numerical values associated with them. This setting is relevant when
agents can compare outcomes, but aren't able to evaluate precise utilities for
them. Such a situation arises in diverse contexts including voting and matching
markets.
Our paper addresses two issues that arise in ordinal mechanism design. To
design social welfare maximizing mechanisms, one needs to be able to
quantitatively measure the welfare of an outcome which is not clear in the
ordinal setting. Second, since the impossibility results of Gibbard and
Satterthwaite~\cite{Gibbard73,Satterthwaite75} force one to move to randomized
mechanisms, one needs a more nuanced notion of truthfulness.
We propose {\em rank approximation} as a metric for measuring the quality of
an outcome, which allows us to evaluate mechanisms based on worst-case
performance, and {\em lex-truthfulness} as a notion of truthfulness for
randomized ordinal mechanisms. Lex-truthfulness is stronger than notions
studied in the literature, and yet flexible enough to admit a rich class of
mechanisms {\em circumventing classical impossibility results}. We demonstrate
the usefulness of the above notions by devising lex-truthful mechanisms
achieving good rank-approximation factors, both in the general ordinal setting,
as well as structured settings such as {\em (one-sided) matching markets}, and
its generalizations, {\em matroid} and {\em scheduling} markets.
The Non-Uniform k-Center Problem
In this paper, we introduce and study the Non-Uniform k-Center problem
(NUkC). Given a finite metric space (X, d) and a collection of balls of radii
r_1 >= r_2 >= ... >= r_k, the NUkC problem is to find a placement of their
centers in the metric space and the minimum dilation alpha, such that
the union of the balls of radius alpha * r_i around the i-th center covers
all the points in X. This problem naturally arises as a min-max vehicle
routing problem with fleets of different speeds.
The NUkC problem generalizes the classic k-center problem when all the
radii are the same (which can be assumed to be 1 after scaling). It also
generalizes the k-center with outliers (kCwO) problem when there are k
balls of radius 1 and ell balls of radius 0. There are 2-approximation
and 3-approximation algorithms known for these problems respectively; the
former is best possible unless P=NP and the latter has remained unimproved for 15
years.
We first observe that no O(1)-approximation to the optimal dilation is
possible unless P=NP, implying that the NUkC problem is more non-trivial than
the above two problems. Our main algorithmic result is an
(O(1), O(1))-bi-criteria approximation result: we give an O(1)-approximation
to the optimal dilation; however, we may open O(1) centers of each
radius. Our techniques also allow us to prove a simple (uni-criteria), optimal
2-approximation to the kCwO problem, improving upon the long-standing
3-factor. Our main technical contribution is a connection between the NUkC
problem and the so-called firefighter problems on trees, which have been studied
recently in the TCS community.
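A minimal sketch of the feasibility question underlying NUkC (helper names are my own, not the paper's): given candidate centers with their radii and a dilation alpha, check whether the dilated balls cover every point.

```python
def covers(points, centers, radii, alpha, dist):
    """True iff every point lies within alpha * r_i of some center c_i."""
    return all(
        any(dist(p, c) <= alpha * r for c, r in zip(centers, radii))
        for p in points
    )

# Toy instance on the real line with dist(x, y) = |x - y|.
points = [0.0, 1.0, 2.0, 10.0]
centers = [1.0, 10.0]
radii = [1.0, 0.5]
d = lambda x, y: abs(x - y)
print(covers(points, centers, radii, 1.0, d))  # True
print(covers(points, centers, radii, 0.5, d))  # False: 0.0 and 2.0 uncovered
```

The optimization problem is to choose the centers so that the smallest alpha passing this check is minimized; the hardness result above says even approximating that alpha to within any constant is NP-hard.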
Integrality Gap of the Hypergraphic Relaxation of Steiner Trees: a short proof of a 1.55 upper bound
Recently Byrka, Grandoni, Rothvoss and Sanita (at STOC 2010) gave a
1.39-approximation for the Steiner tree problem, using a hypergraph-based
linear programming relaxation. They also upper-bounded its integrality gap by
1.55. We describe a shorter proof of the same integrality gap bound, by
applying some of their techniques to a randomized loss-contracting algorithm.
Optimal Lower Bounds for Universal and Differentially Private Steiner Tree and TSP
Given a metric space on n points, an {\alpha}-approximate universal algorithm
for the Steiner tree problem outputs a distribution over rooted spanning trees
such that for any subset X of vertices containing the root, the expected cost
of the induced subtree is within an {\alpha} factor of the optimal Steiner tree
cost for X. An {\alpha}-approximate differentially private algorithm for the
Steiner tree problem takes as input a subset X of vertices, and outputs a tree
distribution that induces a solution within an {\alpha} factor of the optimal
as before, and satisfies the additional property that for any set X' that
differs in a single vertex from X, the tree distributions for X and X' are
"close" to each other. Universal and differentially private algorithms for TSP
are defined similarly. An {\alpha}-approximate universal algorithm for the
Steiner tree problem or TSP is also an {\alpha}-approximate differentially
private algorithm. It is known that both problems admit O(log n)-approximate
universal algorithms, and hence O(log n)-approximate differentially private
algorithms as well. We prove an {\Omega}(log n) lower bound on the approximation
ratio achievable for the universal Steiner tree problem and the universal TSP,
matching the known upper bounds. Our lower bound for the Steiner tree problem
holds even when the algorithm is allowed to output a more general solution of a
distribution on paths to the root.
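As a sketch of the quantity being approximated (function and variable names are my own): given a rooted spanning tree as a parent map, the subtree induced by a terminal set X containing the root is the union of the root-paths of the vertices in X, and its cost is the total weight of the edges used.

```python
def induced_subtree_cost(parent, weight, root, X):
    """Cost of the minimal subtree of the rooted tree spanning X."""
    used = set()
    for v in X:
        # Walk up to the root, stopping early if the path merges
        # with edges already accounted for.
        while v != root and (v, parent[v]) not in used:
            used.add((v, parent[v]))
            v = parent[v]
    return sum(weight[e] for e in used)

# Small tree rooted at 'r': edges a-r (2), b-a (1), c-r (4).
parent = {'a': 'r', 'b': 'a', 'c': 'r'}
weight = {('a', 'r'): 2, ('b', 'a'): 1, ('c', 'r'): 4}
print(induced_subtree_cost(parent, weight, 'r', {'r', 'b'}))  # 3 (edges b-a, a-r)
```

A universal algorithm must commit to one tree distribution up front so that this induced cost is competitive with the optimal Steiner tree for every X simultaneously; the lower bound above says an Omega(log n) gap is unavoidable.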
Adaptive Boolean Monotonicity Testing in Total Influence Time
Testing monotonicity of a Boolean function f:{0,1}^n -> {0,1} is an important problem in the field of property testing. It has led to connections with many interesting combinatorial questions on the directed hypercube: routing, random walks, and new isoperimetric theorems. Denoting the proximity parameter by epsilon, the best tester is the non-adaptive O~(epsilon^{-2}sqrt{n}) tester of Khot-Minzer-Safra (FOCS 2015). A series of recent results by Belovs-Blais (STOC 2016) and Chen-Waingarten-Xie (STOC 2017) have led to Omega~(n^{1/3}) lower bounds for adaptive testers. Reducing this gap is a significant question, that touches on the role of adaptivity in monotonicity testing of Boolean functions.
We approach this question from the perspective of parametrized property testing, a concept recently introduced by Pallavoor-Raskhodnikova-Varma (ACM TOCT 2017), where one seeks to understand the performance of testers with respect to parameters other than just the size. Our result is an adaptive monotonicity tester with one-sided error whose query complexity is O(epsilon^{-2}I(f)log^5 n), where I(f) is the total influence of the function. Therefore, adaptivity provably helps monotonicity testing for low-influence functions.
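For intuition about the parameter I(f), here is a brute-force sketch (exponential in n, for illustration only): the total influence of f: {0,1}^n -> {0,1} is the expected number of coordinates whose flip changes the value of f at a uniformly random input.

```python
from itertools import product

def total_influence(f, n):
    """I(f): average, over all 2^n inputs x, of the number of
    coordinates i such that flipping x_i changes f(x)."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

# A dictator function f(x) = x_0 has total influence 1.
print(total_influence(lambda x: x[0], 3))          # 1.0
# The parity function has total influence n.
print(total_influence(lambda x: sum(x) % 2, 3))    # 3.0
```

Since I(f) <= n always, and I(f) = O(sqrt{n}) for monotone functions, the O(epsilon^{-2}I(f)log^5 n) bound is never much worse than the sqrt{n}-type bounds and is far better when the influence is small.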
A Primal-Dual Analysis of Monotone Submodular Maximization
In this paper we design a new primal-dual algorithm for the classic discrete
optimization problem of maximizing a monotone submodular function subject to a
cardinality constraint, achieving the optimal approximation of 1 - 1/e. This
problem and its special case, the maximum k-coverage problem, have a wide
range of applications in various fields including operations research, machine
learning, and economics. While greedy algorithms have been known to achieve
this approximation factor, our algorithms also provide a dual certificate which
upper bounds the optimum value of any instance. This certificate may be used in
practice to certify much stronger guarantees than the worst-case
approximation factor.