Query-Efficient Algorithms to Find the Unique Nash Equilibrium in a Two-Player Zero-Sum Matrix Game
We study the query complexity of identifying Nash equilibria in two-player
zero-sum matrix games. Grigoriadis and Khachiyan (1995) showed that any
deterministic algorithm needs to query $\Omega(n^2)$ entries in the worst case
from an $n \times n$ input matrix in order to compute an $\epsilon$-approximate
Nash equilibrium, where $\epsilon < \frac{1}{2}$. Moreover, they designed a
randomized algorithm that queries $O\left(\frac{n \log n}{\epsilon^2}\right)$
entries from the input matrix in expectation and returns an
$\epsilon$-approximate Nash equilibrium when the entries of the matrix are
bounded between $-1$ and $1$. However, these two results do not completely
characterize the query complexity of finding an exact Nash equilibrium in
two-player zero-sum matrix games. In this work, we characterize the query
complexity of finding an exact Nash equilibrium for two-player zero-sum matrix
games that have a unique Nash equilibrium $(x_\star, y_\star)$. We first show
that any randomized algorithm needs to query $\Omega(nk)$ entries of the input
matrix $A \in \mathbb{R}^{n \times n}$ in expectation in order to find the
unique Nash equilibrium, where $k = |\mathrm{supp}(x_\star)|$. We complement
this lower bound by presenting a simple randomized algorithm that, with
probability $1 - \delta$, returns the unique Nash equilibrium by querying at
most $O(nk^4 \cdot \mathrm{polylog}(n/\delta))$ entries of the input matrix
$A \in \mathbb{R}^{n \times n}$. In the special case when the unique Nash
equilibrium is a pure-strategy Nash equilibrium (PSNE), we design a simple
deterministic algorithm that finds the PSNE by querying at most $O(n)$
entries of the input matrix.

Comment: 17 pages
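To make the pure-strategy case concrete: a saddle point of the payoff matrix, an entry that is simultaneously the maximum of its column and the minimum of its row, is a PSNE of the zero-sum game. The naive scan below reads every entry, which is the brute-force baseline that a query-efficient deterministic algorithm improves on; the function name is illustrative, not from the paper.

```python
import numpy as np

def find_psne_naive(A):
    """Return (i, j) such that A[i, j] is a saddle point: the minimum of
    row i and the maximum of column j. In a zero-sum game (row player
    maximizes) this is a pure-strategy Nash equilibrium. This brute-force
    scan reads all n*m entries, far more than a query-efficient method."""
    n, m = A.shape
    for i in range(n):
        for j in range(m):
            if A[i, j] == A[i, :].min() and A[i, j] == A[:, j].max():
                return i, j
    return None  # no PSNE exists (e.g., matching pennies)

# Saddle point at (1, 0): 3 is the minimum of its row and maximum of its column.
A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
# find_psne_naive(A) -> (1, 0)
```

Matrices without a saddle point (such as matching pennies) only have mixed-strategy equilibria, which is why the general problem needs randomized query strategies.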
Outlaw distributions and locally decodable codes
Locally decodable codes (LDCs) are error correcting codes that allow for
decoding of a single message bit using a small number of queries to a corrupted
encoding. Despite decades of study, the optimal trade-off between query
complexity and codeword length is far from understood. In this work, we give a
new characterization of LDCs using distributions over Boolean functions whose
expectation is hard to approximate (in $L_\infty$ norm) with a small number of
samples. We coin the term `outlaw distributions' for such distributions since
they `defy' the Law of Large Numbers. We show that the existence of outlaw
distributions over sufficiently `smooth' functions implies the existence of
constant query LDCs and vice versa. We give several candidates for outlaw
distributions over smooth functions coming from finite field incidence
geometry, additive combinatorics and from hypergraph (non)expanders.
We also prove a useful lemma showing that (smooth) LDCs which are only
required to work on average over a random message and a random message index
can be turned into true LDCs at the cost of only constant factors in the
parameters.

Comment: A preliminary version of this paper appeared in the proceedings of
ITCS 2017
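The flavor of constant-query local decoding can be illustrated with the classic Hadamard code, a 2-query LDC from the broader LDC literature (not a construction from this paper); helper names below are illustrative.

```python
import itertools
import random

def hadamard_encode(msg_bits):
    """Hadamard code: the codeword has one entry per y in {0,1}^n,
    namely the inner product <msg, y> mod 2. Length 2^n for n message bits."""
    n = len(msg_bits)
    return [sum(m & y_i for m, y_i in zip(msg_bits, y)) % 2
            for y in itertools.product((0, 1), repeat=n)]

def decode_bit(codeword, n, i, rng):
    """Recover message bit i with only 2 queries: pick a uniformly random y
    and return C[y] XOR C[y + e_i]. By linearity this equals msg[i], and it
    still succeeds with high probability if a small fraction of the
    codeword is corrupted, since both queried positions are uniform."""
    y = [rng.randint(0, 1) for _ in range(n)]
    y_flip = y[:]
    y_flip[i] ^= 1  # flip coordinate i
    idx = lambda v: int("".join(map(str, v)), 2)  # tuple -> codeword index
    return codeword[idx(y)] ^ codeword[idx(y_flip)]
```

The exponential codeword length (2^n) of the Hadamard code is exactly the kind of trade-off between query complexity and codeword length that the abstract describes as poorly understood.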
Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity
Submodular optimization generalizes many classic problems in combinatorial
optimization and has recently found a wide range of applications in machine
learning (e.g., feature engineering and active learning). For many large-scale
optimization problems, we are often concerned with the adaptivity complexity of
an algorithm, which quantifies the number of sequential rounds where
polynomially-many independent function evaluations can be executed in parallel.
While low adaptivity is ideal, it is not sufficient for a distributed algorithm
to be efficient, since in many practical applications of submodular
optimization the number of function evaluations becomes prohibitively
expensive. Motivated by these applications, we study the adaptivity and query
complexity of adaptive submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone
submodular function with a cardinality constraint $k$ that achieves a
$(1 - 1/e - \epsilon)$-approximation in expectation. This algorithm runs in
$O(\log n)$ adaptive rounds and makes $O(n)$ calls to the function evaluation
oracle in expectation. The approximation guarantee and query complexity are
optimal, and the adaptivity is nearly optimal. Moreover, the number of queries
is substantially less than in previous works. Last, we extend our results to
the submodular cover problem to demonstrate the generality of our algorithm and
techniques.

Comment: 30 pages, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on
Discrete Algorithms (SODA 2019)
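The function-evaluation oracle model can be made concrete with the classic sequential greedy on a coverage objective. This is a baseline sketch, not the paper's distributed algorithm: it counts oracle calls to show where the roughly $nk$ sequential query cost (and $k$ adaptive rounds) comes from.

```python
def greedy_max_cover(sets, k):
    """Standard greedy for monotone submodular maximization (coverage
    objective) under a cardinality constraint; gives a (1 - 1/e)
    approximation. Each round is one adaptive step, and every marginal-gain
    evaluation is one oracle query, so this baseline uses k rounds and
    about n*k queries in total."""
    chosen, covered, queries = [], set(), 0
    for _ in range(k):
        best, best_gain = None, -1
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            queries += 1
            gain = len(covered | s) - len(covered)  # marginal-value oracle call
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered, queries
```

Low-adaptivity algorithms replace the element-by-element loop with rounds of many parallel, independent oracle queries, which is the regime the abstract targets.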
Quantum query complexity of entropy estimation
Estimation of Shannon and R\'enyi entropies of unknown discrete distributions
is a fundamental problem in statistical property testing and an active research
topic in both theoretical computer science and information theory. Tight bounds
on the number of samples to estimate these entropies have been established in
the classical setting, while little is known about their quantum counterparts.
In this paper, we give the first quantum algorithms for estimating
$\alpha$-R\'enyi entropies (Shannon entropy being 1-R\'enyi entropy). In
particular, we demonstrate a quadratic quantum speedup for Shannon entropy
estimation and a generic quantum speedup for $\alpha$-R\'enyi entropy
estimation for all $\alpha \geq 0$, including a tight bound for the
collision-entropy (2-R\'enyi entropy). We also provide quantum upper bounds for
extreme cases such as the Hartley entropy (i.e., the logarithm of the support
size of a distribution, corresponding to $\alpha = 0$) and the min-entropy case
(i.e., $\alpha = +\infty$), as well as the Kullback-Leibler divergence between
two distributions. Moreover, we complement our results with quantum lower
bounds on $\alpha$-R\'enyi entropy estimation for all $\alpha \geq 0$.

Comment: 43 pages, 1 figure
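As a point of comparison for the quantum speedups, the classical baseline is a plug-in estimator that computes empirical frequencies from i.i.d. samples. A minimal sketch (base-2 logarithms assumed; function name illustrative):

```python
import math
from collections import Counter

def plugin_entropies(samples, alpha=2.0):
    """Classical plug-in estimators: empirical Shannon entropy
    H = -sum p_i log2 p_i and alpha-Renyi entropy
    H_a = log2(sum p_i^a) / (1 - alpha), computed from empirical
    frequencies. Quantum algorithms aim to estimate the same quantities
    with fewer queries than such sample-based estimators need."""
    n = len(samples)
    probs = [c / n for c in Counter(samples).values()]
    shannon = -sum(p * math.log2(p) for p in probs)
    renyi = math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)
    return shannon, renyi

# For a fair coin sample, both entropies are 1 bit.
# plugin_entropies([0, 0, 1, 1], alpha=2.0) -> (1.0, 1.0)
```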
Memory vectors for similarity search in high-dimensional spaces
We study an indexing architecture to store and search in a database of
high-dimensional vectors from the perspective of statistical signal processing
and decision theory. This architecture is composed of several memory units,
each of which summarizes a fraction of the database by a single representative
vector. The potential similarity of the query to one of the vectors stored in
the memory unit is gauged by a simple correlation with the memory unit's
representative vector. This representative optimizes the test of the following
hypothesis: the query is independent of any vector in the memory unit vs. the
query is a simple perturbation of one of the stored vectors.
Compared to exhaustive search, our approach finds the most similar database
vectors significantly faster without a noticeable reduction in search quality.
Interestingly, the reduction of complexity is provably better in
high-dimensional spaces. We empirically demonstrate its practical interest in a
large-scale image search scenario with off-the-shelf state-of-the-art
descriptors.

Comment: Accepted to IEEE Transactions on Big Data
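A minimal sketch of the architecture described above, assuming the simple "sum" construction of the representative vector (the paper also analyzes other constructions) and illustrative function names:

```python
import numpy as np

def build_units(db, unit_size):
    """Split the database of unit-norm vectors into memory units; each
    unit's representative is the sum of its members (the simple 'sum'
    construction; a pseudo-inverse-based variant is an alternative)."""
    units = [db[i:i + unit_size] for i in range(0, len(db), unit_size)]
    reps = [u.sum(axis=0) for u in units]
    return units, reps

def search(query, units, reps, top_units=2):
    """Correlate the query with each representative (one dot product per
    unit), then exhaustively scan only the highest-scoring units. Returns
    the best (similarity, (unit, index_in_unit)) pair found."""
    scores = [rep @ query for rep in reps]
    best_units = np.argsort(scores)[::-1][:top_units]
    candidates = [(float(v @ query), (int(u), i))
                  for u in best_units for i, v in enumerate(units[u])]
    return max(candidates)
```

The complexity saving comes from replacing one dot product per database vector with one per memory unit, plus a small number of exhaustive unit scans.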
Weak Parity
We study the query complexity of Weak Parity: the problem of computing the
parity of an n-bit input string, where one only has to succeed on a 1/2+eps
fraction of input strings, but must do so with high probability on those inputs
where one does succeed. It is well-known that n randomized queries and n/2
quantum queries are needed to compute parity on all inputs. But surprisingly,
we give a randomized algorithm for Weak Parity that makes only
O(n/log^0.246(1/eps)) queries, as well as a quantum algorithm that makes only
O(n/sqrt(log(1/eps))) queries. We also prove a lower bound of
Omega(n/log(1/eps)) in both cases; and using extremal combinatorics, prove
lower bounds of Omega(log n) in the randomized case and Omega(sqrt(log n)) in
the quantum case for any eps>0. We show that improving our lower bounds is
intimately related to two longstanding open problems about Boolean functions:
the Sensitivity Conjecture, and the relationships between query complexity and
polynomial degree.

Comment: 18 pages
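The success criterion can be sanity-checked by brute force on tiny inputs: a strategy that queries only k of the n bits and outputs their parity is correct on exactly half of all inputs, so any advantage eps > 0 is a genuine requirement. A small illustrative sketch (function name hypothetical):

```python
from itertools import product

def success_fraction(n, k):
    """Fraction of n-bit inputs on which the strategy 'query the first k
    bits and output their parity' computes the true parity. The guess is
    right exactly when the unqueried n-k bits have even parity, which
    happens on a 1/2 fraction of inputs whenever k < n."""
    hits = sum((sum(x[:k]) % 2) == (sum(x) % 2)
               for x in product((0, 1), repeat=n))
    return hits / 2 ** n

# success_fraction(4, 2) -> 0.5
```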
Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Submodular maximization is a general optimization problem with a wide range
of applications in machine learning (e.g., active learning, clustering, and
feature selection). In large-scale optimization, the parallel running time of
an algorithm is governed by its adaptivity, which measures the number of
sequential rounds needed if the algorithm can execute polynomially-many
independent oracle queries in parallel. While low adaptivity is ideal, it is
not sufficient for an algorithm to be efficient in practice---there are many
applications of distributed submodular optimization where the number of
function evaluations becomes prohibitively expensive. Motivated by these
applications, we study the adaptivity and query complexity of submodular
maximization. In this paper, we give the first constant-factor approximation
algorithm for maximizing a non-monotone submodular function subject to a
cardinality constraint $k$ that runs in $O(\log n)$ adaptive rounds and makes
$O(n \log k)$ oracle queries in expectation. In our empirical study, we use
three real-world applications to compare our algorithm with several benchmarks
for non-monotone submodular maximization. The results demonstrate that our
algorithm finds competitive solutions using significantly fewer rounds and
queries.

Comment: 12 pages, 8 figures
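For contrast with low-adaptivity methods, a fully sequential classic baseline for non-monotone objectives is the random greedy of Buchbinder et al., which gives a constant-factor guarantee but needs k adaptive rounds. A minimal sketch (illustrative, not the paper's algorithm):

```python
import random

def random_greedy(f, ground, k, rng):
    """Random greedy for non-monotone submodular maximization under a
    cardinality constraint (Buchbinder et al.): in each of k rounds,
    evaluate all marginal gains, then pick uniformly at random among the
    k largest. Skipping non-improving picks is a common practical tweak.
    Each round depends on the previous one, so adaptivity is k."""
    S = set()
    for _ in range(k):
        gains = sorted(((f(S | {e}) - f(S), e) for e in ground - S),
                       reverse=True)[:k]
        gain, e = rng.choice(gains)
        if gain > 0:  # practical tweak: only add improving elements
            S.add(e)
    return S
```

The randomization over the top-k candidates is what protects against the non-monotone case, where a greedy choice can lock in elements whose later marginal values turn negative.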
A Local Algorithm for the Sparse Spanning Graph Problem
Constructing a sparse spanning subgraph is a fundamental primitive in graph
theory. In this paper, we study this problem in the Centralized Local model,
where the goal is to decide whether an edge is part of the spanning subgraph by
examining only a small part of the input; yet, answers must be globally
consistent and independent of prior queries.
Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees,
cannot be constructed efficiently in this model. Therefore, we settle for a
spanning subgraph containing at most $(1+\epsilon)n$ edges (where $n$ is the
number of vertices and $\epsilon$ is a given approximation/sparsity
parameter). We achieve a query complexity of
$\tilde{O}(n^{2/3} \cdot \mathrm{poly}(\Delta, 1/\epsilon))$ (the
$\tilde{O}$-notation hides polylogarithmic factors in $n$), where $\Delta$ is
the maximum degree of the input graph. Our algorithm is the first to do so on
arbitrary bounded degree graphs.
graphs. Moreover, we achieve the additional property that our algorithm outputs
a spanner, i.e., distances are approximately preserved. With high probability,
for each deleted edge there is a short path in the output that connects its
endpoints.
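To make the spanner property concrete, here is the classic global greedy spanner construction (Althofer et al.) for unweighted graphs, shown only as a baseline: it inspects the whole graph, whereas a Centralized Local algorithm must answer per-edge queries after probing only a small neighborhood. Function names are illustrative.

```python
from collections import deque

def greedy_spanner(n, edges, t):
    """Classic global greedy t-spanner for an unweighted graph on vertices
    0..n-1: keep an edge only if its endpoints are currently more than t
    hops apart in the subgraph built so far. Every kept edge therefore has
    a replacement path of at most t hops once it is skipped."""
    adj = [[] for _ in range(n)]
    kept = []
    for u, v in edges:
        if bfs_dist(adj, u, v, t) > t:  # hop distance in current spanner
            kept.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return kept

def bfs_dist(adj, s, goal, cap):
    """BFS truncated at depth cap; returns cap + 1 if goal is farther."""
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if x == goal:
            return dist[x]
        if dist[x] == cap:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return cap + 1
```

On a triangle with t = 2, the third edge is dropped because a 2-hop path already connects its endpoints; the output stays connected, which is the spanning-subgraph guarantee.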