    Query-Efficient Algorithms to Find the Unique Nash Equilibrium in a Two-Player Zero-Sum Matrix Game

    We study the query complexity of identifying Nash equilibria in two-player zero-sum matrix games. Grigoriadis and Khachiyan (1995) showed that any deterministic algorithm needs to query $\Omega(n^2)$ entries of an $n \times n$ input matrix in the worst case in order to compute an $\varepsilon$-approximate Nash equilibrium, where $\varepsilon < \frac{1}{2}$. Moreover, they designed a randomized algorithm that queries $\mathcal{O}(\frac{n \log n}{\varepsilon^2})$ entries from the input matrix in expectation and returns an $\varepsilon$-approximate Nash equilibrium when the entries of the matrix are bounded between $-1$ and $1$. However, these two results do not completely characterize the query complexity of finding an exact Nash equilibrium in two-player zero-sum matrix games. In this work, we characterize the query complexity of finding an exact Nash equilibrium for two-player zero-sum matrix games that have a unique Nash equilibrium $(x_\star, y_\star)$. We first show that any randomized algorithm needs to query $\Omega(nk)$ entries of the input matrix $A \in \mathbb{R}^{n \times n}$ in expectation in order to find the unique Nash equilibrium, where $k = |\text{supp}(x_\star)|$. We complement this lower bound by presenting a simple randomized algorithm that, with probability $1 - \delta$, returns the unique Nash equilibrium by querying at most $\mathcal{O}(nk^4 \cdot \text{polylog}(\frac{n}{\delta}))$ entries of the input matrix $A \in \mathbb{R}^{n \times n}$. In the special case when the unique Nash equilibrium is a pure-strategy Nash equilibrium (PSNE), we design a simple deterministic algorithm that finds the PSNE by querying at most $\mathcal{O}(n)$ entries of the input matrix. Comment: 17 pages
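
    The PSNE case is easy to make concrete. Below is a hedged sketch, not the paper's $\mathcal{O}(n)$ search algorithm: it only verifies whether a given candidate entry $(i, j)$ is a saddle point, i.e., a pure-strategy Nash equilibrium, using $2n - 1$ queries. The `query` oracle and the maximizing-row/minimizing-column convention are assumptions for the example.

        def is_saddle_point(query, n, i, j):
            # Hedged sketch: checks whether entry (i, j) of an n x n payoff
            # matrix is a pure-strategy Nash equilibrium (saddle point).
            # Assumed convention: the row player maximizes A[i, j] and the
            # column player minimizes it, so a saddle point is simultaneously
            # the maximum of its column and the minimum of its row.
            a_ij = query(i, j)
            if any(query(r, j) > a_ij for r in range(n) if r != i):
                return False  # a row deviation improves the row player's payoff
            if any(query(i, c) < a_ij for c in range(n) if c != j):
                return False  # a column deviation improves the column player's payoff
            return True

        # Toy usage: A[1][0] = 3 is the maximum of column 0 and the minimum
        # of row 1, hence a saddle point.
        A = [[1, 4], [3, 5]]
        assert is_saddle_point(lambda r, c: A[r][c], 2, 1, 0)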

    Outlaw distributions and locally decodable codes

    Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In this work, we give a new characterization of LDCs using distributions over Boolean functions whose expectation is hard to approximate (in $L_\infty$ norm) with a small number of samples. We coin the term `outlaw distributions' for such distributions since they `defy' the Law of Large Numbers. We show that the existence of outlaw distributions over sufficiently `smooth' functions implies the existence of constant query LDCs and vice versa. We give several candidates for outlaw distributions over smooth functions coming from finite field incidence geometry, additive combinatorics and from hypergraph (non)expanders. We also prove a useful lemma showing that (smooth) LDCs which are only required to work on average over a random message and a random message index can be turned into true LDCs at the cost of only constant factors in the parameters. Comment: A preliminary version of this paper appeared in the proceedings of ITCS 2017.
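
    To make the sampling picture concrete, here is a hedged sketch of the empirical estimate that the Law of Large Numbers governs: drawing a few functions from a distribution, averaging them pointwise, and measuring the worst-case ($L_\infty$) deviation from the true mean function. An outlaw distribution, in the paper's sense, is one where this error stays large unless the number of samples is huge; the random tame distribution below is only a placeholder assumption.

        import random

        # Placeholder distribution: 100 random +/-1-valued functions on a
        # 16-point domain, with their true pointwise mean function.
        domain = range(16)
        funcs = [[random.choice([-1, 1]) for _ in domain] for _ in range(100)]
        mean = [sum(f[x] for f in funcs) / len(funcs) for x in domain]

        def linf_sample_error(t):
            # Average t i.i.d. draws pointwise; report the L-infinity
            # deviation from the true mean function over the domain.
            sampled = random.choices(funcs, k=t)
            est = [sum(f[x] for f in sampled) / t for x in domain]
            return max(abs(est[x] - mean[x]) for x in domain)

        # For this tame distribution the error typically shrinks with t;
        # an outlaw distribution is one where it refuses to.
        print(linf_sample_error(5), linf_sample_error(500))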

    Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity

    Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds where polynomially-many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization. Our main result is a distributed algorithm for maximizing a monotone submodular function subject to a cardinality constraint $k$ that achieves a $(1-1/e-\varepsilon)$-approximation in expectation. This algorithm runs in $O(\log n)$ adaptive rounds and makes $O(n)$ calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially less than in previous works. Last, we extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques. Comment: 30 pages, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2019)
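
    For contrast with the paper's low-adaptivity algorithm, here is a hedged sketch of the classical greedy baseline it improves on: greedy also achieves a $(1-1/e)$-approximation for monotone submodular maximization under a cardinality constraint, but it is fully sequential ($k$ adaptive rounds) and spends $O(nk)$ oracle calls. The coverage oracle used below is an illustrative assumption, not from the paper.

        def greedy(oracle, ground_set, k):
            # Classical greedy for monotone submodular maximization: k fully
            # sequential rounds, each scanning all remaining elements, hence
            # O(nk) oracle calls and k adaptive rounds (what the paper cuts
            # to O(log n) rounds and O(n) expected queries).
            S = set()
            for _ in range(k):
                best = max(ground_set - S,
                           key=lambda e: oracle(S | {e}) - oracle(S))
                S.add(best)
            return S

        # Illustrative oracle: set coverage (monotone and submodular).
        sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d", "e"}}
        coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
        print(greedy(coverage, set(sets), 2))  # {4, 2}, covering all 5 items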

    Quantum query complexity of entropy estimation

    Estimation of Shannon and Rényi entropies of unknown discrete distributions is a fundamental problem in statistical property testing and an active research topic in both theoretical computer science and information theory. Tight bounds on the number of samples needed to estimate these entropies have been established in the classical setting, while little is known about their quantum counterparts. In this paper, we give the first quantum algorithms for estimating $\alpha$-Rényi entropies (Shannon entropy being the 1-Rényi entropy). In particular, we demonstrate a quadratic quantum speedup for Shannon entropy estimation and a generic quantum speedup for $\alpha$-Rényi entropy estimation for all $\alpha \geq 0$, including a tight bound for the collision entropy (2-Rényi entropy). We also provide quantum upper bounds for extreme cases such as the Hartley entropy (i.e., the logarithm of the support size of a distribution, corresponding to $\alpha = 0$) and the min-entropy case (i.e., $\alpha = +\infty$), as well as the Kullback-Leibler divergence between two distributions. Moreover, we complement our results with quantum lower bounds on $\alpha$-Rényi entropy estimation for all $\alpha \geq 0$. Comment: 43 pages, 1 figure
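
    For reference, the quantities being estimated are classical functionals of the distribution, $H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha$ with the usual limiting cases. A hedged sketch of these definitions in code, computed in nats; the distribution `p` is an assumed example:

        import math

        def renyi_entropy(p, alpha):
            # H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha); the alpha -> 1
            # limit is Shannon entropy, alpha = 0 is Hartley entropy (log of
            # the support size), and alpha = inf is min-entropy (-log max p_i).
            if alpha == 0:
                return math.log(sum(1 for pi in p if pi > 0))
            if alpha == 1:
                return -sum(pi * math.log(pi) for pi in p if pi > 0)
            if alpha == math.inf:
                return -math.log(max(p))
            return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

        p = [0.5, 0.25, 0.25]
        print(renyi_entropy(p, 1))         # Shannon: 1.5 * ln 2
        print(renyi_entropy(p, 2))         # collision entropy: -ln(0.375)
        print(renyi_entropy(p, math.inf))  # min-entropy: ln 2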

    Memory vectors for similarity search in high-dimensional spaces

    We study an indexing architecture for storing and searching a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in a memory unit is gauged by a simple correlation with the memory unit's representative vector. This representative optimizes a test between the following hypotheses: the query is independent of every vector in the memory unit vs. the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction in complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors. Comment: Accepted to IEEE Transactions on Big Data
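
    A hedged sketch of the scoring step, using the simplest choice of representative (the sum of a unit's vectors; the paper also considers optimized constructions, e.g., via a pseudo-inverse): a query is correlated against one representative per memory unit, and only the highest-scoring units are searched exhaustively. The sizes, seed, and `n_probe` parameter here are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        d, per_unit, n_units = 64, 100, 50
        units = [rng.standard_normal((per_unit, d)) for _ in range(n_units)]
        units = [u / np.linalg.norm(u, axis=1, keepdims=True) for u in units]

        # Sum representative: one vector summarizing each memory unit.
        reps = np.stack([u.sum(axis=0) for u in units])

        def search(query, n_probe=5):
            # Rank units by correlation with their representative, then
            # search only the n_probe most promising units exhaustively.
            scores = reps @ query
            best_units = np.argsort(-scores)[:n_probe]
            cands = [(i, j, units[i][j] @ query)
                     for i in best_units for j in range(per_unit)]
            return max(cands, key=lambda t: t[2])  # (unit, index, similarity)

        q = units[7][3] + 0.1 * rng.standard_normal(d)  # noisy stored vector
        print(search(q / np.linalg.norm(q)))  # likely recovers unit 7, index 3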

    Weak Parity

    We study the query complexity of Weak Parity: the problem of computing the parity of an n-bit input string, where one only has to succeed on a 1/2+eps fraction of input strings, but must do so with high probability on those inputs where one does succeed. It is well known that n randomized queries and n/2 quantum queries are needed to compute parity on all inputs. But surprisingly, we give a randomized algorithm for Weak Parity that makes only O(n/log^0.246(1/eps)) queries, as well as a quantum algorithm that makes only O(n/sqrt(log(1/eps))) queries. We also prove a lower bound of Omega(n/log(1/eps)) in both cases, and, using extremal combinatorics, prove lower bounds of Omega(log n) in the randomized case and Omega(sqrt(log n)) in the quantum case for any eps>0. We show that improving our lower bounds is intimately related to two longstanding open problems about Boolean functions: the Sensitivity Conjecture and the relationships between query complexity and polynomial degree. Comment: 18 pages
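
    To see why any advantage is nontrivial, consider the hedged baseline sketch below, not from the paper: querying all but one bit and outputting the parity of what was seen is correct on exactly the half of inputs whose unqueried bit is 0. Weak Parity asks to beat that 1/2 fraction by eps while using asymptotically fewer than n queries.

        from itertools import product

        def baseline(x):
            # Query the first n-1 bits and guess that the unseen bit is 0:
            # the output equals the true parity iff x[-1] == 0, i.e., on
            # exactly half of all n-bit inputs, using n-1 queries.
            return sum(x[:-1]) % 2

        n = 6
        correct = sum(baseline(x) == sum(x) % 2
                      for x in product([0, 1], repeat=n))
        print(correct / 2 ** n)  # exactly 0.5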

    Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity

    Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially-many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice: there are many applications of distributed submodular optimization where the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint $k$ that runs in $O(\log n)$ adaptive rounds and makes $O(n \log k)$ oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries. Comment: 12 pages, 8 figures
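
    A hedged sketch of a well-known sequential baseline for this setting, the random greedy of Buchbinder et al. (2014), which attains a 1/e-approximation for non-monotone submodular maximization under a cardinality constraint but needs k adaptive rounds; the paper's contribution is a constant factor in O(log n) rounds. The cut-function oracle is an illustrative assumption.

        import random

        def random_greedy(oracle, ground_set, k):
            # Random greedy (Buchbinder et al. 2014): in each of k sequential
            # rounds, rank remaining elements by marginal gain and add one of
            # the top k uniformly at random; the randomization is what keeps
            # a constant factor when the function is non-monotone. (The
            # analyzed version also pads with dummy elements; omitted here.)
            S = set()
            for _ in range(k):
                ranked = sorted(ground_set - S,
                                key=lambda e: oracle(S | {e}) - oracle(S),
                                reverse=True)
                S.add(random.choice(ranked[:k]))
            return S

        # Illustrative non-monotone submodular oracle: a graph cut function.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
        S = random_greedy(cut, {0, 1, 2, 3}, 2)
        print(S, cut(S))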

    A Local Algorithm for the Sparse Spanning Graph Problem

    Constructing a sparse spanning subgraph is a fundamental primitive in graph theory. In this paper, we study this problem in the Centralized Local model, where the goal is to decide whether an edge is part of the spanning subgraph by examining only a small part of the input; yet, answers must be globally consistent and independent of prior queries. Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. Therefore, we settle for a spanning subgraph containing at most $(1+\varepsilon)n$ edges, where $n$ is the number of vertices and $\varepsilon$ is a given approximation/sparsity parameter. We achieve a query complexity of $\tilde{O}(\mathrm{poly}(\Delta/\varepsilon)\, n^{2/3})$, where $\Delta$ is the maximum degree of the input graph and the $\tilde{O}$-notation hides polylogarithmic factors in $n$. Our algorithm is the first to do so on arbitrary bounded-degree graphs. Moreover, we achieve the additional property that our algorithm outputs a spanner, i.e., distances are approximately preserved. With high probability, for each deleted edge there is a path of $O(\mathrm{poly}(\Delta/\varepsilon)\log^2 n)$ hops in the output that connects its endpoints.
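
    For intuition about the spanner guarantee, here is a hedged sketch of the classical global greedy spanner, not the paper's local algorithm: an edge is kept only if its endpoints are not already connected by a short path among kept edges, so every deleted edge has a replacement path of at most `stretch` hops. The paper achieves a comparable per-edge guarantee while examining only a small neighborhood per query.

        from collections import deque

        def greedy_spanner(n, edges, stretch):
            # Keep edge (u, v) only if u and v are more than `stretch` hops
            # apart in the subgraph kept so far; every discarded edge then
            # has a replacement path of at most `stretch` hops in the output.
            adj = [[] for _ in range(n)]
            kept = []
            for u, v in edges:
                if bfs_dist(adj, u, v, stretch) > stretch:
                    kept.append((u, v))
                    adj[u].append(v)
                    adj[v].append(u)
            return kept

        def bfs_dist(adj, src, dst, limit):
            # Hop distance from src to dst, truncated at `limit`.
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                if u == dst:
                    return dist[u]
                if dist[u] == limit:
                    continue
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        q.append(w)
            return limit + 1

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
        print(greedy_spanner(4, edges, 3))  # drops chords whose endpoints stay close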