
    Lower Bounds on Time-Space Trade-Offs for Approximate Near Neighbors

    We show tight lower bounds for the entire trade-off between space and query time for the Approximate Near Neighbor search problem. Our lower bounds hold in a restricted model of computation which captures all hashing-based approaches. In particular, our lower bound matches the upper bound recently shown in [Laarhoven 2015] for the random instance on a Euclidean sphere (which we show in fact extends to the entire space R^d using the techniques from [Andoni, Razenshteyn 2015]). We also show tight, unconditional cell-probe lower bounds for one and two probes, improving upon the best known bounds from [Panigrahy, Talwar, Wieder 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than for one probe. To show the result for two probes, we establish and exploit a connection to locally-decodable codes. Comment: 47 pages, 2 figures; v2: substantially revised introduction, lots of small corrections; subsumed by arXiv:1608.03580 [cs.DS] (along with arXiv:1511.07527 [cs.DS])

    Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs

    We introduce a new model for testing graph properties, which we call the rejection sampling model. We show that testing bipartiteness of n-node graphs using rejection sampling queries requires Omega~(n^2) queries. Via reductions from the rejection sampling model, we give three new lower bounds for tolerant testing of Boolean functions of the form f : {0,1}^n -> {0,1}:
    - Tolerant k-junta testing with non-adaptive queries requires Omega~(k^2) queries.
    - Tolerant unateness testing requires Omega~(n) queries.
    - Tolerant unateness testing with non-adaptive queries requires Omega~(n^{3/2}) queries.
    Given the O~(k^{3/2})-query non-adaptive junta tester of Blais [Eric Blais, 2008], we conclude that non-adaptive tolerant junta testing requires more queries than non-tolerant junta testing. In addition, given the O~(n^{3/4})-query unateness tester of Chen, Waingarten, and Xie [Xi Chen et al., 2017] and the O~(n)-query non-adaptive unateness tester of Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri [Roksana Baleshzar et al., 2017], we conclude that tolerant unateness testing requires more queries than non-tolerant unateness testing, in both adaptive and non-adaptive settings. These lower bounds provide the first separation between tolerant and non-tolerant testing for a natural property of Boolean functions.

    Four is the Cosmic Number

    Thirteen, eight, five, four, four is the cosmic number
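    The abstract's chain appears to be the classic letter-count riddle: map each number to the number of letters in its English name, and the iteration ends at four, the unique fixed point ("four" has four letters). A minimal sketch of that iteration, where the NAMES table is an illustrative fragment covering only this chain rather than a general number-to-words converter:

    ```python
    # Letter counts of English number names: "thirteen" -> 8, "eight" -> 5,
    # "five" -> 4, "four" -> 4, so 4 is a fixed point. NAMES covers only the
    # chain from the abstract; it is not a general number-to-words converter.
    NAMES = {13: "thirteen", 8: "eight", 5: "five", 4: "four"}

    def step(n):
        """Map n to the number of letters in its English name."""
        return len(NAMES[n])

    def cosmic_chain(start):
        """Iterate step() until the fixed point, then repeat it once."""
        seq = [start]
        while seq[-1] != step(seq[-1]):
            seq.append(step(seq[-1]))
        seq.append(step(seq[-1]))  # show the fixed point repeating: "four, four"
        return seq

    print(cosmic_chain(13))  # [13, 8, 5, 4, 4]
    ```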

    Playing Dominoes Is Hard, Except by Yourself

    Dominoes is a popular and well-known game, possibly dating back three millennia. Players are given a set of domino tiles, each with two labeled square faces, and take turns connecting them into a growing chain of dominoes by matching identical faces. We show that single-player dominoes is in P, while multiplayer dominoes is hard: when players cooperate, the game is NP-complete, and when players compete, the game is PSPACE-complete. In addition, we show that these hardness results easily extend to games involving team play.

    A Quasi-Monte Carlo Data Structure for Smooth Kernel Evaluations

    In the kernel density estimation (KDE) problem one is given a kernel K(x, y) and a dataset P of points in a Euclidean space, and must prepare a data structure that can quickly answer density queries: given a point q, output a (1+eps)-approximation to mu := (1/|P|) sum_{p in P} K(p, q). The classical approach to KDE is the celebrated fast multipole method of [Greengard and Rokhlin]. The fast multipole method combines a basic space partitioning approach with a multidimensional Taylor expansion, which yields a ~log^d(n/eps) query time (exponential in the dimension d). A recent line of work initiated by [Charikar and Siminelakis] achieved polynomial dependence on d via a combination of random sampling and randomized space partitioning, with [Backurs et al.] giving an efficient data structure with ~poly(log(1/mu))/eps^2 query time for smooth kernels. Quadratic dependence on eps, inherent to the sampling methods, is prohibitively expensive for small eps. This issue is addressed by quasi-Monte Carlo methods in numerical analysis. The high-level idea in quasi-Monte Carlo methods is to replace random sampling with a discrepancy-based approach -- an idea recently applied to coresets for KDE by [Phillips and Tai]. The work of Phillips and Tai gives a space-efficient data structure with query complexity ~1/(eps * mu). This is polynomially better in 1/eps, but exponentially worse in 1/mu. We achieve the best of both: a data structure with ~poly(log(1/mu))/eps query time for smooth kernel KDE. Our main insight is a new way to combine discrepancy theory with randomized space partitioning inspired by, but significantly more efficient than, that of the fast multipole methods. We hope that our techniques will find further applications to linear algebra for kernel matrices.
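    The quantity mu that these data structures approximate can be written out directly. A minimal sketch of the exact evaluator and the naive Monte Carlo estimator the abstract contrasts, with the Gaussian kernel, the bandwidth, and all function names chosen for illustration (the paper's actual data structure is far more involved):

    ```python
    import math
    import random

    def gaussian_kernel(p, q, bandwidth=1.0):
        # One example of a smooth kernel: K(p, q) = exp(-||p - q||^2 / bw^2).
        dist2 = sum((pi - qi) ** 2 for pi, qi in zip(p, q))
        return math.exp(-dist2 / bandwidth ** 2)

    def density(P, q, kernel=gaussian_kernel):
        # Exact mu := (1/|P|) * sum over p in P of K(p, q) -- the quantity the
        # data structures (1+eps)-approximate. Costs O(|P|) time per query.
        return sum(kernel(p, q) for p in P) / len(P)

    def sampled_density(P, q, m, kernel=gaussian_kernel, seed=0):
        # Monte Carlo estimate from m uniform samples of P. Its variance scales
        # like mu/m, so a (1+eps) guarantee needs m ~ 1/(eps^2 * mu) samples --
        # the quadratic eps-dependence that quasi-Monte Carlo methods avoid.
        rng = random.Random(seed)
        return sum(kernel(rng.choice(P), q) for _ in range(m)) / m
    ```

    With P = [(0, 0), (1, 0)] and q = (0, 0), the exact density is (1 + e^{-1})/2, and the sampled estimate converges to it as m grows.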