
    Approximate Bregman near neighbors in sublinear time: beyond the triangle inequality

    pre-print
    Bregman divergences are important distance measures used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures is central to these applications and has been the subject of extensive study, but is problematic because these distance measures lack metric properties such as symmetry and the triangle inequality. In this paper, we present the first provably approximate nearest-neighbor (ANN) algorithms for Bregman divergences; these process queries in $O(\log n)$ time in fixed-dimensional spaces. We also obtain $\mathrm{polylog}(n)$ bounds for a more abstract class of distance measures (containing the Bregman divergences) that satisfy certain structural properties. Both of these bounds apply to the regular asymmetric Bregman divergences as well as their symmetrized versions. To do so, we develop two geometric properties vital to our analysis: a reverse triangle inequality (RTI) and a relaxed triangle inequality called m-defectiveness, where m is a domain-dependent parameter. Bregman divergences satisfy the RTI but not m-defectiveness; however, we show that the square root of a Bregman divergence does satisfy m-defectiveness. This allows us to utilize both properties in an efficient search data structure that follows the general two-stage paradigm of a ring-tree decomposition followed by a quadtree search, as used in previous near-neighbor algorithms for Euclidean space and spaces of bounded doubling dimension. Our first algorithm resolves a query for a d-dimensional $(1+\epsilon)$-ANN in $O\left(\frac{\log n}{\epsilon}\right)^{O(d)}$ time and $O(n \log^{d-1} n)$ space, and holds for generic m-defective distance measures satisfying an RTI. Our second algorithm is more specific in analysis to the Bregman divergences and uses a further structural constant: the maximum ratio of second derivatives over each dimension of the domain, $c_0$. This allows us to locate a $(1+\epsilon)$-ANN in $O(\log n)$ time and $O(n)$ space, with a further $(c_0)^d$ factor in the big-O for the query time.
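    To make the key property concrete, here is a minimal Python sketch (not from the paper): it computes a Bregman divergence from its convex generator and empirically probes m-defectiveness of the divergence's square root. The choice of generator (negative entropy, giving the generalized KL divergence) and the sampling domain are illustrative assumptions.

```python
import numpy as np

def kl_divergence(x, y):
    """Bregman divergence of phi(x) = sum x_i log x_i: the generalized KL divergence."""
    return np.sum(x * np.log(x / y) - x + y)

# Empirically probe the paper's key observation: the square root of a Bregman
# divergence behaves like an m-defective distance, i.e.
#   |d(q, x) - d(q, y)| <= m * d(x, y)
# for some domain-dependent constant m, provided the domain stays away from the
# boundary (here: coordinates in [0.1, 1.0], an illustrative assumption).
rng = np.random.default_rng(0)
worst_ratio = 0.0
for _ in range(10_000):
    q, x, y = rng.uniform(0.1, 1.0, size=(3, 4))
    dqx = np.sqrt(kl_divergence(q, x))
    dqy = np.sqrt(kl_divergence(q, y))
    dxy = np.sqrt(kl_divergence(x, y))
    worst_ratio = max(worst_ratio, abs(dqx - dqy) / dxy)

print(f"empirical m over sampled triples: {worst_ratio:.3f}")
```

    The observed ratio stays bounded on this restricted domain, consistent with m being domain-dependent; it degrades as points approach the boundary, which is exactly why the parameter enters the paper's bounds.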

    Doctor of Philosophy in Computing

    dissertation
    In the last two decades, an increasingly large amount of data has become available. Massive collections of videos, astronomical observations, social networking posts, network routing information, mobile location history, and so forth are examples of real-world data requiring processing for applications ranging from classification to prediction. Computational resources grow at a far more constrained rate, hence the need for efficient algorithms that scale well. Over the past twenty years, high-quality theoretical algorithms have been developed for two central problems: nearest neighbor search and dimensionality reduction over Euclidean distances in worst-case distributions. These two tasks are interesting in their own right. Nearest neighbor corresponds to a database query lookup, while dimensionality reduction is a form of compression of massive data. Moreover, both are subroutines in algorithms ranging from clustering to classification. However, many highly relevant settings and distance measures have not received attention comparable to that given to worst-case point sets in Euclidean space. The Bregman divergences include the information-theoretic distances, such as entropy, of most relevance in many machine learning applications, and yet prior to this dissertation they lacked efficient dimensionality reductions, nearest neighbor algorithms, or even lower bounds on what could be possible. Furthermore, even in the Euclidean setting, theoretical algorithms do not leverage the fact that almost all real-world datasets have significant low-dimensional substructure. In this dissertation, we explore different models and techniques for similarity search and dimensionality reduction. What upper bounds can be obtained for nearest neighbors under Bregman divergences? What upper bounds can be achieved for dimensionality reduction for information-theoretic measures? Are these problems intrinsically of harder computational complexity than in the Euclidean setting? Can we improve the state-of-the-art nearest neighbor algorithms for real-world datasets in Euclidean space? These are the questions we investigate in this dissertation, and on which we shed some new insight. In the first part of the dissertation, we focus on Bregman divergences. We exhibit nearest neighbor algorithms, contingent on a distributional constraint on the datasets. We next show lower bounds suggesting that this constraint is in some sense inherent to the problem's complexity. After this, we explore dimensionality reduction techniques for the Jensen-Shannon and Hellinger distances, two popular information-theoretic measures. In the second part, we show that even for the more well-studied Euclidean case, worst-case nearest neighbor algorithms can be improved upon sharply for real-world datasets with spectral structure.
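    As a toy illustration of dimensionality reduction for an information-theoretic measure (not the dissertation's actual construction), the sketch below uses the fact that the Hellinger distance is, up to a constant, the Euclidean distance between square-root vectors, so a standard Johnson-Lindenstrauss random projection approximately preserves it. The dimensions and random seeds are arbitrary.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between probability vectors p and q."""
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

d, k = 1000, 100  # original and reduced dimensions (illustrative)
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(d))  # two random probability distributions
q = rng.dirichlet(np.ones(d))

# The square-root map embeds distributions isometrically (up to the 1/sqrt(2)
# constant) into Euclidean space, so a Gaussian JL projection applies directly.
G = rng.normal(size=(k, d)) / np.sqrt(k)
u, v = G @ np.sqrt(p), G @ np.sqrt(q)

print("exact Hellinger:   ", hellinger(p, q))
print("after projection:  ", np.linalg.norm(u - v) / np.sqrt(2))
```

    The two printed values agree up to the usual $(1 \pm \epsilon)$ JL distortion; handling Jensen-Shannon requires more care, which is part of what the dissertation addresses.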

    Lower Bounds on Time-Space Trade-Offs for Approximate Near Neighbors

    We show tight lower bounds for the entire trade-off between space and query time for the Approximate Near Neighbor search problem. Our lower bounds hold in a restricted model of computation, which captures all hashing-based approaches. In particular, our lower bound matches the upper bound recently shown in [Laarhoven 2015] for the random instance on a Euclidean sphere (which we show in fact extends to the entire space $\mathbb{R}^d$ using the techniques from [Andoni, Razenshteyn 2015]). We also show tight, unconditional cell-probe lower bounds for one and two probes, improving upon the best known bounds from [Panigrahy, Talwar, Wieder 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes that is not polynomially smaller than for one probe. To show the result for two probes, we establish and exploit a connection to locally-decodable codes.
    Comment: 47 pages, 2 figures; v2: substantially revised introduction, lots of small corrections; subsumed by arXiv:1608.03580 [cs.DS] (along with arXiv:1511.07527 [cs.DS])
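    The restricted model is easiest to picture through a concrete hashing-based scheme. The sketch below is a classic hyperplane-LSH toy, not the data structure analyzed in the paper: each hash bit is the sign of a random projection, and nearer points on the sphere collide in more bits, which is the collision-probability behavior the model abstracts.

```python
import numpy as np

rng = np.random.default_rng(2)
d, bits = 128, 16
H = rng.normal(size=(bits, d))  # one random hyperplane per hash bit

def lsh(x):
    """Hash a point to its bucket id: the sign pattern of its projections."""
    return tuple((H @ x) > 0)

# A query on the unit sphere, a perturbed near neighbor, and a random far point.
q = rng.normal(size=d); q /= np.linalg.norm(q)
near = q + 0.1 * rng.normal(size=d); near /= np.linalg.norm(near)
far = rng.normal(size=d); far /= np.linalg.norm(far)

def agreeing_bits(a, b):
    return sum(int(i == j) for i, j in zip(lsh(a), lsh(b)))

# The near point agrees with q on almost all bits; the far point on about half.
print(agreeing_bits(q, near), agreeing_bits(q, far))
```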

    Adaptive Sampling for Geometric Approximation

    Geometric approximation of multi-dimensional data sets is an essential algorithmic component for applications in machine learning, computer graphics, and scientific computing. This dissertation promotes an algorithmic sampling methodology for a number of fundamental approximation problems in computational geometry. For each problem, the proposed sampling technique is carefully adapted to the geometry of the input data and the functions to be approximated. In particular, we study proximity queries in spaces of constant dimension and mesh generation in 3D. We start with polytope membership queries, where query points are tested for inclusion in a convex polytope. Trading off accuracy for efficiency, we tolerate one-sided errors for points within an epsilon-expansion of the polytope. We propose a sampling strategy for the placement of covering ellipsoids sensitive to the local shape of the polytope. The key insight is to realize the samples as Delone sets in the intrinsic Hilbert metric. Using this intrinsic formulation, we considerably simplify state-of-the-art techniques, yielding an intuitive and optimal data structure. Next, we study nearest-neighbor queries, which retrieve the most similar data point to a given query point. To accommodate more general measures of similarity, we consider non-Euclidean distances, including convex distance functions and Bregman divergences. Again, we tolerate multiplicative errors, retrieving any point no farther than (1+epsilon) times the distance to the nearest neighbor. We propose a sampling strategy sensitive to the local distribution of points and the gradient of the distance functions. Combined with a careful regularization of the distance minimizers, we obtain a generalized data structure that essentially matches state-of-the-art results specific to the Euclidean distance. Finally, we investigate the generation of Voronoi meshes, where a given domain is decomposed into Voronoi cells, as desired for a number of important solvers in computational fluid dynamics. The challenge is to arrange the cells near the boundary to yield an accurate surface approximation without sacrificing quality. We propose a sampling algorithm for the placement of seeds to induce a boundary-conforming Voronoi mesh of the correct topology, with a careful treatment of sharp and non-manifold features. The proposed algorithm achieves significant quality improvements over state-of-the-art polyhedral meshing based on clipped Voronoi cells.
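    To make the one-sided error regime of approximate polytope membership concrete, here is a brute-force Python sketch over the facets of a polytope {x : Ax <= b}; the names A, b, and eps are hypothetical, and the dissertation's ellipsoid-based data structure answers such queries far more efficiently than this facet scan.

```python
import numpy as np

def approx_member(A, b, x, eps):
    """Approximate membership in {x : Ax <= b} with one-sided eps error.

    Facet slacks equal Euclidean distances only when the rows of A are
    unit-norm, as they are in the example below (an illustrative assumption).
    """
    slack = A @ x - b
    if np.all(slack <= 0):
        return True    # truly inside: must answer yes
    if np.any(slack > eps):
        return False   # beyond the eps-expansion: must answer no
    return True        # in the eps-fringe: either answer is permitted

# The unit square |x|, |y| <= 1, queried inside, in the fringe, and far outside.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.ones(4)
print(approx_member(A, b, np.array([0.5, 0.5]), 0.01))    # True  (inside)
print(approx_member(A, b, np.array([1.005, 0.5]), 0.01))  # True  (fringe)
print(approx_member(A, b, np.array([2.0, 0.5]), 0.01))    # False (far outside)
```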

    Optimal Hashing-based Time-Space Trade-offs for Approximate Near Neighbors

    [See the paper for the full abstract.] We show tight upper and lower bounds for time-space trade-offs for the $c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean space and $n$-point datasets, we develop a data structure with space $n^{1 + \rho_u + o(1)} + O(dn)$ and query time $n^{\rho_q + o(1)} + d n^{o(1)}$ for every $\rho_u, \rho_q \geq 0$ such that: \begin{equation} c^2 \sqrt{\rho_q} + (c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}. \end{equation} This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor $c > 1$, improving upon [Kapralov, PODS 2015]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on Spherical Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni, Razenshteyn, STOC 2015]. Our matching lower bounds are of two types: conditional and unconditional. First, we prove tightness of the whole above trade-off in a restricted model of computation, which captures all known hashing-based approaches. We then show unconditional cell-probe lower bounds for one and two probes that match the above trade-off for $\rho_q = 0$, improving upon the best known lower bounds from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes that is not polynomially smaller than the one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes.
    Comment: 62 pages, 5 figures; a merger of arXiv:1511.07527 [cs.DS] and arXiv:1605.02701 [cs.DS], which subsumes both of the preprints. The new version contains more elaborate proofs and fixes some typos.
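    To see what the displayed trade-off curve gives at its endpoints, here is a small numeric check (illustrative only, derived directly from the equation above): solving for $\rho_q$ recovers the query exponent $(2c^2-1)/c^4$ at near-linear space ($\rho_u = 0$) and the balanced exponent $\rho_q = \rho_u = 1/(2c^2-1)$.

```python
import numpy as np

def rho_q(c, rho_u):
    """Solve c^2 sqrt(rho_q) + (c^2 - 1) sqrt(rho_u) = sqrt(2c^2 - 1) for rho_q."""
    lhs = np.sqrt(2 * c**2 - 1) - (c**2 - 1) * np.sqrt(rho_u)
    return (lhs / c**2) ** 2

c = 2.0
# Near-linear space (rho_u = 0): query exponent (2c^2 - 1) / c^4.
print(rho_q(c, 0.0), (2 * c**2 - 1) / c**4)
# Balanced regime (rho_q = rho_u): the 1 / (2c^2 - 1) exponent.
rho = 1 / (2 * c**2 - 1)
print(rho_q(c, rho), rho)
```

    Both pairs of printed values coincide, confirming that the single curve interpolates between the near-linear-space and balanced regimes.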