14 research outputs found

    Near Neighbor Search via Efficient Average Distortion Embeddings

    A recent series of papers by Andoni, Naor, Nikolov, Razenshteyn, and Waingarten (STOC 2018, FOCS 2018) has given approximate near neighbour search (NNS) data structures for a wide class of distance metrics, including all norms. In particular, these data structures achieve approximation on the order of p for ℓ_p^d norms, with space complexity nearly linear in the dataset size n and polynomial in the dimension d, and query time sub-linear in n and polynomial in d. The main shortcoming is the pre-processing time required for their construction, which is exponential in d. In this paper, we describe a more direct framework for constructing NNS data structures for general norms. More specifically, we show via an algorithmic reduction that an efficient NNS data structure for a metric M is implied by an efficient average distortion embedding of M into ℓ_1 or into the Euclidean space. In particular, the resulting data structures require only polynomial pre-processing time, as long as the embedding can be computed in polynomial time. As a concrete instantiation of this framework, we give an NNS data structure for ℓ_p with efficient pre-processing that matches the approximation factor, space and query complexity of the aforementioned data structure of Andoni et al. Along the way, we resolve a question of Naor (Analysis and Geometry in Metric Spaces, 2014) and provide an explicit, efficiently computable embedding of ℓ_p, for p ≥ 1, into ℓ_2 with average distortion on the order of p. Furthermore, we also give data structures for Schatten-p spaces with improved space and query complexity, albeit still requiring exponential pre-processing when p ≥ 2. We expect our approach to pave the way for constructing efficient NNS data structures for all norms.
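    To make the shape of the reduction concrete, the following is a minimal sketch: a placeholder embedding maps the dataset and the query into Euclidean space, and a standard Euclidean near neighbour routine is then run on the images. The `mazur_map` stand-in (the classical Mazur map), the brute-force Euclidean search, and the `ReducedNNS` wrapper are illustrative assumptions, not the construction from the paper.

```python
import numpy as np

def mazur_map(x, p):
    """Illustrative stand-in for an l_p -> l_2 embedding: the classical Mazur map
    x -> sign(x) * |x|^(p/2).  This is NOT the paper's construction; it only shows
    where an average distortion embedding plugs into the reduction."""
    return np.sign(x) * np.abs(x) ** (p / 2.0)

class ReducedNNS:
    """NNS for an l_p dataset via the reduction: (1) embed all points into
    Euclidean space in preprocessing, (2) answer a query by embedding it and
    running Euclidean near neighbour search on the images.  Brute force is used
    for the Euclidean step purely for readability."""

    def __init__(self, points, p):
        self.p = p
        self.points = np.asarray(points, dtype=float)
        self.embedded = mazur_map(self.points, p)      # embed once, up front

    def query(self, q):
        qe = mazur_map(np.asarray(q, dtype=float), self.p)
        d2 = np.sum((self.embedded - qe) ** 2, axis=1)  # squared Euclidean distances
        return int(np.argmin(d2))                       # index of the nearest image

# usage: nearest neighbour of a random query among random points under l_4
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 32))
index = ReducedNNS(data, p=4)
print(index.query(rng.standard_normal(32)))
```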

    Accelerating Frank-Wolfe Algorithm using Low-Dimensional and Adaptive Data Structures

    In this paper, we study the problem of speeding up a class of optimization algorithms known as Frank-Wolfe, a conditional gradient method. We develop and employ two novel inner product search data structures, improving on the prior fastest algorithm of [Shrivastava, Song and Xu, NeurIPS 2021].
    * The first data structure uses a low-dimensional random projection to reduce the problem to a lower dimension, then uses an efficient inner product search data structure. It has preprocessing time $\tilde O(nd^{\omega-1}+dn^{1+o(1)})$ and per-iteration cost $\tilde O(d+n^\rho)$ for a small constant $\rho$.
    * The second data structure leverages recent developments in adaptive inner product search data structures that can output estimates of all inner products. It has preprocessing time $\tilde O(nd)$ and per-iteration cost $\tilde O(d+n)$.
    The first algorithm improves on the state of the art (preprocessing time $\tilde O(d^2 n^{1+o(1)})$ and per-iteration cost $\tilde O(dn^\rho)$) in all cases, while the second provides an even faster preprocessing time and is suitable when the number of iterations is small.
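    As a hedged illustration of where such data structures slot in, the sketch below runs Frank-Wolfe on a synthetic least-squares objective over the convex hull of a hypothetical vertex set V. The per-iteration linear minimization step argmin_v <grad f(x), v> is phrased as a maximum inner product search over vertices projected by a Johnson-Lindenstrauss-style map, mirroring the role of the first data structure; the brute-force search over projected vertices is only a stand-in for the sublinear search structures, and all problem data (A, b, V, the step size) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# problem: minimize f(x) = 0.5 * ||A x - b||^2 over conv(V), V = rows of a vertex set
d, m = 64, 500                        # ambient dimension, number of vertices
A = rng.standard_normal((40, d))
b = rng.standard_normal(40)
V = rng.standard_normal((m, d))       # hypothetical vertex set of the feasible polytope

# low-dimensional random projection (Johnson-Lindenstrauss style); this mimics the
# role of the first data structure, which reduces the inner product search to a
# lower dimension before querying an efficient search structure.
k = 16
S = rng.standard_normal((d, k)) / np.sqrt(k)
V_proj = V @ S                        # project the vertices once, in preprocessing

def lmo(grad):
    """Linear minimization oracle phrased as maximum inner product search:
    argmin_v <grad, v> = argmax_v <-grad, v>.  The search over projected
    vertices is brute force here; sublinear inner product search data
    structures would replace this step."""
    scores = V_proj @ (-(grad @ S))   # approximate inner products in the projected space
    return V[int(np.argmax(scores))]

x = V[0].copy()                       # start at an arbitrary vertex
for t in range(200):
    grad = A.T @ (A @ x - b)          # gradient of f at x
    v = lmo(grad)                     # Frank-Wolfe direction endpoint
    gamma = 2.0 / (t + 2.0)           # standard step size
    x = (1 - gamma) * x + gamma * v   # convex combination keeps x feasible

print(0.5 * np.linalg.norm(A @ x - b) ** 2)
```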

    New Directions in Geometric and Applied Knot Theory

    The aim of this book is to present recent results in both theoretical and applied knot theory that are at once stimulating for leading researchers in the field and accessible to non-experts. The book comprises recent research results covering a wide range of sub-disciplines, such as the young field of geometric knot theory and combinatorial knot theory, as well as applications in microbiology and theoretical physics.