11,717 research outputs found

    Hashing for Similarity Search: A Survey

    Similarity search (nearest neighbor search) is the problem of finding, in a large database, the data items whose distances to a query item are the smallest. Various methods have been developed to address this problem, and recently much effort has been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. We divide hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measures, and search schemes in the hash coding space.
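
    As a concrete illustration of the first category, here is a minimal sketch (not taken from the survey; all parameters are arbitrary) of random-hyperplane locality sensitive hashing for cosine similarity, where the hash functions are drawn independently of the data:

```python
import numpy as np

def lsh_hash(x, hyperplanes):
    """Map a vector to a binary code: one bit per random hyperplane."""
    return tuple((hyperplanes @ x > 0).astype(int))

rng = np.random.default_rng(0)
dim, n_bits, n_items = 64, 8, 10_000

# Random hyperplanes define data-independent hash functions.
hyperplanes = rng.standard_normal((n_bits, dim))

# Index the database: bucket items by their binary code.
database = rng.standard_normal((n_items, dim))
buckets = {}
for i, x in enumerate(database):
    buckets.setdefault(lsh_hash(x, hyperplanes), []).append(i)

# Query: only items sharing the query's code are candidates for exact search.
query = rng.standard_normal(dim)
candidates = buckets.get(lsh_hash(query, hyperplanes), [])
print(f"{len(candidates)} candidates out of {n_items} items")
```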

    Spectral Ewald Acceleration of Stokesian Dynamics for polydisperse suspensions

    In this work we develop the Spectral Ewald Accelerated Stokesian Dynamics (SEASD), a novel computational method for dynamic simulations of polydisperse colloidal suspensions with full hydrodynamic interactions. SEASD is based on the framework of Stokesian Dynamics (SD) with an extension to compressible solvents, and uses the Spectral Ewald (SE) method [Lindbo & Tornberg, J. Comput. Phys. 229 (2010) 8994] for the wave-space mobility computation. To meet the performance requirements of dynamic simulations, we use Graphics Processing Units (GPUs) to evaluate the suspension mobility, and achieve an order of magnitude speedup compared to a CPU implementation. For further speedup, we develop a novel far-field block-diagonal preconditioner to reduce the far-field evaluations in the iterative solver, and SEASD-nf, a polydisperse extension of the mean-field Brownian approximation of Banchio & Brady [J. Chem. Phys. 118 (2003) 10323]. We extensively discuss implementation and parameter selection strategies in SEASD, and demonstrate the spectral accuracy in the mobility evaluation and the overall $\mathcal{O}(N \log N)$ computational scaling. We present three computational examples to further validate SEASD and SEASD-nf in monodisperse and bidisperse suspensions: the short-time transport properties, the equilibrium osmotic pressure and viscoelastic moduli, and the steady shear Brownian rheology. Our validation results show that the agreement between SEASD and SEASD-nf is satisfactory over a wide range of parameters, and also provide significant insight into the dynamics of polydisperse colloidal suspensions. Comment: 39 pages, 21 figures
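
    To illustrate the role of a block-diagonal preconditioner in an iterative mobility solve, here is a schematic toy sketch (not the SEASD implementation; the "mobility" matrix and block sizes are invented for illustration), in which each particle's self-mobility block is inverted exactly and supplied to GMRES as a preconditioner:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n_particles, block = 20, 3            # toy: 3 mobility components per particle
n = n_particles * block

# Toy "mobility" matrix: dominant per-particle self blocks plus weak
# symmetric coupling standing in for the far-field interactions.
A = 0.05 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
for p in range(n_particles):
    s = slice(p * block, (p + 1) * block)
    A[s, s] += np.eye(block) * (1.0 + 0.1 * p)

# Block-diagonal preconditioner: invert each self block once, apply blockwise.
block_invs = [np.linalg.inv(A[p * block:(p + 1) * block,
                              p * block:(p + 1) * block])
              for p in range(n_particles)]

def apply_precond(v):
    out = np.empty_like(v)
    for p, Binv in enumerate(block_invs):
        s = slice(p * block, (p + 1) * block)
        out[s] = Binv @ v[s]
    return out

M = LinearOperator((n, n), matvec=apply_precond)
b = rng.standard_normal(n)
x, info = gmres(A, b, M=M, atol=1e-10)
print("residual norm:", np.linalg.norm(A @ x - b), "info:", info)
```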

    Khovanov-Rozansky homology via a canopolis formalism

    In this paper, we describe a canopolis (i.e. categorified planar algebra) formalism for Khovanov and Rozansky's link homology theory. We show how this allows us to organize simplifications in the matrix factorizations appearing in their theory. In particular, it will put the equivalence of the original definition of Khovanov-Rozansky homology and the definition using Soergel bimodules in a more general context, allow us to give a new proof of the invariance of triply graded homology, and give a new analysis of the behavior of triply graded homology under the Reidemeister IIb move. Comment: 24 pages, 7 figures. v3: edited introduction and fixed diagram 1, plus minor changes
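
    For readers unfamiliar with the matrix factorizations that the formalism organizes, the following toy check (not from the paper; the choice n = 3 is arbitrary) verifies the defining relation AB = BA = w·I for the standard 1×1 factorization of the potential w = x^{n+1} − y^{n+1}:

```python
import sympy as sp

x, y = sp.symbols('x y')
n = 3                                    # arbitrary toy choice of the degree parameter
w = x**(n + 1) - y**(n + 1)              # potential of the factorization

# A 1x1 matrix factorization (A, B) of w satisfies A*B = B*A = w * I.
A = sp.Matrix([[x - y]])
B = sp.Matrix([[sum(x**i * y**(n - i) for i in range(n + 1))]])

assert sp.simplify((A * B)[0, 0] - w) == 0
assert sp.simplify((B * A)[0, 0] - w) == 0
print("A*B =", sp.expand((A * B)[0, 0]))
```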

    Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

    The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks. Our approach includes both the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication. Comment: 40 pages, 4 figures
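
    A minimal sketch of a distributed dual averaging update on a ring network (an assumed toy problem, not the paper's code): each node mixes its neighbors' dual variables under a doubly stochastic matrix, adds a local subgradient, and takes a proximal step with ψ(x) = x²/2 and step size 1/√t:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                     # nodes on a ring network
a = rng.uniform(-1.0, 1.0, size=n)        # node i privately holds f_i(x) = |x - a_i|

# Doubly stochastic mixing matrix for a cycle: 1/2 self-weight, 1/4 per neighbor.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

z = np.zeros(n)        # dual variables: locally mixed sums of subgradients
x = np.zeros(n)        # primal iterates, one scalar per node
x_avg = np.zeros(n)    # running averages, which carry the convergence guarantee
for t in range(1, 2001):
    g = np.sign(x - a)           # a subgradient of |x - a_i| at x_i
    z = P @ z + g                # mix neighbors' dual variables, add local subgradient
    x = -z / np.sqrt(t)          # proximal step with psi(x) = x^2/2, alpha_t = 1/sqrt(t)
    x_avg += (x - x_avg) / t

print("node estimates:", np.round(x_avg, 3))
print("a global minimizer (median of the a_i):", np.median(a))
```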