Sparse Matrix Factorization
We investigate the problem of factorizing a matrix into several sparse
matrices and propose an algorithm for this under randomness and sparsity
assumptions. This problem can be viewed as a simplification of the deep
learning problem where finding a factorization corresponds to finding edges in
different layers and values of hidden units. We prove that under certain
assumptions for a sparse linear deep network with $n$ nodes in each layer, our
algorithm recovers the structure of the network and the values of the top-layer
hidden units for depths up to . We further discuss the relation among sparse
matrix factorization, deep learning, sparse recovery, and dictionary learning.
Comment: 20 pages
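The factorization problem itself can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's recovery algorithm: it generates a product of two random sparse factors and then runs a generic alternating least-squares heuristic with soft-thresholding to find sparse factors; all names, densities, and the penalty `lam` are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, density = 30, 30, 0.1

# Ground-truth sparse factors (toy generative model, not the paper's).
A = rng.normal(size=(n, k)) * (rng.random((n, k)) < density)
B = rng.normal(size=(k, n)) * (rng.random((k, n)) < density)
X = A @ B  # observed matrix to be factorized into sparse pieces

def soft_threshold(M, t):
    """Elementwise soft-thresholding; shrinks small entries to zero."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

# Generic alternating-minimization heuristic: solve for one factor by
# least squares, sparsify it, and repeat for the other factor.
Ah = rng.normal(size=(n, k))
Bh = rng.normal(size=(k, n))
lam = 0.01
for _ in range(200):
    Ah = soft_threshold(X @ np.linalg.pinv(Bh), lam)
    Bh = soft_threshold(np.linalg.pinv(Ah) @ X, lam)

err = np.linalg.norm(X - Ah @ Bh) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

This heuristic carries none of the paper's guarantees; it only makes concrete what "factorizing a matrix into several sparse matrices" asks for, and how sparsity can be promoted during fitting.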
On Symmetric and Asymmetric LSHs for Inner Product Search
We consider the problem of designing locality sensitive hashes (LSH) for
inner product similarity, and of the power of asymmetric hashes in this
context. Shrivastava and Li argue that there is no symmetric LSH for the
problem and propose an asymmetric LSH based on different mappings for query and
database points. However, we show that there does exist a simple symmetric LSH
that enjoys stronger guarantees and better empirical performance than the
asymmetric LSH they suggest. We also show a variant of the setting where
asymmetry is in fact needed, but where a different asymmetric LSH is required.
Comment: 11 pages, 3 figures, In Proceedings of The 32nd International
Conference on Machine Learning (ICML)
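A symmetric construction along these lines can be sketched as follows. This is an illustrative sketch of the augmentation idea (sometimes referred to as SIMPLE-LSH), under the assumptions that database points are scaled to norm at most 1 and queries are unit-norm; the dimensions, code length, and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(x):
    """Append sqrt(1 - ||x||^2), turning inner products into cosine
    similarities; requires ||x|| <= 1."""
    return np.append(x, np.sqrt(max(0.0, 1.0 - float(x @ x))))

def simhash(v, planes):
    """Sign random projection (SimHash) bits from shared hyperplanes."""
    return (planes @ v > 0).astype(np.uint8)

d, n, bits = 8, 500, 32
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1).max()     # scale database to norm <= 1
q = rng.normal(size=d)
q /= np.linalg.norm(q)                   # unit-norm query

planes = rng.normal(size=(bits, d + 1))  # one shared hash for both sides
codes = np.stack([simhash(augment(x), planes) for x in X])
qcode = simhash(augment(q), planes)      # same map: appended coord is 0

# Rank database points by Hamming distance to the query's code.
ham = (codes != qcode).sum(axis=1)
best = int(np.argmin(ham))
print("inner product of hashed nearest:", float(X[best] @ q))
print("true maximum inner product:     ", float((X @ q).max()))
```

Note the symmetry: the identical map `augment` is applied to queries and database points; because a unit-norm query gets a zero appended, no separate query transformation is needed.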