
    Analysis of SparseHash: an efficient embedding of set-similarity via sparse projections

    Embeddings provide compact representations of signals in order to perform efficient inference in a wide variety of tasks. In particular, random projections are common tools to construct Euclidean distance-preserving embeddings, while hashing techniques are extensively used to embed set-similarity metrics, such as the Jaccard coefficient. In this letter, we theoretically prove that a class of random projections based on sparse matrices, called SparseHash, can preserve the Jaccard coefficient between the supports of sparse signals, which can be used to estimate set similarities. Moreover, besides the analysis, we provide an efficient implementation and test its performance in several numerical experiments, on both synthetic and real datasets.
    Comment: 25 pages, 6 figures
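    To make the idea concrete, the following is a minimal illustrative sketch of how a sparse random projection can be used to estimate the Jaccard coefficient between supports: a measurement is exactly zero whenever the corresponding matrix row misses the signal's support, so the zero-patterns of the projections act as random sketches of the supports. The matrix construction, the zero-pattern estimator, the function names, and all parameter values below are assumptions made for illustration, not necessarily the exact SparseHash construction analyzed in the paper.

    import numpy as np

    def sparse_projection_matrix(m, n, gamma, rng):
        """m x n random matrix whose entries are nonzero with probability gamma
        (a gamma-sparsified Gaussian matrix; illustrative construction)."""
        mask = rng.random((m, n)) < gamma
        return mask * rng.standard_normal((m, n))

    def estimate_jaccard(y1, y2, gamma):
        """Estimate the Jaccard coefficient between supp(x1) and supp(x2) from the
        zero-patterns of their projections y1 = A @ x1 and y2 = A @ x2."""
        z1 = np.isclose(y1, 0.0)   # row i missed supp(x1)
        z2 = np.isclose(y2, 0.0)   # row i missed supp(x2)
        eps = 1e-12
        # P(miss) = (1 - gamma)**|S|, so |S| ~= log(miss rate) / log(1 - gamma)
        k1 = np.log(max(z1.mean(), eps)) / np.log(1.0 - gamma)              # ~ |supp(x1)|
        k2 = np.log(max(z2.mean(), eps)) / np.log(1.0 - gamma)              # ~ |supp(x2)|
        k_union = np.log(max((z1 & z2).mean(), eps)) / np.log(1.0 - gamma)  # ~ |S1 U S2|
        k_inter = max(k1 + k2 - k_union, 0.0)                               # inclusion-exclusion
        return k_inter / k_union if k_union > 0 else 0.0

    rng = np.random.default_rng(0)
    n, m, gamma = 10_000, 2_000, 0.01
    x1, x2 = np.zeros(n), np.zeros(n)
    x1[:100] = rng.standard_normal(100)    # supports {0..99} and {40..139}
    x2[40:140] = rng.standard_normal(100)  # true Jaccard = 60 / 140 ~= 0.43
    A = sparse_projection_matrix(m, n, gamma, rng)
    print(estimate_jaccard(A @ x1, A @ x2, gamma))

    The choice of gamma trades off how many supports a row can touch against how informative each zero-pattern bit is; the values above are only meant to make the toy example run.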

    Sparsity estimation from compressive projections via sparse random matrices

    The aim of this paper is to develop strategies to estimate the sparsity degree of a signal from compressive projections, without the burden of recovery. We consider both the noise-free and the noisy settings, and we show how to extend the proposed framework to the case of non-exactly sparse signals. The proposed method employs γ-sparsified random matrices and is based on a maximum likelihood (ML) approach, exploiting the property that the acquired measurements are distributed according to a mixture model whose parameters depend on the signal sparsity. In the presence of noise, given the complexity of ML estimation, the probability model is approximated with a two-component Gaussian mixture (2-GMM), which can be easily learned via expectation-maximization. Besides the design of the method, this paper makes two novel contributions. First, in the absence of noise, sufficient conditions on the number of measurements are provided for almost sure exact estimation in different regimes of behavior, defined by the scaling of the measurement sparsity γ and the signal sparsity. In the presence of noise, our second contribution is to prove that the 2-GMM approximation is accurate in the large system limit for a proper choice of the parameter γ. Simulations validate our predictions and show that the proposed algorithms outperform the state-of-the-art methods for sparsity estimation. Finally, the estimation strategy is applied to non-exactly sparse signals. The results are very encouraging, suggesting further extension to more general frameworks.
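    The sketch below illustrates the two regimes described above under assumed parameter choices: in the noise-free case, the fraction of exactly-zero measurements of a γ-sparsified projection is (1 - γ)^k, which is inverted to estimate the sparsity k; in the noisy case, a zero-mean two-component Gaussian mixture is fitted with a hand-written EM loop and the weight of the noise-only component is inverted in the same way. Function names, initialization, and parameter values are hypothetical, and the exact ML and EM formulations in the paper may differ.

    import numpy as np

    def gamma_sparse_matrix(m, n, gamma, rng):
        """m x n measurement matrix with entries nonzero with probability gamma."""
        mask = rng.random((m, n)) < gamma
        return mask * rng.standard_normal((m, n))

    def sparsity_ml_noise_free(y, gamma):
        """Noise-free estimate: a measurement is exactly zero iff its row misses the
        support, which happens with probability (1 - gamma)**k; invert that."""
        p_zero = max(np.mean(y == 0.0), 1e-12)
        return np.log(p_zero) / np.log(1.0 - gamma)

    def sparsity_em_noisy(y, gamma, n_iter=200):
        """Noisy case: model the measurements as a zero-mean two-component Gaussian
        mixture (noise-only vs. signal-plus-noise), learn it by EM, and invert the
        weight of the noise-only component, w0 ~= (1 - gamma)**k."""
        v0, v1 = 0.1 * np.var(y), np.var(y)    # crude initial variances
        w0 = 0.5                               # initial weight of the noise-only component
        for _ in range(n_iter):
            # E-step: responsibilities of the low-variance component
            g0 = w0 * np.exp(-y**2 / (2.0 * v0)) / np.sqrt(v0)
            g1 = (1.0 - w0) * np.exp(-y**2 / (2.0 * v1)) / np.sqrt(v1)
            r0 = g0 / (g0 + g1 + 1e-300)
            # M-step: re-estimate weight and variances
            w0 = r0.mean()
            v0 = np.sum(r0 * y**2) / max(np.sum(r0), 1e-12)
            v1 = np.sum((1.0 - r0) * y**2) / max(np.sum(1.0 - r0), 1e-12)
        return np.log(max(w0, 1e-12)) / np.log(1.0 - gamma)

    rng = np.random.default_rng(1)
    n, m, gamma, k = 10_000, 4_000, 0.005, 200
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    A = gamma_sparse_matrix(m, n, gamma, rng)
    print(sparsity_ml_noise_free(A @ x, gamma))                              # ~ 200
    print(sparsity_em_noisy(A @ x + 0.01 * rng.standard_normal(m), gamma))   # ~ 200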