A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.
Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes,
see http://ee.sharif.ir/~SLzero. File replaced because Fig. 5 was erroneously
missing.
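As a rough illustration of the smoothed-L0 idea, the sketch below approximates the L0 norm with a Gaussian kernel, alternates small gradient steps with projection onto the constraint set As = x, and gradually shrinks the smoothing parameter sigma. The parameter values (`mu`, the decay factor, the inner iteration count) are illustrative assumptions based on the abstract, not the authors' published settings:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, inner_iters=3):
    """Sketch of smoothed-L0 minimization for underdetermined A s = x."""
    # Pseudoinverse used to project iterates onto the set {s : A s = x}
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                      # minimum-L2-norm starting point
    sigma = 2.0 * np.max(np.abs(s))     # start with a very smooth objective
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient step on the Gaussian-smoothed L0 surrogate
            # (the sigma^2 factor is absorbed into the step size mu)
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * delta
            # project back onto the feasible set A s = x
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decay            # sharpen the L0 approximation
    return s
```

The outer loop over a decreasing sigma is what lets the method avoid the shallow local minima of a direct L0 objective while remaining far cheaper than an interior-point LP solve.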
Learning Overcomplete Dictionaries Based on Atom-by-Atom Updating
A dictionary learning algorithm learns a set of atoms from training signals in such a way that each signal can be approximated as a linear combination of only a few atoms. Most dictionary learning algorithms use a two-stage iterative procedure. The first stage sparsely approximates the training signals over the current dictionary. The second stage updates the dictionary. In this paper we develop atom-by-atom dictionary learning algorithms, which update the atoms sequentially. Specifically, we propose an efficient alternative to the well-known K-SVD algorithm, and show by various experiments that the proposed algorithm is much faster than K-SVD while producing better results. Moreover, we propose a novel algorithm that, instead of alternating between the two dictionary learning stages, performs only the second stage. While in K-SVD each atom is updated along with the nonzero entries of its associated row vector in the coefficient matrix (which we call its profile), in the new algorithm each atom is updated along with all the entries of its profile. As a result, contrary to K-SVD, the support of each profile can change while the dictionary is updated. To further accelerate the convergence of this algorithm and to control the cardinality of the representations, we then propose its two-stage counterpart by adding the sparse approximation stage. Experimental results on recovery of a known synthetic dictionary and on dictionary learning for a class of auto-regressive signals demonstrate the promising performance of the proposed algorithms.
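A minimal sketch of a sequential, atom-by-atom dictionary update stage in the spirit described above: each atom is refit by least squares to the residual of the signals that use it, with the coefficient supports held fixed. The helper name `update_atoms` and all details are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def update_atoms(D, X, Y):
    """Update each atom (column of D) sequentially, given signals Y and
    the current sparse coefficient matrix X (supports kept fixed)."""
    for j in range(D.shape[1]):
        support = np.nonzero(X[j, :])[0]   # signals that actually use atom j
        if support.size == 0:
            continue                       # unused atom: leave unchanged
        xj = X[j, support]                 # atom j's profile (nonzero part)
        # Residual of those signals with atom j's contribution removed
        E = Y[:, support] - D @ X[:, support] + np.outer(D[:, j], xj)
        # Least-squares fit of the atom to that residual, then renormalize
        d = E @ xj
        D[:, j] = d / np.linalg.norm(d)
    return D
```

Because the loop modifies D in place, each atom's update already sees the updated versions of the atoms before it, which is what "sequential" means here.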
Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization
We propose a new computationally efficient privacy-preserving identification
framework based on layered sparse coding. The key idea of the proposed
framework is a sparsifying transform learning with ambiguization, which
consists of a trained linear map, a component-wise nonlinearity and a privacy
amplification. We introduce a practical identification framework, which
consists of two phases: public and private identification. The public untrusted
server provides the fast search service based on the sparse privacy protected
codebook stored at its side. The private trusted server or the local client
application performs the refined accurate similarity search using the results
of the public search and the layered sparse codebooks stored at its side. The
private search is performed in the decoded domain, and its accuracy is chosen
based on the authorization level of the client. The efficiency of the proposed
method lies in the computational complexity of encoding, decoding, "encryption"
(ambiguization) and "decryption" (purification), as well as in the storage
complexity of the codebooks.
Comment: EUSIPCO 201
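The layered sparse coding pipeline described above can be loosely sketched as a learned linear map followed by a component-wise ternarizing nonlinearity, plus an ambiguization step that hides the true support of the public code. Everything here (function names, top-k ternarization, the noise model) is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

def sparsify(W, y, k):
    """Sparsifying transform: linear map W, then keep the k
    largest-magnitude projections, ternarized to {-1, 0, +1}."""
    z = W @ y
    t = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]   # indices of the k strongest components
    t[idx] = np.sign(z[idx])
    return t

def ambiguize(t, n_noise, rng):
    """Privacy amplification sketch: flip some zero components to random
    signs so the public codebook leaks less about the true support."""
    a = t.copy()
    zeros = np.flatnonzero(a == 0)
    pick = rng.choice(zeros, size=min(n_noise, zeros.size), replace=False)
    a[pick] = rng.choice([-1.0, 1.0], size=pick.size)
    return a
```

In such a scheme the public server would match queries against the ambiguized codes only; an authorized party who knows which components are noise can purify the code and run the refined private search.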