184 research outputs found

    Learning overcomplete dictionaries based on parallel atom-updating

    No full text
    In this paper we propose a fast and efficient algorithm for learning overcomplete dictionaries. The proposed algorithm is an alternative to the well-known K-Singular Value Decomposition (K-SVD) algorithm. The main drawback of K-SVD is its high computational load, especially in high-dimensional problems, because in the dictionary update stage an SVD is performed to update each column of the dictionary. Our proposed algorithm avoids performing SVD and instead uses a special form of alternating minimization. As our simulations on both synthetic and real data show, our algorithm outperforms K-SVD in both computational load and the quality of the results.
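
    The abstract above hinges on replacing the per-atom SVD with alternating minimization. Below is a minimal sketch of that general idea in a K-SVD-style setup with dictionary D, sparse coefficients X, and training signals Y; the function name, the number of inner iterations, and the restriction of each profile to its current support are illustrative assumptions, not the authors' exact parallel atom-updating rule.

        import numpy as np

        def svd_free_atom_update(D, X, Y, n_inner=3):
            # D: (m, K) dictionary with unit-norm columns (atoms)
            # X: (K, N) sparse coefficients from the sparse-coding stage
            # Y: (m, N) training signals
            for k in range(D.shape[1]):
                idx = np.flatnonzero(X[k, :])          # signals that currently use atom k
                if idx.size == 0:
                    continue
                # residual restricted to those signals, with atom k taken out
                Ek = Y[:, idx] - D @ X[:, idx] + np.outer(D[:, k], X[k, idx])
                d = D[:, k]
                for _ in range(n_inner):               # alternating minimization replaces the SVD
                    x = d @ Ek                         # best profile for the current atom
                    d = Ek @ x                         # best atom for the current profile
                    d /= np.linalg.norm(d) + 1e-12     # keep the atom unit-norm
                D[:, k] = d
                X[k, idx] = d @ Ek                     # final profile on the same support
            return D, X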

    Learning Overcomplete Dictionaries Based on Atom-by-Atom Updating

    No full text
    A dictionary learning algorithm learns a set of atoms from training signals in such a way that each signal can be approximated as a linear combination of only a few atoms. Most dictionary learning algorithms use a two-stage iterative procedure: the first stage sparsely approximates the training signals over the current dictionary, and the second stage updates the dictionary. In this paper we develop atom-by-atom dictionary learning algorithms, which update the atoms sequentially. Specifically, we propose an efficient alternative to the well-known K-SVD algorithm and show by various experiments that the proposed algorithm is much faster than K-SVD while its results are better. Moreover, we propose a novel algorithm that, instead of alternating between the two dictionary learning stages, performs only the second stage. While in K-SVD each atom is updated along with only the nonzero entries of its associated row vector in the coefficient matrix (which we call its profile), in the new algorithm each atom is updated along with all entries of its profile. As a result, contrary to K-SVD, the support of each profile can change while updating the dictionary. To further accelerate the convergence of this algorithm and to control the cardinality of the representations, we then propose its two-stage counterpart by adding the sparse approximation stage. Experimental results on recovery of a known synthetic dictionary and dictionary learning for a class of auto-regressive signals demonstrate the promising performance of the proposed algorithms.
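
    As a rough illustration of the whole-profile idea described above (each atom updated together with the full row of its coefficients, so the support may change), the following sketch applies a simple alternating least-squares step per atom. The function name, the single-pass update, and the absence of any thresholding are assumptions for illustration, not the paper's algorithm.

        import numpy as np

        def whole_profile_atom_update(D, X, Y, n_sweeps=1):
            # D: (m, K) dictionary, X: (K, N) coefficients, Y: (m, N) training signals
            # Each atom is updated together with its entire profile (full row of X),
            # so the support of the profile is allowed to change.
            for _ in range(n_sweeps):
                E = Y - D @ X                              # current representation error
                for k in range(D.shape[1]):
                    Ek = E + np.outer(D[:, k], X[k, :])    # error with atom k removed
                    d = D[:, k]
                    x = d @ Ek                             # whole profile for the current atom
                    d = Ek @ x                             # atom for that profile
                    d /= np.linalg.norm(d) + 1e-12         # keep the atom unit-norm
                    x = d @ Ek                             # refresh profile with the new atom
                    D[:, k] = d
                    X[k, :] = x
                    E = Ek - np.outer(d, x)                # update error for the next atom
            return D, X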

    Generalization of the K-SVD algorithm for minimization of β-divergence

    Full text link
    In this paper, we propose, describe, and test a modification of the K-SVD algorithm. Given a set of training data, the proposed algorithm computes an overcomplete dictionary by minimizing the β-divergence between the data and its representation as linear combinations of atoms of the dictionary, under strict sparsity restrictions. For the special case β = 2, the proposed algorithm minimizes the Frobenius norm and is therefore equivalent to the original K-SVD algorithm. We describe the modifications needed and discuss the possible shortcomings of the new algorithm. The algorithm is tested with random matrices and with an example based on speech separation. This work has been partially supported by the EU together with the Spanish Government through TEC2015-67387-C4-1-R (MINECO/FEDER) and by Programa de FPU del Ministerio de Educacion, Cultura y Deporte FPU13/03828 (Spain). García Mollá, VM.; San Juan-Sebastian, P.; Virtanen, T.; Vidal Maciá, AM.; Alonso-Jordá, P. (2019). Generalization of the K-SVD algorithm for minimization of β-divergence. Digital Signal Processing, 92, 47-53. https://doi.org/10.1016/j.dsp.2019.05.001
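
    For reference, the standard scalar β-divergence (a textbook definition, not quoted from the paper) between y and x is

        d_\beta(y \,|\, x) = \frac{1}{\beta(\beta - 1)} \left( y^{\beta} + (\beta - 1)\,x^{\beta} - \beta\, y\, x^{\beta - 1} \right), \qquad \beta \in \mathbb{R} \setminus \{0, 1\},

    with the limiting cases d_1(y \,|\, x) = y \log(y/x) - y + x (generalized Kullback-Leibler) and d_0(y \,|\, x) = y/x - \log(y/x) - 1 (Itakura-Saito). For β = 2 it reduces to \tfrac{1}{2}(y - x)^2, so summing over all matrix entries recovers the squared Frobenius-norm criterion of the original K-SVD, consistent with the equivalence claimed in the abstract.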

    A fast patch-dictionary method for whole image recovery

    Full text link
    Various algorithms have been proposed for dictionary learning. Among those for image processing, many use image patches to form dictionaries. This paper focuses on whole-image recovery from corrupted linear measurements. We address the open issue of representing an image by overlapping patches: the overlapping leads to an excessive number of dictionary coefficients to determine. With very few exceptions, this issue has limited the applications of image-patch methods to local tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution, and image deblurring, for which one can process a few patches at a time. Our focus is global imaging tasks such as compressive sensing and medical image recovery, where the whole image is encoded together, making it either impossible or very ineffective to update a few patches at a time. Our strategy is to divide the sparse recovery into multiple subproblems, each of which handles a subset of non-overlapping patches, and then the results of the subproblems are averaged to yield the final recovery. This simple strategy is surprisingly effective in terms of both quality and speed. In addition, we accelerate computation of the learned dictionary by applying a recent block proximal-gradient method, which not only has a lower per-iteration complexity but also takes fewer iterations to converge, compared to the current state-of-the-art. We also establish that our algorithm globally converges to a stationary point. Numerical results on synthetic data demonstrate that our algorithm can recover a more faithful dictionary than two state-of-the-art methods. Combining our whole-image recovery and dictionary-learning methods, we numerically simulate image inpainting, compressive sensing recovery, and deblurring. Our recovery is more faithful than those of a total variation method and a method based on overlapping patches.
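
    A toy sketch of the averaging strategy described above: the image is rebuilt from several shifted, non-overlapping patch tilings, and the per-tiling reconstructions are averaged. All names, the data layout of the sparse codes, and the dictionary shape are illustrative assumptions; the per-subproblem sparse coding itself is not shown.

        import numpy as np

        def reconstruct_from_tilings(D, codes_per_tiling, offsets, image_shape, p):
            # D                : (p*p, K) patch dictionary
            # codes_per_tiling : list of dicts {(i, j): sparse code of the patch at (i, j)},
            #                    one dict per non-overlapping tiling (hypothetical layout)
            # offsets          : list of (di, dj) shifts, one per tiling
            acc = np.zeros(image_shape)
            cnt = np.zeros(image_shape)
            for codes, (di, dj) in zip(codes_per_tiling, offsets):
                for (i, j), a in codes.items():
                    patch = (D @ a).reshape(p, p)                      # synthesize one patch
                    acc[di + i:di + i + p, dj + j:dj + j + p] += patch
                    cnt[di + i:di + i + p, dj + j:dj + j + p] += 1
            return acc / np.maximum(cnt, 1)                            # average over tilings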