52 research outputs found

    Block Orthonormal Overcomplete Dictionary Learning

    In the field of sparse representations, the overcomplete dictionary learning problem is of crucial importance and has a growing pool of applications. In this paper we present an iterative dictionary learning algorithm, based on the singular value decomposition, that efficiently constructs unions of orthonormal bases. The key innovations, which significantly reduce the running time of the learning procedure, lie in the way the sparse representations are computed: each data item is reconstructed in a single orthonormal basis, avoiding slow sparse approximation algorithms; the bases in the union are used and updated individually; and the union itself is expanded by examining the worst reconstructed data items. Numerical experiments show conclusively the speedup induced by our method compared to previous works, for the same target representation error.
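
    As a hedged illustration of the stated innovation, the sketch below shows why coding in a single orthonormal basis is fast: the best k-term approximation is obtained by hard-thresholding the analysis coefficients, with no iterative pursuit. The function name and toy data are illustrative assumptions, not taken from the paper.

        import numpy as np

        def sparse_code_orthonormal(x, Q, k):
            """Best k-term approximation of x in the orthonormal basis Q.

            Because Q is orthonormal, the optimal k-sparse coefficient vector is
            obtained by computing all coefficients Q.T @ x and keeping the k of
            largest magnitude -- no iterative pursuit (e.g. OMP) is needed.
            """
            coeffs = Q.T @ x                        # analysis coefficients
            keep = np.argsort(np.abs(coeffs))[-k:]  # indices of the k largest
            gamma = np.zeros_like(coeffs)
            gamma[keep] = coeffs[keep]
            return gamma                            # reconstruction is Q @ gamma

        # toy usage: random orthonormal basis from a QR factorization
        rng = np.random.default_rng(0)
        Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
        x = rng.standard_normal(64)
        gamma = sparse_code_orthonormal(x, Q, k=8)
        print("representation error:", np.linalg.norm(x - Q @ gamma))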

    Shift & 2D Rotation Invariant Sparse Coding for Multivariate Signals

    Classical dictionary learning algorithms (DLA) process only unicomponent signals. Because we are interested in two-dimensional (2D) motion signals, we want to combine the two components to provide rotation invariance, so multicomponent frameworks are examined here. In contrast to the well-known multichannel framework, a multivariate framework is first introduced as a tool to solve our problem easily and to preserve the data structure. Within this multivariate framework, we then present sparse coding methods: multivariate orthogonal matching pursuit (M-OMP), which provides sparse approximation for multivariate signals, and multivariate DLA (M-DLA), which empirically learns the characteristic patterns (or features) associated with a multivariate signal set and combines shift invariance and online learning. Once the multivariate dictionary is learned, any signal from the considered set can be approximated sparsely. This multivariate framework is introduced so as to present the 2D rotation invariant (2DRI) case simply. When studying 2D motions acquired as bivariate real signals, we want the decompositions to be independent of the orientation in which the movement is executed in the 2D space. The methods are thus specialized to the 2DRI case, yielding 2DRI-OMP and 2DRI-DLA, which are robust to any rotation. The shift and rotation invariant cases yield a compact learned dictionary and provide robust decompositions. As validation, our methods are applied to 2D handwritten data to extract the elementary features of this signal set and to provide a rotation invariant decomposition.
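
    The following minimal sketch (names and data are illustrative assumptions, not the paper's 2DRI-OMP or 2DRI-DLA) shows the basic operation behind 2D rotation invariance: a bivariate atom, one x/y pair per time sample, is rotated as a whole by a 2D rotation matrix.

        import numpy as np

        def rotate_bivariate_atom(atom, theta):
            """Rotate a bivariate (2-column) atom by angle theta in the 2D plane.

            atom has shape (T, 2): one x/y pair per time sample.  Rotation
            invariance in the 2DRI setting means an atom and any of its rotated
            versions should be matched by the same dictionary element.
            """
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])   # 2D rotation matrix
            return atom @ R.T

        # toy usage: a small circular stroke rotated by 90 degrees
        t = np.linspace(0, np.pi, 50)
        stroke = np.stack([np.cos(t), np.sin(t)], axis=1)
        rotated = rotate_bivariate_atom(stroke, np.pi / 2)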

    Simultaneous Codeword Optimization (SimCO) for Dictionary Update and Learning

    We consider the data-driven dictionary learning problem. The goal is to seek an overcomplete dictionary from which every training signal can be best approximated by a linear combination of only a few codewords. This task is often achieved by iteratively executing two operations: sparse coding and dictionary update. In the literature, there are two benchmark mechanisms to update a dictionary. The first approach, such as the MOD algorithm, is characterized by searching for the optimal codewords while fixing the sparse coefficients. In the second approach, represented by the K-SVD method, one codeword and the related sparse coefficients are simultaneously updated while all other codewords and coefficients remain unchanged. We propose a novel framework that generalizes the aforementioned two methods. The unique feature of our approach is that one can update an arbitrary set of codewords and the corresponding sparse coefficients simultaneously: when the sparse coefficients are fixed, the underlying optimization problem is similar to that in the MOD algorithm; when only one codeword is selected for update, it can be proved that the proposed algorithm is equivalent to the K-SVD method; and, more importantly, our method allows all codewords and all sparse coefficients to be updated simultaneously, hence the term simultaneous codeword optimization (SimCO). Under the proposed framework, we design two algorithms, namely primitive and regularized SimCO. We implement both algorithms based on a simple gradient descent mechanism. Simulations demonstrate the performance of the proposed algorithms compared with the two baseline algorithms, MOD and K-SVD. Results show that regularized SimCO is particularly appealing in terms of both learning performance and running speed.
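
    A minimal sketch of the "update an arbitrary subset of codewords" idea via gradient descent, assuming a column-normalized dictionary and a fixed sparsity pattern; this is only a plain gradient step on the Frobenius objective, not primitive or regularized SimCO themselves, and all names are illustrative.

        import numpy as np

        def simco_like_update(Y, D, X, idx, step=0.1):
            """One gradient step on a chosen set of codewords (columns idx of D).

            The sparsity pattern in X is kept fixed, the selected columns of D
            move along the negative gradient of ||Y - D X||_F^2, and are then
            re-normalized to unit length (dictionaries are conventionally
            column-normalized).
            """
            R = Y - D @ X                    # current residual
            grad = -2 * R @ X.T              # gradient of the objective w.r.t. D
            D = D.copy()
            D[:, idx] -= step * grad[:, idx]  # move only the chosen codewords
            D[:, idx] /= np.linalg.norm(D[:, idx], axis=0, keepdims=True)
            return D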

    Orthogonal procrustes analysis for dictionary learning in sparse linear representation

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in the effectiveness and applicability of the model across domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals so as to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, and the question is still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages: the former computes a sparse coding of the training set, and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms, suitably arranged into groups. Comparative experiments on synthetic data demonstrate the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
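
    The orthogonal Procrustes solution underlying such an update is standard: the orthogonal matrix minimizing ||A - R B||_F is R = U V^T, where U S V^T is the SVD of A B^T. A minimal sketch follows, with illustrative names and without the atom-grouping logic of R-SVD.

        import numpy as np

        def procrustes_rotation(A, B):
            """Orthogonal Procrustes: the orthogonal R minimizing ||A - R @ B||_F.

            Classical solution: if U S Vt = svd(A @ B.T), then R = U @ Vt.
            In an R-SVD-style update, A would be the data (or residual) explained
            by a group of atoms and B that group's current approximation, so the
            group is rotated as a whole rather than re-estimated atom by atom.
            """
            U, _, Vt = np.linalg.svd(A @ B.T)
            return U @ Vt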

    Single Image Super-Resolution through Sparse Representation via Coupled Dictionary learning

    Single Image Super-Resolution (SISR) through sparse representation has received much attention in the past decade due to significant developments in sparse coding algorithms. However, recovering high-frequency textures remains a major bottleneck for existing SISR algorithms. Considering this, dictionary learning approaches can be utilized to extract high-frequency textures and thereby improve SISR performance significantly. In this paper, we propose a SISR algorithm based on sparse representation in which Low Resolution (LR) and High Resolution (HR) dictionaries are learned simultaneously from the training set. Training coupled dictionaries preserves the correlation between HR and LR patches and thus enhances the super-resolved image. To demonstrate the effectiveness of the proposed algorithm, a visual comparison is made with popular SISR algorithms, and the results are also quantified through quality metrics. The experimental results show that the proposed algorithm outperforms existing SISR algorithms both qualitatively and quantitatively. Furthermore, its performance remains strong for a smaller training set, which entails lower computational complexity. The proposed approach is therefore superior in terms of visual comparisons and quality metrics, with noticeable results at reduced computational cost.
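
    One common way to couple LR and HR dictionaries is to learn them jointly over stacked patch vectors so that both share the same sparse codes. The sketch below illustrates that idea only; the paper's exact formulation may differ, and all names are illustrative assumptions.

        import numpy as np

        def joint_patch_matrix(P_lr, P_hr):
            """Stack LR and HR patch vectors to learn coupled dictionaries jointly.

            P_lr: (d_l, N) LR feature vectors, P_hr: (d_h, N) HR patch vectors.
            Normalizing each block by its dimension balances the two terms, so a
            single dictionary-learning run yields D = [D_lr; D_hr] sharing the
            same sparse codes, which is what couples the two resolutions.
            (This is one common coupling strategy, not necessarily the paper's.)
            """
            d_l, d_h = P_lr.shape[0], P_hr.shape[0]
            Y = np.vstack([P_lr / np.sqrt(d_l), P_hr / np.sqrt(d_h)])
            return Y  # feed Y to any dictionary-learning routine (e.g. K-SVD)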

    Assessment of sparse-based inpainting for retinal vessel removal

    Some important eye diseases, like macular degeneration or diabetic retinopathy, can induce changes visible on the retina, for example as lesions. Segmentation of lesions or extraction of textural features from fundus images are possible steps towards automatic detection of such diseases, which could facilitate screening as well as provide support for clinicians. For the task of detecting significant features, retinal blood vessels are considered interference in retinal images. If these blood vessel structures could be suppressed, it might lead to more accurate segmentation of retinal lesions as well as better extraction of textural features for pathology detection. This work proposes the use of sparse representations and dictionary learning techniques for retinal vessel inpainting. The performance of the algorithm is tested on greyscale and RGB images from the DRIVE and STARE public databases, employing different neighbourhoods and sparseness factors. Moreover, a comparison with the most common inpainting family, diffusion-based methods, is carried out. For this purpose, two different ways of assessing the quality of the inpainting are presented and used to evaluate the results of non-artificial inpainting, i.e. where a reference image does not exist. The results suggest that sparse-based inpainting performs very well for retinal blood vessel removal, which will be useful for the future detection and classification of eye diseases.
    Colomer, A.; Naranjo Ornedo, V.; Engan, K.; Skretting, K. (2017). Assessment of sparse-based inpainting for retinal vessel removal. Signal Processing: Image Communication, 59, 73-82. https://doi.org/10.1016/j.image.2017.03.018
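
    A minimal sketch of masked sparse coding for inpainting, assuming a learned dictionary D and a boolean mask marking the known (non-vessel) pixels of a vectorized patch; the greedy selection below is a simplified OMP-style loop and is not the exact pipeline of the paper.

        import numpy as np

        def inpaint_patch(patch, mask, D, k=4):
            """Fill missing pixels of a vectorized patch with a learned dictionary.

            mask is a boolean vector: True where the pixel is known (non-vessel).
            The sparse code is estimated from the known pixels only (greedy atom
            selection plus least squares on the chosen support), and the full
            reconstruction D @ gamma supplies the missing (vessel) pixels.
            """
            Dm, xm = D[mask], patch[mask]
            support, residual = [], xm.copy()
            for _ in range(k):                          # greedy OMP-style selection
                j = int(np.argmax(np.abs(Dm.T @ residual)))
                support.append(j)
                gamma_s, *_ = np.linalg.lstsq(Dm[:, support], xm, rcond=None)
                residual = xm - Dm[:, support] @ gamma_s
            recon = D[:, support] @ gamma_s
            out = patch.copy()
            out[~mask] = recon[~mask]                   # only replace vessel pixels
            return out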

    Sparsity Based Poisson Denoising with Dictionary Learning

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive i.i.d. Gaussian noise, for which many effective algorithms are available. However, in the low-SNR regime these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. A recent work by Salmon et al. took this route, proposing a patch-based exponential image representation model based on a GMM (Gaussian mixture model) and leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
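
    For reference, the high-SNR route mentioned above typically relies on a variance-stabilizing transformation such as the Anscombe transform, sketched below; this illustrates the alternative the paper argues against in the low-SNR regime, not the proposed scheme itself.

        import numpy as np

        def anscombe(x):
            """Variance-stabilizing transform: Poisson counts -> roughly unit-variance Gaussian."""
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse (biased at low counts, which is exactly the
            regime where such transforms become inaccurate)."""
            return (y / 2.0) ** 2 - 3.0 / 8.0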