27 research outputs found

    Solution of linear ill-posed problems using random dictionaries

    In the present paper we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. In the context of regression problems, there has been an enormous amount of effort to recover an unknown function using such dictionaries. One of the most popular methods, the lasso and its variants, is based on minimizing the empirical likelihood and, unfortunately, requires stringent assumptions on the dictionary, the so-called compatibility conditions. Although compatibility conditions are hard to satisfy in general, it is well known that they can be ensured by using random dictionaries. In the present paper, we show how random dictionaries can be applied to the solution of ill-posed linear inverse problems. We put a theoretical foundation under the suggested methodology and study its performance via simulations.
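    As a hedged illustration of this setup, the sketch below draws an i.i.d. Gaussian random dictionary (a standard construction that satisfies compatibility-type conditions with high probability) and recovers a sparse coefficient vector with the lasso. The sizes, noise level, and regularization weight are illustrative choices, not the paper's.

```python
# Minimal sketch: lasso recovery over a random (Gaussian) dictionary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                 # samples, dictionary size, active atoms

# i.i.d. Gaussian dictionary: known to satisfy compatibility-type
# conditions with high probability.
D = rng.standard_normal((n, p)) / np.sqrt(n)

theta = np.zeros(p)
theta[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = D @ theta + 0.01 * rng.standard_normal(n)   # noisy observations

fit = Lasso(alpha=0.005).fit(D, y)
print("recovered support:", np.flatnonzero(fit.coef_))
```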

    Collaborative Filtering via Group-Structured Dictionary Learning

    Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning to collaborative-filtering-based recommender systems. Our extensive numerical experiments demonstrate that the presented technique outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.
    Comment: A compressed version of the paper has been accepted for publication at the 10th International Conference on Latent Variable Analysis and Source Separation (LVA/ICA 2012).
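    A hedged sketch of the core computational step, group-structured sparse coding, is given below: an ISTA-style proximal gradient loop with a group soft-thresholding operator. The grouping, step size, and regularization weight are illustrative assumptions, not the paper's collaborative filtering setup.

```python
# Minimal sketch: group-structured sparse coding by proximal gradient.
import numpy as np

def group_soft_threshold(a, t):
    """Shrink the whole group a toward zero by t in Euclidean norm."""
    nrm = np.linalg.norm(a)
    return np.zeros_like(a) if nrm <= t else (1 - t / nrm) * a

def group_sparse_code(D, x, groups, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam * sum_g ||a_g||_2 over codes a."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L                 # gradient step
        for g in groups:                 # proximal step, group by group
            z[g] = group_soft_threshold(z[g], lam / L)
        a = z
    return a
```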

    Model selection in regression under structural constraints

    The paper considers model selection in regression under additional structural constraints on admissible models, where the number of potential predictors may be even larger than the available sample size. We develop a Bayesian formalism as a natural tool for generating a wide class of model selection criteria based on penalized least squares estimation with various complexity penalties associated with a prior on the model size. The resulting criteria are adaptive to the structural constraints. We establish an upper bound for the quadratic risk of the resulting MAP estimator and a corresponding lower bound for the minimax risk over a set of admissible models of a given size. We then specify the class of priors (and, therefore, the class of complexity penalties) for which, under a "nearly-orthogonal" design, the MAP estimator is asymptotically at least nearly minimax (up to a log-factor) simultaneously over the entire range of sparse and dense setups. Moreover, when the number of admissible models is "small" (e.g., ordered variable selection) or, at the opposite extreme, for complete variable selection, the proposed estimator achieves the exact minimax rates.
    Comment: arXiv admin note: text overlap with arXiv:0912.438
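    The sketch below illustrates the flavor of such a criterion: the selected model minimizes the residual sum of squares plus a complexity penalty driven by the model size k. The penalty form sigma^2 * k * (1 + log(p/k)) and the exhaustive search are illustrative stand-ins, not the paper's exact prior or algorithm.

```python
# Minimal sketch: MAP-style model selection by penalized least squares.
import itertools
import numpy as np

def map_select(X, y, sigma2, max_size=3):
    """Return the predictor subset minimizing RSS + complexity penalty."""
    n, p = X.shape
    best = (np.inf, ())
    for k in range(max_size + 1):
        # Complexity penalty induced by a prior on the model size k.
        pen = sigma2 * k * (1 + np.log(p / max(k, 1)))
        # Exhaustive search over admissible models (small p only).
        for S in itertools.combinations(range(p), k):
            if k == 0:
                rss = float(y @ y)
            else:
                Xs = X[:, list(S)]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                rss = float(np.sum((y - Xs @ beta) ** 2))
            if rss + pen < best[0]:
                best = (rss + pen, S)
    return best[1]
```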

    Compressed sensing performance bounds under Poisson noise

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical $\ell_2$--$\ell_1$ minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term that measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that, for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
    Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE Transactions on Signal Processing
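    The hedged sketch below illustrates both ingredients: a nonnegative, flux-preserving sensing matrix obtained by shifting and rescaling a Rademacher matrix, and a penalized objective combining the negative Poisson log-likelihood with an l1 penalty as a stand-in for the paper's sparsity penalty. The scaling and penalty weight are illustrative assumptions.

```python
# Minimal sketch: flux-preserving sensing matrix + penalized Poisson NLL.
import numpy as np

rng = np.random.default_rng(0)

def flux_preserving_matrix(m, n):
    """Nonnegative sensing matrix from shifted, rescaled Rademacher entries."""
    Z = rng.choice([-1.0, 1.0], size=(m, n))   # +/-1 with equal probability
    # Entries in {0, 1/m}; every column sums to at most 1, so no
    # measurement scheme can "create" photons (flux preservation).
    return (Z + 1.0) / (2.0 * m)

def penalized_poisson_nll(A, y, f, lam):
    """Negative Poisson log-likelihood plus an l1 sparsity penalty."""
    mu = A @ f                                 # expected photon counts
    nll = np.sum(mu - y * np.log(mu + 1e-12))  # Poisson NLL up to constants
    return nll + lam * np.sum(np.abs(f))

A = flux_preserving_matrix(64, 256)
f = np.abs(rng.standard_normal(256))           # nonnegative intensity vector
y = rng.poisson(A @ f)                         # Poisson-corrupted measurements
print(penalized_poisson_nll(A, y, f, lam=0.1))
```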

    Simultaneous Codeword Optimization (SimCO) for Dictionary Update and Learning

    We consider the data-driven dictionary learning problem. The goal is to seek an over-complete dictionary from which every training signal can be best approximated by a linear combination of only a few codewords. This task is often achieved by iteratively executing two operations: sparse coding and dictionary update. In the literature, there are two benchmark mechanisms for updating a dictionary. The first approach, exemplified by the MOD algorithm, searches for the optimal codewords while fixing the sparse coefficients. In the second approach, represented by the K-SVD method, one codeword and the related sparse coefficients are simultaneously updated while all other codewords and coefficients remain unchanged. We propose a novel framework that generalizes these two methods. The unique feature of our approach is that one can update an arbitrary set of codewords and the corresponding sparse coefficients simultaneously: when the sparse coefficients are fixed, the underlying optimization problem is similar to that in the MOD algorithm; when only one codeword is selected for update, the proposed algorithm can be proved equivalent to the K-SVD method; and, more importantly, our method allows all codewords and all sparse coefficients to be updated simultaneously, hence the term simultaneous codeword optimization (SimCO). Under the proposed framework, we design two algorithms, namely primitive and regularized SimCO, and implement both based on a simple gradient descent mechanism. Simulations demonstrate the performance of the proposed algorithms compared with the two baseline algorithms, MOD and K-SVD. Results show that regularized SimCO is particularly appealing in terms of both learning performance and running speed.
    Comment: 13 pages
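    In that spirit, the hedged sketch below performs one update in the style of primitive SimCO: for a fixed sparsity pattern, it refits the nonzero coefficients by least squares and then takes a gradient step on an arbitrary set of codewords. The step size and normalization details are illustrative assumptions.

```python
# Minimal sketch: one primitive-SimCO-style update of selected codewords.
import numpy as np

def simco_step(X, D, A, update_set, lr=0.1):
    """Gradient update of the codewords in `update_set` (columns of D)."""
    # Refit the nonzero coefficients of each signal by least squares,
    # keeping the sparsity pattern of A fixed.
    for i in range(X.shape[1]):
        S = np.flatnonzero(A[:, i])
        if S.size:
            A[S, i], *_ = np.linalg.lstsq(D[:, S], X[:, i], rcond=None)
    R = X - D @ A                        # residual
    G = -R @ A.T                         # gradient of 0.5*||X - D A||_F^2 in D
    D[:, update_set] -= lr * G[:, update_set]
    # Renormalize the updated codewords to unit Euclidean norm.
    D[:, update_set] /= np.linalg.norm(D[:, update_set], axis=0)
    return D, A
```

    Passing a single index as `update_set` mimics the K-SVD-style one-codeword update, while passing all column indices updates every codeword and coefficient simultaneously.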

    Efficient Compressive Sampling of Spatially Sparse Fields in Wireless Sensor Networks

    Wireless sensor networks (WSNs), i.e., networks of autonomous wireless sensing nodes spatially deployed over a geographical area, are often faced with the acquisition of spatially sparse fields. In this paper, we present a novel bandwidth- and energy-efficient CS scheme for the acquisition of spatially sparse fields in a WSN. The contribution of the paper is twofold. First, we introduce a sparse, structured CS matrix and analytically show that it allows accurate reconstruction of two-dimensional spatially sparse signals, such as those occurring in several surveillance applications. Second, we analytically evaluate the energy and bandwidth consumption of our CS scheme when it is applied to data acquisition in a WSN. Numerical results demonstrate that our CS scheme achieves significant energy and bandwidth savings with respect to state-of-the-art approaches when employed for sensing a spatially sparse field by means of a WSN.
    Comment: Submitted to EURASIP Journal on Advances in Signal Processing
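    The hedged sketch below mimics this acquisition pattern: a sparse sensing matrix with only d nonzero entries per row (so each measurement involves only a few node transmissions), a spatially sparse field, and OMP reconstruction at the sink. The matrix structure, sizes, and reconstruction algorithm are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch: compressive acquisition of a spatially sparse field
# with a sparse, structured sensing matrix.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, d, s = 256, 60, 8, 4    # nodes (16x16 grid), measurements, nodes/row, sparsity

# Sparse sensing matrix: d random +/-1 entries per row, zeros elsewhere,
# so each measurement aggregates readings from only d sensor nodes.
Phi = np.zeros((m, n))
for i in range(m):
    idx = rng.choice(n, d, replace=False)
    Phi[i, idx] = rng.choice([-1.0, 1.0], d)

field = np.zeros(n)            # spatially sparse field (few active sources)
field[rng.choice(n, s, replace=False)] = rng.uniform(1, 5, s)

y = Phi @ field                # measurements gathered at the sink
rec = OrthogonalMatchingPursuit(n_nonzero_coefs=s).fit(Phi, y).coef_
print("max reconstruction error:", np.max(np.abs(rec - field)))
```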

    Solution of linear ill-posed problems using overcomplete dictionaries

    In the present paper we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. Construction of an adaptive optimal solution for such problems usually relies either on a singular value decomposition or on representation of the solution via an orthonormal basis. The shortcoming of both approaches lies in the fact that, in many situations, neither the eigenbasis of the linear operator nor a standard orthonormal basis constitutes an appropriate collection of functions for a sparse representation of the unknown function. In the context of regression problems, there has been an enormous amount of effort to recover an unknown function using an overcomplete dictionary. One of the most popular methods, the Lasso, is based on minimizing the empirical likelihood and requires stringent assumptions on the dictionary, the so-called compatibility conditions. While these conditions may be satisfied for the original dictionary functions, they usually do not hold for their images due to the contraction imposed by the linear operator. In what follows, we bypass this difficulty by a novel approach based on inverting each of the dictionary functions and matching the resulting expansion to the true function, thus avoiding unrealistic assumptions on the dictionary and using the Lasso in a predictive setting. We examine both the white noise and the observational model formulations and also discuss how exact inverse images of the dictionary functions can be replaced by their approximate counterparts. Furthermore, we show how the suggested methodology can be extended to the problem of estimating a mixing density in a continuous mixture. For all the situations listed above, we provide oracle inequalities for the risk in a finite-sample setting. Simulation studies confirm the good computational properties of the Lasso-based technique.
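    A hedged numerical sketch of the inversion-and-matching idea follows: a discretized operator A, inverse images u_j = A^{-1} g_j of the dictionary functions, a Lasso fit of the data over the original dictionary, and the plug-in estimate of the unknown function. The operator, dictionary, and tuning constants are illustrative assumptions.

```python
# Minimal sketch: invert dictionary functions, match the expansion by Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 150                                  # grid size, dictionary size (p > n)

A = np.tril(np.ones((n, n))) / n                 # discretized integration operator
G = rng.standard_normal((n, p))                  # overcomplete dictionary (columns g_j)
U = np.linalg.solve(A, G)                        # inverse images u_j = A^{-1} g_j

theta = np.zeros(p)
theta[[3, 17, 42]] = [2.0, -1.5, 1.0]            # sparse true coefficients
u = U @ theta                                    # true solution u = sum_j theta_j u_j
y = A @ u + 0.01 * rng.standard_normal(n)        # noisy data f = A u + noise

fit = Lasso(alpha=0.001).fit(G, y)               # match A u = sum_j theta_j g_j
u_hat = U @ fit.coef_                            # plug the coefficients back in
print("relative error:", np.linalg.norm(u_hat - u) / np.linalg.norm(u))
```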

    A new fault diagnosis method using deep belief network and compressive sensing

    Compressive sensing provides a new approach to machinery monitoring that greatly reduces the data transmission burden: the compressed signal is subsequently used for fault diagnosis through feature extraction and fault classification. However, traditional fault diagnosis depends heavily on prior knowledge and requires signal reconstruction, which is very time-consuming. To address this problem, a deep belief network (DBN) is used here for fault detection directly on the compressed signal; this is the first time a DBN has been combined with compressive sensing. PCA analysis shows that the DBN successfully separates the different features. Tested on compressed gearbox signals, the DBN achieves 92.5 % accuracy for a 25 % compressed signal. We compare the DBN on both compressed and reconstructed signals, and find that the DBN using the compressed signal not only achieves better accuracy, but also costs less time when the compression ratio is below 0.35. Moreover, the results are compared with other classification methods.
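    A hedged sketch of the pipeline is shown below: random Gaussian compression of signal windows followed by classification directly on the compressed vectors, with no reconstruction step. An sklearn MLP is used as a lightweight stand-in for the paper's DBN, and all data, sizes, and the 25 % ratio are placeholders.

```python
# Minimal sketch: classify faults directly on compressed signal windows.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_signals, n_samples = 400, 1024
ratio = 0.25                                    # keep 25% of the samples
m = int(ratio * n_samples)

X = rng.standard_normal((n_signals, n_samples))  # placeholder signal windows
labels = rng.integers(0, 4, n_signals)           # placeholder fault classes

Phi = rng.standard_normal((m, n_samples)) / np.sqrt(m)   # measurement matrix
X_compressed = X @ Phi.T                         # y = Phi x for each window

# MLP as a stand-in for the DBN; trained on compressed data directly,
# so no time-consuming reconstruction is needed.
clf = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=300)
clf.fit(X_compressed, labels)
```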