
    Coherence-based Partial Exact Recovery Condition for OMP/OLS

    We address the exact recovery of the support of a k-sparse vector with Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a noiseless setting. We consider the scenario where OMP/OLS have selected good atoms during the first l iterations (l < k) and derive a new sufficient and worst-case necessary condition for their success in k steps. Our result is based on the coherence \mu of the dictionary and relaxes Tropp's well-known condition \mu < 1/(2k-1) to the case where OMP/OLS have partial knowledge of the support.
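    As a quick illustration of the classical condition this result relaxes, the sketch below computes the mutual coherence of a dictionary and checks Tropp's bound \mu < 1/(2k-1). The random Gaussian dictionary and the sparsity level are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal check of Tropp's coherence condition mu < 1/(2k - 1).
    # Dictionary and sparsity level are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 64, 128, 4                   # signal dimension, number of atoms, sparsity

    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)         # normalize atoms to unit norm

    G = np.abs(D.T @ D)                    # absolute inner products between atoms
    np.fill_diagonal(G, 0.0)               # ignore self-correlations
    mu = G.max()                           # mutual coherence of the dictionary

    print(f"mu = {mu:.4f}, bound 1/(2k-1) = {1 / (2 * k - 1):.4f}")
    print("exact k-sparse recovery guaranteed:", mu < 1 / (2 * k - 1))
    ```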

    A Compressed Sampling and Dictionary Learning Framework for WDM-Based Distributed Fiber Sensing

    We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model of the reflected sensor signal. Imperfect prior knowledge is accounted for through uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm equipped with a pre-processing routine that handles dictionary coherence. The support of the recovered sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated on simulations and on experimental data from a fiber sensor system with a common-core architecture.

    Comment: Accepted for publication in the Journal of the Optical Society of America A (© 2017 Optical Society of America).
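    The abstract's alternation between a sparse-coding step and a dictionary-parameter step can be sketched generically. The toy delay-grid dictionary, the plain OMP routine, and the Nelder-Mead parameter refinement below are illustrative assumptions; the paper's actual pre-processing for dictionary coherence is not reproduced here.

    ```python
    # Hedged sketch of alternating minimization over a parametric dictionary:
    # (1) sparse coding with OMP, (2) refinement of an uncertain global
    # parameter (a grid offset). All modeling choices here are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    def build_dictionary(delays, t):
        """Toy parametric dictionary: one unit-norm Gaussian pulse per delay."""
        D = np.exp(-0.5 * ((t[:, None] - delays[None, :]) / 0.05) ** 2)
        return D / np.linalg.norm(D, axis=0)

    def omp(D, y, k):
        """Plain OMP: greedily pick k atoms, re-fit coefficients by least squares."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x

    t = np.linspace(0, 1, 256)
    delays = np.linspace(0, 1, 64)          # nominal delay grid
    y = build_dictionary(np.array([0.31, 0.72]), t) @ np.array([1.0, 0.6])

    shift = 0.0                             # uncertain global parameter
    for _ in range(5):                      # alternating minimization
        x = omp(build_dictionary(delays + shift, t), y, k=2)      # sparse step
        obj = lambda s: np.linalg.norm(y - build_dictionary(delays + s, t) @ x)
        shift = minimize(obj, shift, method="Nelder-Mead").x[0]   # parameter step

    print("estimated reflection delays:", delays[np.nonzero(x)] + shift)
    ```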

    Exact Recovery Conditions for Sparse Representations with Partial Support Information

    We address the exact recovery of a k-sparse vector in the noiseless setting when partial information on the support is available. This partial information takes the form of either a subset of the true support or an approximate subset that may include wrong atoms as well. We derive a new sufficient and worst-case necessary (in some sense) condition for the success of several procedures based on lp-relaxation, Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS). Our result is based on the coherence \mu of the dictionary and relaxes the well-known condition \mu < 1/(2k-1) ensuring the recovery of any k-sparse vector in the non-informed setup. It reads \mu < 1/(2k-g+b-1) when the informed support is composed of g good atoms and b wrong atoms. We emphasize that our condition is complementary to some restricted-isometry-based conditions by showing that neither implies the other. Because this mutual-coherence condition is common to all procedures, we carry out a finer analysis based on the Null Space Property (NSP) and the Exact Recovery Condition (ERC). Connections are established regarding the characterization of lp-relaxation procedures and OMP in the informed setup. First, we emphasize that the truncated NSP enjoys an ordering property when p is decreased. Second, the partial ERC for OMP (ERC-OMP) implies the truncated NSP for the informed l1 problem, which in turn implies the truncated NSP for p < 1.

    Comment: arXiv admin note: substantial text overlap with arXiv:1211.728
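    A minimal sketch of the relaxed bound stated above: the function below evaluates 1/(2k-g+b-1) and compares it with the non-informed bound 1/(2k-1); the values of k, g and b are illustrative.

    ```python
    # Relaxed coherence bound from the abstract: mu < 1/(2k - g + b - 1),
    # where g good and b wrong atoms are already in the informed support.
    def coherence_bound(k, g=0, b=0):
        return 1.0 / (2 * k - g + b - 1)

    k = 10
    print("non-informed bound:", coherence_bound(k))            # 1/19
    print("g=4 good, b=0 wrong:", coherence_bound(k, g=4))      # 1/15, easier to satisfy
    print("g=4 good, b=2 wrong:", coherence_bound(k, g=4, b=2)) # 1/17, wrong atoms tighten it
    ```

    Good atoms loosen the requirement on \mu while wrong atoms tighten it again, matching the g and b terms in the bound.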

    Approximation errors of online sparsification criteria

    Many machine learning frameworks, such as resource-allocating networks, kernel-based methods, Gaussian processes, and radial-basis-function networks, require a sparsification scheme in order to address the online learning paradigm. For this purpose, several online sparsification criteria have been proposed to restrict the model definition to a subset of samples. The best-known criterion is the (linear) approximation criterion, which discards any sample that can be well represented by the already contributing samples, an operation with excessive computational complexity. Several computationally efficient sparsification criteria have been introduced in the literature, such as the distance, coherence, and Babel criteria. In this paper, we provide a framework that connects these sparsification criteria to the issue of approximating samples, by deriving theoretical bounds on the approximation errors. Moreover, we investigate the error of approximating any feature by proposing upper bounds on the approximation error for each of the aforementioned sparsification criteria. Two classes of features are described in detail: the empirical mean and the principal axes in kernel principal component analysis.

    Comment: 10 pages
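    As a concrete instance of one of these criteria, the sketch below implements the coherence criterion: an incoming sample joins the set of contributing samples only if its maximum kernel correlation with the current set stays below a threshold. The Gaussian kernel and the threshold mu0 are illustrative assumptions.

    ```python
    # Coherence criterion for online sparsification: admit a sample only if
    # max_i |kappa(sample, d_i)| <= mu0 over the current dictionary. With a
    # unit-norm kernel such as the Gaussian, the kernel value is the coherence.
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

    def coherence_admits(sample, dictionary, mu0=0.5):
        """True if the sample's coherence with every stored sample is <= mu0."""
        return all(abs(gaussian_kernel(sample, d)) <= mu0 for d in dictionary)

    rng = np.random.default_rng(1)
    dictionary = []
    for _ in range(200):                    # online stream of samples
        x = rng.uniform(-3, 3, size=2)
        if coherence_admits(x, dictionary):
            dictionary.append(x)

    print("retained", len(dictionary), "of 200 samples")
    ```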