
    The Convergence Guarantees of a Non-convex Approach for Sparse Recovery

    In sparse recovery, numerous studies suggest that non-convex penalties induce better sparsity than convex ones, yet the corresponding non-convex algorithms have so far lacked convergence guarantees from the initial solution to the global optimum. This paper provides performance guarantees for a non-convex approach to sparse recovery. Specifically, the concept of weak convexity is incorporated into a class of sparsity-inducing penalties to characterize their non-convexity. Borrowing the idea of the projected subgradient method, an algorithm is proposed to solve the resulting non-convex optimization problem. In addition, a uniform approximate projection is adopted in the projection step to make the algorithm computationally tractable for large-scale problems. A convergence analysis is provided for the noisy scenario: if the non-convexity of the penalty is below a threshold (inversely proportional to the distance between the initial solution and the sparse signal), the recovery error of the solution is linear in both the step size and the noise term. Numerical simulations test the performance of the proposed approach and verify the theoretical analysis. Comment: 33 pages, 7 figures
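
    As a rough illustration of the projected subgradient idea described in this abstract, the sketch below performs a subgradient step on a weakly convex sparsity penalty (the minimax concave penalty is used here purely as a stand-in) followed by an exact projection onto {x : Ax = y}. The penalty choice, parameter names, and the exact projection are assumptions; the paper analyzes a general penalty class and uses a uniform approximate projection for large-scale problems.

```python
import numpy as np

def weakly_convex_recovery(A, y, lam=0.1, gamma=5.0, step=0.01, iters=500):
    """Sketch of projected subgradient descent with a weakly convex penalty.
    Uses the MCP subgradient as an example penalty and an exact projection
    via the pseudoinverse (illustrative only, not the paper's construction)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                               # feasible starting point
    for _ in range(iters):
        # MCP subgradient: sign(x) * (lam - |x|/gamma) inside [-gamma*lam, gamma*lam], 0 outside
        g = np.sign(x) * np.maximum(lam - np.abs(x) / gamma, 0.0)
        x = x - step * g                         # penalty (subgradient) step
        x = x - A_pinv @ (A @ x - y)             # project back onto {x : A x = y}
    return x
```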

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice requires accounting for various kinds of perturbations. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Greedy pursuits with replacement comprise three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, i.e., the least squares solution with the locations of some of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions. Comment: 27 pages, 4 figures, 5 tables
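
    Of the three greedy pursuits with replacement named in this abstract, iterative hard thresholding has the simplest update; a minimal sketch is given below. The step size and normalization are my assumptions, and perturbation of the sensing matrix is not modeled here.

```python
import numpy as np

def iht(A, y, s, mu=1.0, iters=100):
    """Minimal iterative hard thresholding (IHT): a gradient step on the
    residual followed by keeping the s largest entries in magnitude.
    mu = 1 assumes a suitably normalized sensing matrix A."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)        # gradient step on the residual
        small = np.argsort(np.abs(x))[:-s]    # indices of all but the s largest entries
        x[small] = 0.0                        # hard thresholding (support replacement)
    return x
```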

    On the Performance Bound of Sparse Estimation with Sensing Matrix Perturbation

    This paper focuses on sparse estimation in the situation where both the sensing matrix and the measurement vector are corrupted by additive Gaussian noise. The performance bound of sparse estimation is analyzed and discussed in depth. Two types of lower bounds, the constrained Cramér-Rao bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It is shown that the situation with sensing matrix perturbation is more complex than the one with only measurement noise. For the CCRB, a closed-form expression is derived; it exhibits a gap between the maximal and nonmaximal support cases. It is also revealed that a gap lies between the CCRB and the MSE of the oracle pseudoinverse estimator, but this gap approaches zero asymptotically as the problem dimensions tend to infinity. For the tighter HCRB, despite the difficulty of obtaining a simple expression for a general sensing matrix, a closed-form expression in the unit sensing matrix case is derived for a qualitative study of the performance bound. It is shown that the gap between the maximal and nonmaximal cases is eliminated for the HCRB. Numerical simulations are performed to verify the theoretical results. Comment: 32 pages, 8 figures, 1 table
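
    The oracle pseudoinverse estimator that the bounds are compared against is easy to state; a small sketch with hypothetical variable names follows. In a simulation matching the paper's setting, both A and y would carry additive Gaussian perturbations before this estimator is applied.

```python
import numpy as np

def oracle_pinv(A, y, support):
    """Oracle pseudoinverse estimator: least squares restricted to the true
    support, assumed known a priori; its MSE serves as the benchmark that
    the CCRB/HCRB discussion refers to."""
    x_hat = np.zeros(A.shape[1])
    A_S = A[:, support]                       # sensing matrix columns on the support
    x_hat[support] = np.linalg.pinv(A_S) @ y  # LS estimate of the nonzero entries
    return x_hat
```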

    Local Measurement and Reconstruction for Noisy Graph Signals

    The emerging field of signal processing on graphs plays an increasingly important role in processing signals and information related to networks. Existing works have shown that, under certain conditions, a smooth graph signal can be uniquely reconstructed from its decimation, i.e., data associated with a subset of vertices. However, in some potential applications (e.g., sensor networks with clustering structure), the obtained data may be a combination of signals associated with several vertices rather than a decimation. In this paper, we propose a new concept of local measurement, which is a generalization of decimation. Using local measurements, a local-set-based method named iterative local measurement reconstruction (ILMR) is proposed to reconstruct bandlimited graph signals. It is proved that ILMR can reconstruct the original signal perfectly under certain conditions, and its performance in the presence of noise is analyzed theoretically. The optimal choice of local weights and a greedy algorithm for local set partition are given in the sense of minimizing the expected reconstruction error. Compared with decimation, the proposed local measurement sampling and reconstruction scheme is more robust in noisy scenarios. Comment: 24 pages, 6 figures, 2 tables, journal manuscript
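
    The following is a rough, hypothetical sketch of the local-measurement idea together with a POCS-style iterative reconstruction in the spirit of ILMR; the paper's actual iteration, its optimal weights, and the greedy local-set partition are not reproduced here, and the weight vectors and bandlimited basis below are assumptions.

```python
import numpy as np

def local_measurements(f, local_sets, weights):
    """One measurement per local set: a weighted combination of the signal
    values on that set (a generalization of decimation)."""
    return np.array([w @ f[S] for S, w in zip(local_sets, weights)])

def ilmr_sketch(meas, local_sets, weights, U_k, iters=50):
    """Illustrative reconstruction: back-project each measurement residual
    onto its local set, then project onto the bandlimited subspace spanned
    by the first k graph Laplacian eigenvectors U_k."""
    f = np.zeros(U_k.shape[0])
    for _ in range(iters):
        r = meas - local_measurements(f, local_sets, weights)
        for (S, w), rj in zip(zip(local_sets, weights), r):
            f[S] += w * rj                    # spread the residual over the local set
        f = U_k @ (U_k.T @ f)                 # enforce bandlimitedness
    return f
```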

    Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection

    A recursive algorithm named Zero-point Attracting Projection (ZAP) was recently proposed for sparse signal reconstruction. Compared with reference algorithms, ZAP demonstrates rather good performance in recovery precision and robustness. However, no theoretical analysis of the algorithm, not even a proof of its convergence, has been available. In this work, a rigorous proof of the convergence of ZAP is provided and the condition for convergence is established. Based on the theoretical analysis, it is further proved that ZAP is unbiased and can approach the sparse solution to arbitrary precision with a proper choice of step size. Furthermore, the case of inaccurate measurements in a noisy scenario is also discussed. It is proved that the recovery precision degrades linearly with the disturbance power, which is predictable but not preventable. The reconstruction deviation for p-compressible signals is also provided. Finally, numerical simulations are performed to verify the theoretical analysis. Comment: 29 pages, 6 figures
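
    For orientation, a ZAP-style recursion can be sketched as a zero-attracting (sparsity) gradient step followed by projection back onto the solution set. The specific zero-attracting term, step-size rule, and noisy-case handling analyzed in the paper are not reproduced; an l1-type term is assumed below purely for simplicity.

```python
import numpy as np

def zap_sketch(A, y, kappa=0.005, iters=2000):
    """Illustrative ZAP-style iteration: pull the iterate toward zero with an
    assumed l1 zero-attracting term, then project onto {x : Ax = y}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                            # minimum-norm feasible start
    for _ in range(iters):
        x = x - kappa * np.sign(x)            # zero-point attracting step
        x = x - A_pinv @ (A @ x - y)          # projection onto {x : A x = y}
    return x
```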