3 research outputs found

    Support Recovery for Sparse Signals with Unknown Non-stationary Modulation

    The problem of estimating a sparse signal from low-dimensional noisy observations arises in many applications, including super resolution, signal deconvolution, and radar imaging. In this paper, we consider a sparse signal model with non-stationary modulations, in which each dictionary atom contributing to the observations undergoes an unknown, distinct modulation. By applying the lifting technique, under the assumption that the modulating signals live in a common subspace, we recast this sparse recovery and non-stationary blind demodulation problem as the recovery of a column-wise sparse matrix from structured linear observations, and propose to solve it via block ℓ1-norm regularized quadratic minimization. Due to observation noise, the sparse signal and modulation process cannot be recovered exactly. Instead, we aim to recover the sparse support of the ground-truth signal and bound the recovery errors of the signal's non-zero components and the modulation process. In particular, we derive sufficient conditions on the sample complexity and regularization parameter for exact support recovery, and we bound the recovery error on the support. Numerical simulations verify and support our theoretical findings, and we demonstrate the effectiveness of our model in the application of single molecule imaging. Comment: 13 pages, 8 figures.

    Simultaneous Sparse Recovery and Blind Demodulation

    The task of finding a sparse signal decomposition in an overcomplete dictionary is made more complicated when the signal undergoes an unknown modulation (or convolution in the complementary Fourier domain). Such simultaneous sparse recovery and blind demodulation problems appear in many applications, including medical imaging, super resolution, and self-calibration. In this paper, we consider a more general sparse recovery and blind demodulation problem in which each atom comprising the signal undergoes a distinct modulation process. Under the assumption that the modulating waveforms live in a known common subspace, we employ the lifting technique and recast this problem as the recovery of a column-wise sparse matrix from structured linear measurements. In this framework, we accomplish sparse recovery and blind demodulation simultaneously by minimizing the induced atomic norm, which in this problem corresponds to block ℓ1-norm minimization. For perfect recovery in the noiseless case, we derive near-optimal sample complexity bounds for Gaussian and random Fourier overcomplete dictionaries. We also provide bounds on recovering the column-wise sparse matrix in the noisy case. Numerical simulations illustrate and support our theoretical results. Comment: 16 pages, 10 figures.

    Optimization and data-driven methods for signal processing

    By exploiting the intrinsic properties of the observed signal, many signal processing and machine learning problems can be solved effectively by transforming them into optimization problems, which constitutes the first part of the thesis. For these optimization methods, theoretical sample complexities for exact signal recovery and recovery error bounds under noisy observations can be derived. However, optimization methods are not efficient for high-dimensional signals or for observations with complex noise and non-stationary sensing processes. Thus, in the second part of the thesis, we apply data-driven methods based on deep learning to high-dimensional problems in order to examine their efficiency and their ability to handle the complex noise and complicated sensing processes found in real data. Finally, in the third part, we develop optimization-inspired data-driven methods for several inverse problems in signal processing and machine learning. Experiments show that the proposed optimization-inspired data-driven methods achieve performance comparable to the optimization methods, are highly efficient for high-dimensional signals, and are robust to noise and complicated sensing processes. This reveals the potential of designing data-driven methods, guided by traditional optimization approaches, to robustly address challenging problems in signal processing and machine learning.

    Part 1: Optimization Methods. In this part, we apply optimization methods to several inverse problems in signal processing and machine learning, including signal and support recovery for sparse signals with non-stationary modulation and parameter estimation of damped exponentials. For the sparse signal with non-stationary modulation, we derive a sufficient sample complexity for exact recovery and bound the signal recovery error in the noisy case.

    Part 2: Data-driven Methods. In this part, we apply data-driven methods to several machine learning problems, including recognizing three-dimensional (3D) chess pieces and classifying and clustering inlier correspondences of multiple objects in computer vision. The experimental results demonstrate the efficiency and robustness of data-driven methods against complex noise in high-dimensional real data.

    Part 3: Optimization-inspired Data-driven Methods. In this part, we develop data-driven methods based on optimization techniques. By unfolding the optimization methods and making their parameters trainable, we obtain deep architectures that provide fast approximations of the original optimization approaches and handle signal models whose sensing processes are too complicated to be modeled properly by optimization methods. We also design deep networks that follow the atomic norm optimization process for multiband signal identification and for parameter estimation of contaminated damped exponentials.