Discrete and Continuous Sparse Recovery Methods and Their Applications
Low-dimensional signal processing has drawn an increasing amount of attention in the past decade, because prior information about a low-dimensional space can be exploited to aid in the recovery of the signal of interest. Among the different forms of low dimensionality, in this dissertation we focus on the synthesis and analysis models of sparse recovery. This dissertation comprises two major topics. For the first topic, we discuss the synthesis model of sparse recovery and consider dictionary mismatches in the model. We further introduce a continuous sparse recovery to eliminate the existing off-grid mismatches for DOA estimation. In the second topic, we focus on the analysis model, with an emphasis on efficient algorithms and performance analysis.
In considering the sparse recovery method with structured dictionary mismatches for the synthesis model, we exploit the joint sparsity between the mismatch parameters and original sparse signal. We demonstrate that by exploiting this information, we can obtain a robust reconstruction under mild conditions on the sensing matrix. This model is very useful for
radar and passive array applications. We propose several efficient algorithms to solve the joint sparse recovery problem. Using numerical examples, we demonstrate that our proposed algorithms outperform several methods in the literature. We further extend the mismatch model to a continuous sparse model, using the mathematical theory of super-resolution. Statistical analysis shows the robustness of the proposed algorithm. A number-detection algorithm is also proposed for co-prime arrays. Using numerical examples, we show that continuous sparse recovery further improves DOA estimation accuracy over both the joint sparse method and MUSIC with spatial smoothing.
In the second topic, we visit the corresponding analysis model of sparse recovery. Instead of assuming a sparse decomposition of the original signal, the analysis model focuses on the existence of a linear transformation which can make the original signal sparse. In this work we use a monotone version of the fast iterative shrinkage-thresholding algorithm (MFISTA) to yield efficient algorithms for the sparse recovery problem. We examine two widely used relaxation techniques, namely smoothing and decomposition, to relax the optimization. We show that although these two techniques are equivalent in their objective functions, the smoothing technique converges faster than the decomposition technique. We also compute the performance guarantee for the analysis model when a LASSO-type reconstruction is performed. Using numerical examples, we are able to show that the proposed algorithm is more efficient than other state-of-the-art algorithms.
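The shrinkage-thresholding family of algorithms mentioned above can be sketched in a few lines. The following is an illustrative plain-FISTA sketch for the synthesis LASSO (the dissertation's MFISTA additionally enforces a monotone decrease of the objective, which is not reproduced here); the helper names `soft_threshold` and `fista`, the matrix sizes, and the regularization weight are all assumptions for illustration, not the dissertation's setup.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal mapping of t*||.||_1: elementwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    # Accelerated proximal-gradient (FISTA) for
    #   min_x 0.5*||A x - b||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Illustrative usage: recover a 5-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = fista(A, A @ x_true, lam=0.01)
```

With noiseless measurements and a small regularization weight, the recovered `x_hat` closely matches the true sparse vector, which is the behavior the performance guarantees in the dissertation quantify.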
A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems
We consider a class of nonconvex nonsmooth optimization problems whose
objective is the sum of a smooth function and a finite number of nonnegative
proper closed possibly nonsmooth functions (whose proximal mappings are easy to
compute), some of which are further composed with linear maps. This kind of
problem arises naturally in various applications when different regularizers
are introduced for inducing simultaneous structures in the solutions. Solving
these problems, however, can be challenging because of the coupled nonsmooth
functions: the corresponding proximal mapping can be hard to compute so that
standard first-order methods such as the proximal gradient algorithm cannot be
applied efficiently. In this paper, we propose a successive
difference-of-convex approximation method for solving this kind of problem. In
this algorithm, we approximate the nonsmooth functions by their Moreau
envelopes in each iteration. Making use of the simple observation that Moreau
envelopes of nonnegative proper closed functions are continuous {\em
difference-of-convex} functions, we can then approximately minimize the
approximation function by first-order methods with suitable majorization
techniques. These first-order methods can be implemented efficiently thanks to
the fact that the proximal mapping of {\em each} nonsmooth function is easy to
compute. Under suitable assumptions, we prove that the sequence generated by
our method is bounded and any accumulation point is a stationary point of the
objective. We also discuss how our method can be applied to concrete
applications such as nonconvex fused regularized optimization problems and
simultaneously structured matrix optimization problems, and illustrate the
performance numerically for these two specific applications.
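The key observation in the abstract, that the Moreau envelope of a nonnegative proper closed function is a continuous difference-of-convex function, can be checked numerically in the simplest case f = |.|. This is an illustrative sketch with an assumed smoothing parameter lam = 0.5, not the paper's algorithm: the envelope is evaluated through the proximal mapping (soft-thresholding here), which recovers the Huber function, and the convex remainder g(x) = x^2/(2*lam) - e(x) of the DC decomposition can be verified by checking its second differences.

```python
import numpy as np

lam = 0.5  # assumed smoothing parameter, for illustration only

def prox_abs(x, lam):
    # proximal mapping of lam*|.|: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_env(x, lam):
    # Moreau envelope e(x) = min_y |y| + (x - y)^2 / (2*lam),
    # evaluated at the minimizer p = prox_abs(x, lam)
    p = prox_abs(x, lam)
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)

xs = np.linspace(-3.0, 3.0, 601)
e = moreau_env(xs, lam)

# DC decomposition: e(x) = x^2/(2*lam) - g(x), with g convex
g = xs ** 2 / (2.0 * lam) - e
```

Because `g` is convex, `e` is smooth and DC, so it can be approximately minimized with first-order methods and suitable majorization, exactly the mechanism the abstract describes.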