Sparse Optimization Problem with s-difference Regularization
In this paper, an s-difference type regularization for the sparse recovery problem is proposed, defined as the difference between a normal penalty function $R(x)$ and its corresponding truncated function $R(x_s)$. First, we establish conditions under which the $L_0$-constrained problem and the unconstrained s-difference penalty regularized problem are equivalent. Next, we apply the forward-backward splitting (FBS) method to the nonconvex regularized problem and derive closed-form solutions for the proximal mapping of the s-difference regularization for several commonly used choices of $R(x)$, which makes FBS easy and fast. We also show that every cluster point of the sequence generated by the proposed algorithm is a stationary point. Numerical experiments demonstrate
the efficiency of the proposed s-difference regularization in comparison with
some other existing penalty functions. Comment: 20 pages, 5 figures.
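To make the FBS recipe concrete, here is a minimal sketch (not from the paper) for the special case $R(x) = \|x\|_1$, where the proximal mapping of the s-difference penalty has the simple closed form "keep the s largest-magnitude entries, soft-threshold the rest"; the step size, lam, and iteration count are illustrative choices.

import numpy as np

def prox_s_diff_l1(v, lam, s):
    # Proximal map of lam * (||x||_1 - ||x_s||_1) for R = l1:
    # keep the s largest-magnitude entries, soft-threshold the rest.
    x = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    top = np.argsort(-np.abs(v))[:s]
    x[top] = v[top]          # largest entries pass through unshrunk
    return x

def fbs(A, b, s, lam=0.1, n_iter=500):
    # Forward-backward splitting for
    #   min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x_s||_1)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                       # forward (gradient) step
        x = prox_s_diff_l1(x - grad / L, lam / L, s)   # backward (prox) step
    return x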
On some common compressive sensing recovery algorithms and applications - Review paper
Compressive Sensing, an emerging technique in signal processing, is reviewed in this paper together with its common applications. As an alternative to traditional signal sampling, Compressive Sensing enables a new acquisition strategy with a significantly reduced number of samples needed for accurate signal reconstruction. The basic ideas and motivation behind this
approach are provided in the theoretical part of the paper. The commonly used
algorithms for missing data reconstruction are presented. Compressive Sensing applications have gained significant attention, leading to intensive growth in signal processing possibilities. Hence, some of the existing practical applications involving different types of signals in real-world scenarios are described and analyzed as well. Comment: submitted to Facta Universitatis Scientific Journal, Series:
Electronics and Energetics, March 201
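As a taste of the reconstruction algorithms such reviews cover, here is a minimal sketch (not from the paper) of orthogonal matching pursuit, one of the standard greedy recovery methods; the sparsity level k and problem dimensions are up to the caller.

import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick the column most
    # correlated with the residual, then re-fit on the chosen support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x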
Sparse Generalized Eigenvalue Problem via Smooth Optimization
In this paper, we consider an $\ell_0$-norm penalized formulation of the
generalized eigenvalue problem (GEP), aimed at extracting the leading sparse
generalized eigenvector of a matrix pair. The formulation involves maximization
of a discontinuous nonconcave objective function over a nonconvex constraint
set, and is therefore computationally intractable. To tackle the problem, we
first approximate the $\ell_0$-norm by a continuous surrogate function. Then
an algorithm is developed via iteratively majorizing the surrogate function by
a quadratic separable function, which at each iteration reduces to a regular
generalized eigenvalue problem. A preconditioned steepest ascent algorithm for
finding the leading generalized eigenvector is provided. A systematic way based
on smoothing is proposed to deal with the "singularity issue" that arises when
a quadratic function is used to majorize the nondifferentiable surrogate
function. For sparse GEPs with special structure, algorithms that admit a
closed-form solution at every iteration are derived. Numerical experiments show
that the proposed algorithms match or outperform existing algorithms in terms
of computational complexity and support recovery.
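A minimal sketch of the majorize-minimize idea described above, assuming A symmetric and B positive definite; the log-based surrogate, the smoothing constant eta (which sidesteps the singularity at zero entries), and the penalty weight rho are illustrative stand-ins for the paper's exact choices.

import numpy as np
from scipy.linalg import eigh

def sparse_gev(A, B, rho=1.0, eps=1e-2, eta=1e-8, n_iter=50):
    # MM for  max_x x'Ax - rho * sum_i log(1 + |x_i|/eps)  s.t. x'Bx = 1.
    # Each iteration majorizes the surrogate by a separable quadratic
    # (smoothed to avoid the singularity at x_i = 0), so the subproblem
    # is a regular generalized eigenvalue problem.
    x = eigh(A, B)[1][:, -1]           # leading generalized eigenvector
    for _ in range(n_iter):
        w = rho / (2 * np.sqrt(x**2 + eta) * (eps + np.abs(x)))
        vals, vecs = eigh(A - np.diag(w), B)
        x = vecs[:, -1]                # eigenvector of the largest eigenvalue
    return x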
Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) - The Method
The sparsity of natural signals and images in a transform domain or
dictionary has been extensively exploited in several applications such as
compression, denoising and inverse problems. More recently, data-driven
adaptation of synthesis dictionaries has shown promise in many applications
compared to fixed or analytical dictionary models. However, dictionary learning
problems are typically non-convex and NP-hard, and the usual alternating
minimization approaches for these problems are often computationally expensive,
with the computations dominated by the NP-hard synthesis sparse coding step. In
this work, we investigate an efficient method for $\ell_0$ "norm"-based
dictionary learning by first approximating the training data set with a sum of
sparse rank-one matrices and then using a block coordinate descent approach to
estimate the unknowns. The proposed block coordinate descent algorithm involves
efficient closed-form solutions. In particular, the sparse coding step involves
a simple form of thresholding. We provide a convergence analysis for the
proposed block coordinate descent approach. Our numerical experiments show the
promising performance and significant speed-ups provided by our method over the
classical K-SVD scheme in sparse signal representation and image denoising. Comment: This work is cited by the IEEE Transactions on Computational Imaging paper arXiv:1511.06333 (DOI: 10.1109/TCI.2017.2697206).
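A minimal sketch of the sum-of-outer-products block coordinate descent, assuming an $\ell_0$ "norm" penalty on the coefficients; the threshold level, random initialization, and sweep order are illustrative rather than the paper's exact algorithm.

import numpy as np

def soup_dil(Y, J, lam=0.1, n_iter=20, seed=0):
    # Block coordinate descent for  min ||Y - D C||_F^2 + lam^2 ||C||_0
    # with unit-norm atoms, sweeping over the rank-one terms d_j c_j^T.
    rng = np.random.default_rng(seed)
    n, N = Y.shape
    D = rng.standard_normal((n, J))
    D /= np.linalg.norm(D, axis=0)
    C = np.zeros((J, N))
    for _ in range(n_iter):
        for j in range(J):
            E = Y - D @ C + np.outer(D[:, j], C[j])   # residual without term j
            b = E.T @ D[:, j]
            C[j] = np.where(np.abs(b) > lam, b, 0.0)  # hard-thresholding step
            h = E @ C[j]
            if np.linalg.norm(h) > 0:
                D[:, j] = h / np.linalg.norm(h)       # unit-norm atom update
    return D, C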
A Unified Approach to Sparse Signal Processing
A unified view of sparse signal processing is presented in tutorial form by
bringing together various fields. For each of these fields, various algorithms
and techniques, which have been developed to leverage sparsity, are described
succinctly. The common benefits of significant reduction in sampling rate and
processing manipulations are revealed.
The key applications of sparse signal processing are sampling, coding,
spectral estimation, array processing, component analysis, and multipath
channel estimation. In terms of reconstruction algorithms, linkages are made
with random sampling, compressed sensing and rate of innovation. The redundancy
introduced by channel coding in finite/real Galois fields is then related to
sampling with similar reconstruction algorithms. The methods of Prony,
Pisarenko, and MUSIC are next discussed for sparse frequency domain
representations. Specifically, the relations of the approach of Prony to an
annihilating filter and Error Locator Polynomials in coding are emphasized; the
Pisarenko and MUSIC methods are further improvements of the Prony method. Such
spectral estimation methods are then related to multi-source location and DOA
estimation in array processing. The notions of sparse array beamforming and
sparse sensor networks are also introduced. Sparsity in unobservable source
signals is also shown to facilitate source separation in sparse component analysis (SCA); the algorithms
developed in this area are also widely used in compressed sensing. Finally, the
multipath channel estimation problem is shown to have a sparse formulation;
algorithms similar to those for sampling and coding are used to estimate OFDM channels. Comment: 43 pages, 40 figures, 15 tables.
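Prony's method mentioned above is compact enough to sketch; the following illustrative implementation handles the noiseless case, and the two-tone example is chosen arbitrarily.

import numpy as np

def prony_freqs(x, K):
    # Prony's method: a K-sparse sum of exponentials is annihilated by a
    # length-(K+1) filter; the filter's roots encode the mode frequencies.
    N = len(x)
    # Linear prediction system: x[n] = -sum_m a[m] * x[n-m], for n >= K
    M = np.column_stack([x[K - m : N - m] for m in range(1, K + 1)])
    a, *_ = np.linalg.lstsq(M, -x[K:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.angle(roots)   # recovered angular frequencies

# Example: two tones at 0.5 and 1.3 rad/sample
n = np.arange(64)
sig = np.exp(1j * 0.5 * n) + 0.7 * np.exp(1j * 1.3 * n)
print(np.sort(prony_freqs(sig, 2)))   # ~ [0.5, 1.3]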
Methods for Sparse and Low-Rank Recovery under Simplex Constraints
The de-facto standard approach of promoting sparsity by means of $\ell_1$-regularization becomes ineffective in the presence of simplex
constraints, i.e.,~the target is known to have non-negative entries summing up
to a given constant. The situation is analogous for the use of nuclear norm
regularization for low-rank recovery of Hermitian positive semidefinite
matrices with given trace. In the present paper, we discuss several strategies
to deal with this situation, from simple to more complex. As a starting point,
we consider empirical risk minimization (ERM). It follows from existing theory
that ERM enjoys better theoretical properties w.r.t.~prediction and
$\ell_2$-estimation error than $\ell_1$-regularization. In light of this, we
argue that ERM combined with a subsequent sparsification step like thresholding
is superior to the heuristic of using $\ell_1$-regularization after dropping
the sum constraint and subsequent normalization.
At the next level, we show that any sparsity-promoting regularizer under
simplex constraints cannot be convex. A novel sparsity-promoting regularization
scheme based on the inverse or negative of the squared $\ell_2$-norm is
proposed, which avoids shortcomings of various alternative methods from the
literature. Our approach naturally extends to Hermitian positive semidefinite
matrices with given trace. Numerical studies concerning compressed sensing,
sparse mixture density estimation, portfolio optimization and quantum state
tomography are used to illustrate the key points of the paper.
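A minimal sketch of the negative-squared-$\ell_2$ idea under simplex constraints; the plain projected-gradient solver, step size, and lam are illustrative simplifications of the methods the paper develops.

import numpy as np

def proj_simplex(v, z=1.0):
    # Euclidean projection onto {x >= 0, sum(x) = z} (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - z
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def sparse_simplex_recover(A, y, lam=0.05, n_iter=1000):
    # Projected gradient for  min_x 0.5||Ax - y||^2 - lam*||x||_2^2  on the
    # simplex; the concave -lam||x||^2 term promotes sparsity there.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.full(A.shape[1], 1.0 / A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y) - 2 * lam * x
        x = proj_simplex(x - step * g)
    return x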
Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems
Solving inverse problems with iterative algorithms is popular, especially for
large data. Due to time constraints, the number of possible iterations is
usually limited, potentially affecting the achievable accuracy. Given an error
one is willing to tolerate, an important question is whether it is possible to
modify the original iterations to obtain faster convergence to a minimizer
achieving the allowed error without increasing the computational cost of each
iteration considerably. Relying on recent recovery techniques developed for
settings in which the desired signal belongs to some low-dimensional set, we
show that using a coarse estimate of this set may lead to faster convergence at
the cost of an additional reconstruction error related to the accuracy of the
set approximation. Our theory connects to recent advances in sparse recovery, compressed sensing, and deep learning. In particular, it may provide a possible
explanation to the successful approximation of the l1-minimization solution by
neural networks with layers representing iterations, as practiced in the
learned iterative shrinkage-thresholding algorithm (LISTA). Comment: To appear in IEEE Transactions on Signal Processing.
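For reference, the iteration that LISTA unrolls is ordinary ISTA; in the sketch below (illustrative, not the paper's construction) the two fixed matrices become learned parameters in LISTA, with one network layer per iteration.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    # ISTA for  min_x 0.5||Ax - y||^2 + lam||x||_1.  LISTA replaces the
    # two fixed matrices below (I - W2 A and W1) with learned ones, so
    # each network layer mimics one iteration.
    L = np.linalg.norm(A, 2) ** 2
    W1, W2 = A.T / L, A.T / L            # fixed "weights" of plain ISTA
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - W2 @ (A @ x) + W1 @ y, lam / L)
    return x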
Sparse Channel Reconstruction With Nonconvex Regularizer via DC Programming for Massive MIMO Systems
Sparse channel estimation for massive multiple-input multiple-output systems
has drawn much attention in recent years. The required pilots are substantially
reduced when the sparse channel state vectors can be reconstructed from a small number of measurements. A popular approach for sparse reconstruction is to
solve the least-squares problem with a convex regularization. However, the
convex regularizer is either too loose to enforce sparsity or leads to biased estimation. In this paper, the sparse channel reconstruction problem is solved by
minimizing the least-squares objective with a nonconvex regularizer, which can
exactly express the sparsity constraint and avoid introducing serious bias in
the solution. A novel algorithm is proposed for solving the resulting nonconvex
optimization via the difference of convex functions programming and the
gradient projection descent. Simulation results show that the proposed
algorithm is fast and accurate, and it outperforms the existing sparse recovery
algorithms in terms of reconstruction errors. Comment: 2020 IEEE Global Communications Conference.
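The paper's exact nonconvex regularizer is not reproduced here; as an illustrative stand-in, the well-known l1 - l2 penalty admits the same difference-of-convex treatment, with proximal gradient steps replacing the paper's gradient projection descent.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dc_l1_minus_l2(A, y, lam=0.1, outer=10, inner=100):
    # DC programming for  min_x 0.5||Ax - y||^2 + lam*(||x||_1 - ||x||_2):
    # linearize the concave part -lam||x||_2 at the current iterate, then
    # solve the convex l1 subproblem with proximal gradient steps.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        g = lam * x / np.linalg.norm(x) if np.linalg.norm(x) > 0 else 0.0
        for _ in range(inner):
            grad = A.T @ (A @ x - y) - g   # smooth part incl. linearization
            x = soft(x - grad / L, lam / L)
    return x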
Sparsity Based Methods for Overparameterized Variational Problems
Two complementary approaches have been extensively used in signal and image processing, leading to novel results: the sparse representation methodology and the variational strategy. Recently, a new sparsity-based model has been
proposed, the cosparse analysis framework, which may potentially help in
bridging sparse approximation based methods to the traditional total-variation
minimization. Based on this, we introduce a sparsity based framework for
solving overparameterized variational problems. The latter has been used to
improve the estimation of optical flow and also for general denoising of
signals and images. However, the recovery of the space-varying parameters involved was not adequately addressed by traditional variational methods. We first demonstrate the efficiency of the new framework for one-dimensional signals by recovering piecewise linear and piecewise polynomial functions. Then, we
illustrate how the new technique can be used for denoising and segmentation of
images. Comment: 16 pages, 11 figures.
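The overparameterized construction itself is beyond a short sketch, but the underlying cosparse-analysis idea for the piecewise-linear case can be illustrated with a second-difference analysis operator and a plain ADMM solver; lam, rho, and the iteration count are arbitrary.

import numpy as np

def denoise_piecewise_linear(y, lam=1.0, rho=1.0, n_iter=200):
    # Analysis-sparsity denoising  min_x 0.5||x - y||^2 + lam*||D x||_1
    # with D the second-difference operator, so Dx is sparse exactly when
    # x is piecewise linear. Solved with a plain ADMM splitting Dx = z.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second differences
    z = np.zeros(n - 2)
    u = np.zeros(n - 2)
    Q = np.eye(n) + rho * D.T @ D
    for _ in range(n_iter):
        x = np.linalg.solve(Q, y + rho * D.T @ (z - u))
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z
    return x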
Minimization of Transformed $L_1$ Penalty: Theory, Difference of Convex Function Algorithm, and Robust Application in Compressed Sensing
We study the minimization problem of a non-convex sparsity promoting penalty function, the transformed $l_1$ (TL1), and its application in compressed sensing (CS). The TL1 penalty interpolates the $l_0$ and $l_1$ norms through a nonnegative parameter $a \in (0, +\infty)$, similar to $l_p$ with $p \in (0, 1]$, and is known to satisfy unbiasedness, sparsity and Lipschitz continuity properties. We first consider the constrained minimization problem and discuss the exact recovery of the $l_0$ norm minimal solution based on the null space property (NSP). We then prove the stable recovery of the $l_0$ norm minimal solution if the sensing matrix $A$ satisfies a restricted isometry property (RIP). Next, we present difference of convex algorithms for TL1 (DCATL1) for computing TL1-regularized constrained and unconstrained problems in CS. The inner loop concerns an $l_1$ minimization problem on which we employ the
Alternating Direction Method of Multipliers (ADMM). For the unconstrained
problem, we prove convergence of DCATL1 to a stationary point satisfying the
first order optimality condition. In numerical experiments, we identify the optimal value of the parameter $a$, and compare DCATL1 with other CS algorithms on two classes
of sensing matrices: Gaussian random matrices and over-sampled discrete cosine
transform matrices (DCT). We find that for both classes of sensing matrices,
the performance of the DCATL1 algorithm (initiated with $l_1$ minimization) always ranks near the top (if not the top), and is the most robust choice, insensitive to the conditioning of the sensing matrix $A$. DCATL1 is also competitive in
comparison with DCA on other non-convex penalty functions commonly used in
statistics with two hyperparameters. Comment: to appear in Mathematical Programming, Series
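A minimal sketch of the DCA outer loop for TL1, using one valid DC split, TL1(x) = (a+1)/a * ||x||_1 minus a convex, smooth remainder; proximal gradient stands in for the paper's ADMM inner solver, and a, lam are illustrative.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dcatl1(A, y, a=1.0, lam=0.1, outer=15, inner=100):
    # DCA for  min_x 0.5||Ax - y||^2 + lam*TL1(x),  where
    # TL1(x) = sum_i (a+1)|x_i| / (a + |x_i|).
    # Split TL1 = (a+1)/a * ||x||_1 - g(x) with g convex and smooth;
    # linearize g at x^k, so the subproblem is l1-regularized least squares.
    c = (a + 1.0) / a
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        grad_g = lam * np.sign(x) * (c - (a + 1) * a / (a + np.abs(x)) ** 2)
        for _ in range(inner):
            grad = A.T @ (A @ x - y) - grad_g
            x = soft(x - grad / L, lam * c / L)
    return x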