Sparse Regularization via Convex Analysis
Sparse approximate solutions to linear equations are classically obtained via
L1 norm regularized least squares, but this method often underestimates the
true solution. As an alternative to the L1 norm, this paper proposes a class of
non-convex penalty functions that maintain the convexity of the least squares
cost function to be minimized and avoid the systematic underestimation
characteristic of L1 norm regularization. The proposed penalty function is a
multivariate generalization of the minimax-concave (MC) penalty. It is defined
in terms of a new multivariate generalization of the Huber function, which in
turn is defined via infimal convolution. The proposed sparse-regularized least
squares cost function can be minimized by proximal algorithms comprising simple
computations.
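
For concreteness, the scalar prototype of this construction can be written down; the following display is a sketch based on the abstract's description, with s, \phi, S_B, \psi_B, and the matrix B chosen as illustrative notation rather than taken from the paper. The Huber function arises as an infimal convolution, and the MC penalty is its residual against the absolute value:

    s(x) = \inf_v \{ |v| + \tfrac{1}{2}(x - v)^2 \}, \qquad \phi(x) = |x| - s(x).

The multivariate generalization described above replaces the quadratic by a matrix-weighted one,

    S_B(x) = \inf_v \{ \|v\|_1 + \tfrac{1}{2}\|B(x - v)\|_2^2 \}, \qquad \psi_B(x) = \|x\|_1 - S_B(x),

and one expects a compatibility condition between B and the data matrix (roughly, B^T B \preceq \tfrac{1}{\lambda} A^T A for the cost \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda \psi_B(x)) to keep the overall least squares cost convex, as the abstract claims.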
Minimization of Transformed L1 Penalty: Theory, Difference of Convex Function Algorithm, and Robust Application in Compressed Sensing
We study the minimization problem of a non-convex sparsity promoting penalty
function, the transformed L1 (TL1), and its application in compressed
sensing (CS). The TL1 penalty interpolates the L0 and L1 norms through a
nonnegative parameter a ∈ (0, +∞), similar to Lp with p ∈ (0, 1],
and is known to satisfy unbiasedness, sparsity and Lipschitz continuity
properties. We first consider the constrained minimization problem and discuss
the exact recovery of the L0 norm minimal solution based on the null space
property (NSP). We then prove the stable recovery of the L0 norm minimal
solution if the sensing matrix A satisfies a restricted isometry property
(RIP). Next, we present difference of convex algorithms for TL1 (DCATL1) for
computing TL1-regularized constrained and unconstrained problems in CS. The
inner loop concerns an L1 minimization problem, on which we employ the
Alternating Direction Method of Multipliers (ADMM). For the unconstrained
problem, we prove convergence of DCATL1 to a stationary point satisfying the
first order optimality condition. In numerical experiments, we identify the
optimal value a = 1, and compare DCATL1 with other CS algorithms on two classes
of sensing matrices: Gaussian random matrices and over-sampled discrete cosine
transform (DCT) matrices. We find that for both classes of sensing matrices,
the performance of the DCATL1 algorithm (initiated with L1 minimization) always
ranks near the top (if not the top), and is the most robust choice, insensitive
to the conditioning of the sensing matrix A. DCATL1 is also competitive in
comparison with DCA on other non-convex penalty functions commonly used in
statistics with two hyperparameters.
Comment: to appear in Mathematical Programming, Series A
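
For reference, the TL1 penalty discussed above has a simple closed form; the display below is a sketch in our own notation (\rho_a and P_a are illustrative symbols, not necessarily the paper's):

    \rho_a(t) = \frac{(a + 1)\,|t|}{a + |t|}, \quad a > 0, \qquad P_a(x) = \sum_i \rho_a(x_i).

As a → 0+, \rho_a(t) tends to the indicator of t ≠ 0 (an L0-style count), while as a → +∞ it tends to |t| (the L1 norm), which is the interpolation the abstract refers to. A DC splitting of the kind DCATL1 relies on writes P_a as a difference of two convex functions, for example

    P_a(x) = \tfrac{a+1}{a}\|x\|_1 - \big( \tfrac{a+1}{a}\|x\|_1 - P_a(x) \big),

so that linearizing the concave part at each outer iteration leaves a convex, L1-regularized inner problem of the type handled by ADMM.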