
    Sparse Regularization via Convex Analysis

    Sparse approximate solutions to linear equations are classically obtained via L1 norm regularized least squares, but this method often underestimates the true solution. As an alternative to the L1 norm, this paper proposes a class of non-convex penalty functions that maintain the convexity of the least squares cost function to be minimized, and avoid the systematic underestimation characteristic of L1 norm regularization. The proposed penalty function is a multivariate generalization of the minimax-concave (MC) penalty. It is defined in terms of a new multivariate generalization of the Huber function, which in turn is defined via infimal convolution. The proposed sparse-regularized least squares cost function can be minimized by proximal algorithms comprising simple computations.
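
    As a concrete illustration (not code from the paper), the scalar case of these building blocks can be sketched in a few lines of Python: the Huber function, its equivalent definition as an infimal convolution of the absolute value with a quadratic, and the MC penalty as the gap between |x| and its Huber smoothing. The function names and the unit-parameter normalization are illustrative choices.

```python
# Minimal sketch of the scalar building blocks behind the paper's
# multivariate construction (unit-parameter normalization assumed).
import numpy as np

def huber(x):
    """Scalar Huber function: x**2/2 near zero, linear growth |x| - 1/2 outside."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1.0, 0.5 * x**2, np.abs(x) - 0.5)

def huber_via_inf_conv(x, grid=np.linspace(-10, 10, 20001)):
    """Huber as the infimal convolution of |.| with a quadratic:
    huber(x) = inf_v { |v| + (x - v)**2 / 2 }, approximated on a dense grid."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.min(np.abs(grid)[None, :] + 0.5 * (x[:, None] - grid[None, :])**2,
                  axis=1)

def mc_penalty(x):
    """Minimax-concave (MC) penalty: the gap between |x| and its Huber smoothing."""
    return np.abs(x) - huber(x)

x = np.linspace(-3, 3, 7)
print(np.max(np.abs(huber(x) - huber_via_inf_conv(x))))  # ~0 up to grid resolution
print(mc_penalty(x))  # saturates at 1/2 for |x| >= 1, hence less bias on large x
```

    The saturation of the MC penalty on large entries is what avoids the systematic underestimation the abstract attributes to L1 regularization, while the Huber-based construction is what lets the paper keep the overall least squares cost convex.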

    Minimization of Transformed $L_1$ Penalty: Theory, Difference of Convex Function Algorithm, and Robust Application in Compressed Sensing

    We study the minimization problem of a non-convex sparsity promoting penalty function, the transformed $l_1$ (TL1), and its application in compressed sensing (CS). The TL1 penalty interpolates the $l_0$ and $l_1$ norms through a nonnegative parameter $a \in (0,+\infty)$, similar to $l_p$ with $p \in (0,1]$, and is known to satisfy unbiasedness, sparsity and Lipschitz continuity properties. We first consider the constrained minimization problem and discuss the exact recovery of the $l_0$ norm minimal solution based on the null space property (NSP). We then prove the stable recovery of the $l_0$ norm minimal solution if the sensing matrix $A$ satisfies a restricted isometry property (RIP). Next, we present difference of convex algorithms for TL1 (DCATL1) for computing TL1-regularized constrained and unconstrained problems in CS. The inner loop concerns an $l_1$ minimization problem, which we solve via the Alternating Direction Method of Multipliers (ADMM). For the unconstrained problem, we prove convergence of DCATL1 to a stationary point satisfying the first order optimality condition. In numerical experiments, we identify the optimal value $a = 1$ and compare DCATL1 with other CS algorithms on two classes of sensing matrices: Gaussian random matrices and over-sampled discrete cosine transform (DCT) matrices. We find that for both classes of sensing matrices, the performance of the DCATL1 algorithm (initialized with $l_1$ minimization) always ranks near the top (if not at the top), and it is the most robust choice, insensitive to the conditioning of the sensing matrix $A$. DCATL1 is also competitive with DCA applied to other non-convex penalty functions commonly used in statistics with two hyperparameters. (To appear in Mathematical Programming, Series B.)
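
    For reference, the TL1 penalty studied here has a simple closed form applied coordinate-wise, $\rho_a(t) = (a+1)|t| / (a+|t|)$. A minimal Python sketch (the function name is ours) shows the interpolation claimed in the abstract: as $a \to 0^+$ each nonzero coordinate contributes roughly 1, recovering the $l_0$ count, and as $a \to +\infty$ the penalty approaches the $l_1$ norm.

```python
# Minimal sketch of the transformed l1 (TL1) penalty and its limiting behavior.
import numpy as np

def tl1(x, a=1.0):
    """TL1 penalty with parameter a > 0: sum_i (a+1)|x_i| / (a + |x_i|)."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.sum((a + 1.0) * x / (a + x))

x = np.array([0.0, 0.5, -2.0, 4.0])
for a in (1e-4, 1.0, 1e4):
    print(a, tl1(x, a))
# a -> 0+:  each nonzero entry contributes ~1, so tl1 -> ||x||_0 (= 3 here)
# a -> inf: (a+1)/(a+|x_i|) -> 1,  so tl1 -> ||x||_1 (= 6.5 here)
```

    DCATL1 then exploits a difference-of-convex splitting of this penalty: at each outer step the concave part is linearized, leaving an $l_1$-type subproblem of the kind the abstract's ADMM inner loop solves.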