
    Iterative Log Thresholding

    Sparse reconstruction approaches using the re-weighted l1-penalty have been shown, both empirically and theoretically, to provide a significant improvement in recovering sparse signals in comparison to the l1-relaxation. However, numerical optimization of such penalties involves solving problems with l1-norms in the objective many times. Using the direct link between re-weighted l1-penalties and the concave log-regularizer for sparsity, we derive a simple prox-like algorithm for the log-regularized formulation. The proximal splitting step of the algorithm has a closed-form solution, and we call the algorithm 'log-thresholding' in analogy to soft thresholding for the l1-penalty. We establish convergence results and demonstrate that log-thresholding provides more accurate sparse reconstructions than both soft and hard thresholding. Furthermore, the approach extends directly to optimization over matrices with a rank penalty (i.e., the nuclear norm penalty and its re-weighted version), where we suggest a singular-value log-thresholding approach.
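
    To make the closed-form proximal step concrete, the sketch below contrasts ordinary soft thresholding with a log-thresholding operator for the elementwise penalty lam * log(|x| + eps). The parameterization of the log penalty and the tie-breaking rule against x = 0 are illustrative assumptions, not the exact operator from the paper.

        import numpy as np

        def soft_threshold(y, lam):
            """Soft thresholding: proximal operator of lam * |x|."""
            return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

        def log_threshold(y, lam, eps):
            """Prox-like operator for the penalty lam * log(|x| + eps), elementwise.

            For each entry, the stationary point of
            0.5*(x - y)^2 + lam*log(|x| + eps) on the half-line sign(y)*[0, inf)
            solves a quadratic; it is kept only if it beats x = 0 in objective value.
            """
            a = np.abs(y)
            disc = (a + eps) ** 2 - 4.0 * lam       # discriminant of the stationarity quadratic
            has_root = disc >= 0
            x = np.zeros_like(a, dtype=float)
            x[has_root] = 0.5 * ((a[has_root] - eps) + np.sqrt(disc[has_root]))
            x = np.maximum(x, 0.0)
            f_x = 0.5 * (x - a) ** 2 + lam * np.log(x + eps)   # objective at the candidate
            f_0 = 0.5 * a ** 2 + lam * np.log(eps)             # objective at zero
            return np.sign(y) * np.where(has_root & (f_x < f_0), x, 0.0)

        # large entries are shrunk less by log-thresholding than by soft thresholding
        y = np.array([-3.0, -0.4, 0.1, 0.8, 2.5])
        print(soft_threshold(y, lam=0.5))
        print(log_threshold(y, lam=0.5, eps=0.1))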

    Sparsity Enhanced Decision Feedback Equalization

    For single-carrier systems with frequency-domain equalization, decision feedback equalization (DFE) performs better than linear equalization and has much lower computational complexity than maximum-likelihood sequence detection. The main challenge in DFE is the feedback symbol selection rule. In this paper, we give a theoretical framework for a simple, sparsity-based thresholding algorithm. We feed back multiple symbols in each iteration, so the algorithm converges quickly and has a low computational cost. We show how the initial solution can be obtained via convex relaxation instead of linear equalization, and illustrate the impact that the choice of the initial solution has on the bit error rate performance of our algorithm. The algorithm is applicable in several existing wireless communication systems (SC-FDMA, MC-CDMA, MIMO-OFDM). Numerical results show a significant improvement in bit error rate compared to the MMSE solution.
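
    The abstract does not spell out the feedback selection rule, so the following is only a hypothetical sketch of the general idea: starting from a soft estimate, all symbols whose estimates fall within a threshold of a constellation point are sliced and fed back in one iteration, their contribution is cancelled, and the remaining symbols are re-estimated. The distance-based reliability test and the least-squares re-estimation are illustrative choices, not the paper's algorithm.

        import numpy as np

        def threshold_dfe(y, H, constellation, threshold, max_iter=10):
            """Hypothetical multi-symbol decision feedback via thresholding.

            y: received vector, H: effective channel matrix,
            constellation: 1-D array of candidate symbols.
            Symbols whose soft estimates lie within `threshold` of a constellation
            point are sliced and fed back; their contribution is cancelled and the
            rest are re-estimated by least squares.
            """
            n = H.shape[1]
            decided = np.zeros(n, dtype=complex)
            undecided = np.ones(n, dtype=bool)
            for _ in range(max_iter):
                if not undecided.any():
                    break
                # cancel the symbols decided so far
                residual = y - H[:, ~undecided] @ decided[~undecided]
                x_soft, *_ = np.linalg.lstsq(H[:, undecided], residual, rcond=None)
                # nearest constellation point and its distance, per undecided symbol
                dist = np.abs(x_soft[:, None] - constellation[None, :])
                nearest = constellation[dist.argmin(axis=1)]
                reliable = dist.min(axis=1) <= threshold
                if not reliable.any():
                    reliable[dist.min(axis=1).argmin()] = True   # force progress
                idx = np.flatnonzero(undecided)
                decided[idx[reliable]] = nearest[reliable]
                undecided[idx[reliable]] = False
            return decided

        # toy QPSK example with a random 32x32 channel
        rng = np.random.default_rng(0)
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        x = qpsk[rng.integers(0, 4, size=32)]
        H = (rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))) / np.sqrt(32)
        y = H @ x + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
        x_hat = threshold_dfe(y, H, qpsk, threshold=0.3)
        print("symbol errors:", np.count_nonzero(x_hat != x))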

    A Singular Value Thresholding Algorithm for Matrix Completion

    This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both facts allow the algorithm to use minimal storage and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large-scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
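
    The core iteration is easy to state: soft-threshold the singular values of an auxiliary matrix Y^k to obtain X^k, then take a step on the observed entries only, so Y^k stays sparse. Below is a minimal sketch of this shrink-and-step loop; the stopping rule is a simple relative-residual test, and the parameter scalings in the toy example (threshold on the order of 5*sqrt(n1*n2), step size about 1.2 divided by the sampling fraction) roughly follow those suggested for SVT but are not tuned here.

        import numpy as np

        def svt_complete(M, mask, tau, delta, max_iter=500, tol=1e-4):
            """Sketch of singular value thresholding (SVT) for matrix completion.

            M:     matrix holding the observed entries (values off `mask` are ignored)
            mask:  boolean array, True where an entry of M is observed
            tau:   threshold applied to the singular values
            delta: step size for the update on the observed entries
            """
            Y = np.zeros_like(M, dtype=float)
            norm_obs = np.linalg.norm(M[mask])
            X = Y
            for _ in range(max_iter):
                # shrink: soft-threshold the singular values of Y
                U, s, Vt = np.linalg.svd(Y, full_matrices=False)
                X = (U * np.maximum(s - tau, 0.0)) @ Vt
                # step only on the observed entries, so Y remains sparse in practice
                residual = np.where(mask, M - X, 0.0)
                Y = Y + delta * residual
                if np.linalg.norm(residual[mask]) <= tol * norm_obs:
                    break
            return X

        # toy example: recover a 60x60 rank-2 matrix from ~40% of its entries
        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
        mask = rng.random(A.shape) < 0.4
        tau = 5.0 * np.sqrt(A.size)    # ~5 * sqrt(n1 * n2)
        delta = 1.2 / 0.4              # ~1.2 / sampling fraction
        X_hat = svt_complete(np.where(mask, A, 0.0), mask, tau, delta, max_iter=1000)
        print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))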

    Wavelet-based denoising by customized thresholding

    The problem of estimating a signal corrupted by additive noise has long been of interest for both practical and theoretical reasons. Many traditional denoising methods rely on linear techniques such as Wiener filtering. Recently, nonlinear methods, especially those based on wavelets, have become increasingly popular due to a number of advantages over linear methods. It has been shown that wavelet thresholding has near-optimal properties in the minimax sense and guarantees a better rate of convergence, despite its simplicity. Although much work has been done on wavelet thresholding, most of it has focused on statistical modeling of the wavelet coefficients and on the optimal choice of the thresholds. In this paper, we propose a customized thresholding function that can significantly improve the denoising results. Simulation results demonstrate the advantage of the new thresholding function.
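
    The customized thresholding function itself is not reproduced in the abstract, so the sketch below only shows the standard wavelet-thresholding baseline it builds on: decompose the signal, estimate the noise level from the finest-scale coefficients, apply soft (or hard) thresholding with the universal threshold sigma * sqrt(2 log n), and reconstruct. It uses the PyWavelets package; the wavelet, decomposition level, and test signal are arbitrary illustrative choices.

        import numpy as np
        import pywt

        def wavelet_denoise(x, wavelet="db4", level=4, mode="soft"):
            """Baseline wavelet-thresholding denoiser (soft or hard).

            The noise level is estimated from the finest-scale detail coefficients
            via the median absolute deviation, and the universal threshold
            sigma * sqrt(2 * log(n)) is applied to all detail coefficients.
            """
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise estimate
            thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
            denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode=mode)
                                      for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(x)]

        # noisy piecewise-smooth test signal
        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 1024)
        clean = np.sin(8 * np.pi * t) * (t > 0.3)
        noisy = clean + 0.2 * rng.standard_normal(t.size)
        print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))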