
    Learning Model-Based Sparsity via Projected Gradient Descent

    Several convex formulation methods have been proposed for statistical estimation with structured sparsity as the prior. These methods require a carefully tuned regularization parameter, which can be a cumbersome or heuristic exercise. Furthermore, the estimate they produce might not belong to the desired sparsity model, even if it accurately approximates the true parameter. Greedy-type algorithms can therefore be more desirable for estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. If the cost function has a Stable Model-Restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
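    The abstract above describes projected gradient descent onto a non-convex sparsity model. A minimal sketch of the plain s-sparse case (projection by hard thresholding, i.e., keeping the s largest-magnitude entries) illustrates the iteration; the least-squares cost, step size, and problem sizes here are illustrative assumptions, not the paper's setting, which covers more general structured models and cost functions.

    ```python
    import numpy as np

    def project_sparse(x, s):
        """Project x onto the set of s-sparse vectors: keep the s largest magnitudes."""
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]  # indices of the s largest |x_i|
        z[idx] = x[idx]
        return z

    def projected_gradient_descent(grad, x0, s, step, iters=500):
        """Gradient step followed by projection onto the sparsity model."""
        x = x0
        for _ in range(iters):
            x = project_sparse(x - step * grad(x), s)
        return x

    # Illustrative example: sparse least squares, f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
    b = A @ x_true
    grad = lambda x: A.T @ (A @ x - b)
    # Step 1/L with L the Lipschitz constant of the gradient (spectral norm squared)
    x_hat = projected_gradient_descent(grad, np.zeros(20), s=3,
                                       step=1.0 / np.linalg.norm(A, 2) ** 2)
    ```

    With a noiseless, well-conditioned random design like this one, the iteration recovers the true sparse parameter to high accuracy.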

    Parallel Selective Algorithms for Big Data Optimization

    We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between" with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms. Comment: This work is an extended version of the conference paper presented at IEEE ICASSP'14. The first and second authors contributed equally to the paper. This revised version contains new numerical results on nonconvex quadratic problems.
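    To make the "fully parallel Jacobi" end of the spectrum concrete for the LASSO case mentioned above, here is a minimal sketch in which every coordinate takes a proximal-gradient (soft-thresholding) step simultaneously from the same iterate. This is a generic Jacobi-style scheme under simple assumptions, not the paper's specific algorithm or its selective variable-selection rules.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (the nonsmooth, separable term)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def jacobi_lasso(A, b, lam, iters=500):
        """Fully parallel (Jacobi) proximal-gradient updates for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
        All coordinates are updated simultaneously from the current iterate."""
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            # every block/coordinate uses the same x, so the update parallelizes
            x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
        return x

    # Illustrative data (assumed sizes, not from the paper)
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 30))
    b = rng.standard_normal(50)
    x_hat = jacobi_lasso(A, b, lam=0.1)
    ```

    A Gauss-Seidel variant would instead sweep through blocks sequentially, reusing the freshest values; the framework in the abstract interpolates between these two extremes.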

    Weighted Thresholding and Nonlinear Approximation

    We present a new method for performing nonlinear approximation with redundant dictionaries. The method constructs an m-term approximation of the signal by thresholding with respect to a weighted version of its canonical expansion coefficients, thereby accounting for dependency between the coefficients. The main result is an associated strong Jackson embedding, which provides an upper bound on the corresponding reconstruction error. To complement the theoretical results, we compare the proposed method to the pure greedy method and the Windowed-Group Lasso by denoising music signals with elements from a Gabor dictionary. Comment: 22 pages, 3 figures.
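    The core idea above — ranking coefficients by a weighted rather than raw magnitude so that neighboring coefficients influence each other — can be sketched as follows. The sliding-window energy weight used here is an illustrative assumption standing in for the paper's weighting scheme, which is not specified in the abstract.

    ```python
    import numpy as np

    def weighted_m_term(coeffs, m, neighborhood=1):
        """Keep the m coefficients with the largest *weighted* magnitude.
        Each weight aggregates the energy of neighboring coefficients
        (a simple dependency model; the actual weighting is an assumption)."""
        mag = np.abs(coeffs)
        kernel = np.ones(2 * neighborhood + 1)
        # local energy in a window around each coefficient
        weights = np.sqrt(np.convolve(mag ** 2, kernel, mode="same"))
        keep = np.argsort(mag * weights)[-m:]  # top-m by weighted magnitude
        out = np.zeros_like(coeffs)
        out[keep] = coeffs[keep]
        return out

    # Illustrative coefficient sequence: one large isolated peak plus a cluster
    coeffs = np.array([0.0, 5.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
    approx = weighted_m_term(coeffs, m=3)
    ```

    Compared with plain hard thresholding, the weighting boosts coefficients that sit inside energetic neighborhoods, which is the kind of dependency structure the Windowed-Group Lasso comparison in the abstract also targets.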