
    Dual-density-based reweighted ℓ1-algorithms for a class of ℓ0-minimization problems

    Optimization problems with sparsity arise in many areas of science and engineering, such as compressed sensing, image processing, statistical learning, and sparse data approximation. In this paper, we study dual-density-based reweighted ℓ1-algorithms for a class of ℓ0-minimization models that can capture a wide range of practical problems. This class of algorithms is based on certain convex relaxations of a reformulation of the underlying ℓ0-minimization model. The reformulation is a special bilevel optimization problem which, in theory, is equivalent to the underlying ℓ0-minimization problem under the assumption of strict complementarity. Some basic properties of these algorithms are discussed, and numerical experiments are carried out to demonstrate their efficiency. A comparison of the numerical performance of the proposed methods with that of the classic reweighted ℓ1-algorithms is also provided.
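    For reference, the classic reweighted ℓ1 scheme that the abstract compares against can be sketched as follows. This is a minimal sketch of the well-known iteratively reweighted ℓ1 baseline, not the dual-density-based algorithm proposed in the paper; the solver (cvxpy), the constraint form Ax = b, and the parameters `eps` and `iters` are illustrative assumptions.

```python
# Minimal sketch of the classic reweighted l1 baseline (the comparison method
# mentioned in the abstract), NOT the paper's dual-density-based algorithm.
# A, b, eps, and the iteration count are illustrative choices.
import numpy as np
import cvxpy as cp

def reweighted_l1(A, b, iters=5, eps=1e-3):
    m, n = A.shape
    w = np.ones(n)                      # first pass is plain l1-minimization
    x_val = np.zeros(n)
    for _ in range(iters):
        x = cp.Variable(n)
        # weighted l1 objective subject to the linear measurements A x = b
        prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                          [A @ x == b])
        prob.solve()
        x_val = x.value
        # reweight: small entries get large weights, pushing them toward zero
        w = 1.0 / (np.abs(x_val) + eps)
    return x_val
```

    In contrast, the dual-density-based algorithms studied in the paper derive the weight from convex relaxations of a bilevel reformulation rather than from the local magnitudes of the current iterate.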

    Constructing new weighted ℓ1-algorithms for the sparsest points of polyhedral sets

    The ℓ0-minimization problem that seeks the sparsest point of a polyhedral set is a long-standing, challenging problem in the fields of signal and image processing, numerical linear algebra, and mathematical optimization. The weighted ℓ1-method is one of the most plausible methods for solving this problem. In this paper we develop a new weighted ℓ1-method through the strict complementarity theory of linear programs. More specifically, we show that locating the sparsest point of a polyhedral set can be achieved by seeking the densest possible slack variable of the dual problem of weighted ℓ1-minimization. As a result, ℓ0-minimization can, in theory, be transformed into ℓ0-maximization in dual space through a suitable weight. This theoretical result provides a basis and an incentive to develop a new weighted ℓ1-algorithm that is remarkably distinct from existing sparsity-seeking methods. The weight used in our algorithm is computed via a certain convex optimization rather than being determined locally at an iterate. The guaranteed performance of this algorithm is shown under some conditions, and its numerical performance is demonstrated by empirical simulations.
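    To illustrate the primal-dual objects the abstract refers to, the following is a minimal sketch of weighted ℓ1-minimization over an assumed polyhedral set of the standard form {x : Ax = b, x ≥ 0}, reading off the dual slack via LP duality. The weight `w` is treated as a given input here; in the paper it is computed through a certain convex optimization, which this sketch does not reproduce.

```python
# Sketch of the primal-dual pair behind weighted l1-minimization over an
# assumed polyhedral set {x : A x = b, x >= 0}. The weight w is a given input
# in this sketch; the paper obtains it from a separate convex optimization.
import numpy as np
import cvxpy as cp

def weighted_l1_with_dual_slack(A, b, w):
    n = A.shape[1]
    x = cp.Variable(n)
    constraints = [A @ x == b, x >= 0]
    # on the nonnegative orthant, weighted l1-minimization is the LP: min w^T x
    prob = cp.Problem(cp.Minimize(w @ x), constraints)
    prob.solve()
    # multiplier of x >= 0 is the dual slack vector; by complementary
    # slackness, slack_i * x_i = 0 at optimality, so a dense slack
    # corresponds to a sparse primal solution
    slack = constraints[1].dual_value
    return x.value, slack
```

    The comment on complementary slackness is the standard LP fact the abstract builds on: the denser the dual slack, the sparser the primal point it certifies.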