2,037 research outputs found

    A New Computational Method for the Sparsest Solutions to Systems of Linear Equations

    RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems

    Recently, worst-case analysis, probabilistic analysis and empirical justification have been employed to address the fundamental question: when does $\ell_1$-minimization find the sparsest solution to an underdetermined linear system? In this paper, a deterministic analysis, rooted in classic linear programming theory, is carried out to further address this question. We first identify a necessary and sufficient condition for the uniqueness of least $\ell_1$-norm solutions to linear systems. From this condition, we deduce that a sparsest solution coincides with the unique least $\ell_1$-norm solution to a linear system if and only if the so-called \emph{range space property} (RSP) holds at this solution. This yields a broad understanding of the relationship between the $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates that the RSP truly lies at the heart of the relationship between these two problems. Through RSP-based analysis, several important questions in this field can be largely addressed, for instance: how to interpret, by a deterministic analysis, the gap between the current theory and the actual numerical performance of $\ell_1$-minimization; and, if a linear system has multiple sparsest solutions, when is $\ell_1$-minimization guaranteed to find one of them? Moreover, new matrix properties (such as the \emph{RSP of order $K$} and the \emph{Weak-RSP of order $K$}) are introduced in this paper, and a new theory for sparse signal recovery based on the RSP of order $K$ is established.
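
    As a rough illustration of the linear-programming roots mentioned above (a generic sketch, not code from the paper), the least $\ell_1$-norm solution of an underdetermined system Ax = b can be computed with the classic split x = u - v, u, v >= 0, which turns $\ell_1$-minimization into a linear program. The function name and toy data below are illustrative choices.

        # Sketch: least l1-norm solution of Ax = b via linear programming.
        import numpy as np
        from scipy.optimize import linprog

        def least_l1_solution(A, b):
            """min ||x||_1 s.t. Ax = b, using the split x = u - v with u, v >= 0."""
            m, n = A.shape
            c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
            A_eq = np.hstack([A, -A])          # A u - A v = b
            res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
            u, v = res.x[:n], res.x[n:]
            return u - v

        # Toy underdetermined system with a 1-sparse solution.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((3, 6))
        x_true = np.zeros(6); x_true[2] = 1.0
        print(np.round(least_l1_solution(A, A @ x_true), 4))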

    Finding sparse solutions of systems of polynomial equations via group-sparsity optimization

    The paper deals with the problem of finding sparse solutions to systems of polynomial equations possibly perturbed by noise. In particular, we show how these solutions can be recovered from group-sparse solutions of a derived system of linear equations. Two approaches are then considered to find these group-sparse solutions. The first is based on a convex relaxation resulting in a second-order cone programming formulation, which can benefit from efficient reweighting techniques for sparsity enhancement; a sketch of this relaxation is given below. For this approach, sufficient conditions for exact recovery of the sparsest solution to the polynomial system are derived in the noiseless setting, while stable recovery results are obtained for the noisy case. Though lacking a similar analysis, the second approach provides a more computationally efficient algorithm based on a greedy strategy that adds the groups one by one. With respect to previous work, the proposed methods recover the sparsest solution in a very short computing time while remaining at least as accurate in terms of the probability of success. This probability is empirically analyzed to emphasize the relationship between the methods' ability to solve the polynomial system and the sparsity of the solution.
    Comment: Journal of Global Optimization (2014), to appear.
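
    The convex relaxation in the first approach is a second-order cone program over group norms. A minimal sketch of that final step only, assuming the polynomial system has already been lifted to a linear system M z = y in a vector z of monomials partitioned into one group per original variable (the lifting itself is not reproduced here); the function name, solver choice and toy data are illustrative, not the authors' code.

        # Sketch: group-sparse solution of the lifted linear system M z = y,
        # minimising the sum of group l2 norms (a second-order cone program).
        import cvxpy as cp
        import numpy as np

        def group_sparse_solution(M, y, groups):
            """groups: list of index arrays partitioning the columns of M."""
            z = cp.Variable(M.shape[1])
            objective = cp.Minimize(sum(cp.norm(z[g], 2) for g in groups))
            cp.Problem(objective, [M @ z == y]).solve()
            return z.value

        # Toy data: 4 equations, 6 lifted unknowns in 3 groups; only group 1 is active.
        rng = np.random.default_rng(1)
        M = rng.standard_normal((4, 6))
        z_true = np.zeros(6); z_true[2:4] = [1.0, -0.5]
        groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
        print(np.round(group_sparse_solution(M, M @ z_true, groups), 3))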

    A fast approach for overcomplete sparse decomposition based on smoothed L0 norm

    In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the L0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
    Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes, see (http://ee.sharif.ir/~SLzero). File replaced because Fig. 5 was erroneously missing.
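
    A minimal sketch of the smoothed-L0 idea, with illustrative parameter choices rather than the authors' tuned values: replace the L0 norm by a smooth surrogate n - sum_i exp(-x_i^2 / (2 sigma^2)), take a few gradient steps on it, project back onto the affine set {x : Ax = b}, and gradually shrink sigma.

        # Sketch: smoothed-L0 (SL0-style) recovery with illustrative parameters.
        import numpy as np

        def sl0(A, b, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
            A_pinv = np.linalg.pinv(A)
            x = A_pinv @ b                                            # minimum l2-norm starting point
            sigma = 2.0 * np.max(np.abs(x))
            while sigma > sigma_min:
                for _ in range(inner_iters):
                    x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))   # step on the smooth surrogate
                    x = x - A_pinv @ (A @ x - b)                      # project back onto {x : Ax = b}
                sigma *= sigma_decrease
            return x

        # Toy underdetermined system with a 2-sparse solution.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((10, 30))
        x_true = np.zeros(30); x_true[[4, 17]] = [1.0, -2.0]
        print(np.round(sl0(A, A @ x_true), 3))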

    An efficient null space inexact Newton method for hydraulic simulation of water distribution networks

    Null space Newton algorithms are efficient in solving the nonlinear equations arising in hydraulic analysis of water distribution networks. In this article, we propose and evaluate an inexact Newton method that relies on partial updates of the network pipes' frictional headloss computations to solve the linear systems more efficiently and with numerical reliability. The update set parameters are studied to propose appropriate values. Different null space basis generation schemes are analysed to choose methods that yield sparse and well-conditioned null space bases, resulting in a smaller update set. The Newton steps are computed in the null space by solving sparse, symmetric positive definite systems with sparse Cholesky factorizations. By exploiting the constant structure of the null space system matrices, a single symbolic factorization in the Cholesky decomposition is reused across iterations, reducing the computational cost of the linear solves. The algorithms and analyses are validated using medium to large-scale water network models.
    Comment: 15 pages, 9 figures. Preprint extension of Abraham and Stoianov, 2015 (https://dx.doi.org/10.1061/(ASCE)HY.1943-7900.0001089), September 2015. Includes extended exposition, additional case studies, and new simulations and analysis.
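
    The core step described above (a reduced Newton system solved by Cholesky in the null space of the continuity constraints) can be sketched generically as follows. This is a dense, illustrative toy under the usual content-model view of hydraulic analysis (minimising a convex, separable pipe-energy of the flows q subject to node continuity A q = d); it is not the authors' implementation, which uses sparse well-conditioned null-space bases, partial headloss updates and a reused symbolic Cholesky factorization.

        # Sketch: null-space Newton iteration for min sum_i E_i(q_i) s.t. A q = d,
        # with E_i convex so the reduced Hessian Z^T D Z is symmetric positive definite.
        import numpy as np
        from scipy.linalg import null_space, cho_factor, cho_solve

        def null_space_newton(A, d, grad, hess_diag, iters=20):
            q0 = np.linalg.lstsq(A, d, rcond=None)[0]   # particular solution of A q = d
            Z = null_space(A)                           # dense basis; the paper builds sparse, well-conditioned ones
            v = np.zeros(Z.shape[1])
            for _ in range(iters):
                q = q0 + Z @ v
                D = np.diag(hess_diag(q))               # diagonal Hessian of the separable energy
                dv = cho_solve(cho_factor(Z.T @ D @ Z), -(Z.T @ grad(q)))
                v = v + dv
            return q0 + Z @ v

        # Toy "network": quadratic headloss energy 0.5 * sum(r_i q_i^2) on three pipes,
        # one continuity constraint q1 + q2 - q3 = 1.
        r = np.array([1.0, 2.0, 3.0])
        A = np.array([[1.0, 1.0, -1.0]]); d = np.array([1.0])
        q = null_space_newton(A, d, grad=lambda q: r * q, hess_diag=lambda q: r)
        print(np.round(q, 4))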

    Nonlinear Basis Pursuit

    In compressive sensing, the basis pursuit algorithm aims to find the sparsest solution to an underdetermined system of linear equations. In this paper, we generalize basis pursuit to finding the sparsest solution to higher-order nonlinear systems of equations, called nonlinear basis pursuit. In contrast to existing nonlinear compressive sensing methods, the new algorithm that solves the nonlinear basis pursuit problem is convex and not greedy. The novel algorithm enables the compressive sensing approach to be used for a broader range of applications where there are nonlinear relationships between the measurements and the unknowns.
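
    One way to make the "convex and not greedy" idea concrete for quadratic equations x^T Q_i x + c_i^T x = b_i (an illustrative lift-and-relax sketch, not necessarily the paper's exact formulation): lift X as a surrogate for x x^T inside a positive semidefinite block, impose the equations linearly in (x, X), and minimise the l1 norm of x together with a trace term. Names and toy data are hypothetical.

        # Sketch: convex relaxation for a sparse solution of quadratic equations
        # x^T Q_i x + c_i^T x = b_i, via a PSD lifting Z ~ [[1, x^T], [x, x x^T]].
        import cvxpy as cp
        import numpy as np

        def sparse_quadratic_relaxation(Qs, cs, b):
            n = Qs[0].shape[0]
            Z = cp.Variable((n + 1, n + 1), PSD=True)
            x, X = Z[0, 1:], Z[1:, 1:]
            cons = [Z[0, 0] == 1]
            cons += [cp.trace(Q @ X) + c @ x == bi for Q, c, bi in zip(Qs, cs, b)]
            cp.Problem(cp.Minimize(cp.norm1(x) + cp.trace(X)), cons).solve()
            return x.value

        # Toy: two quadratic equations in three unknowns built from a 1-sparse x.
        rng = np.random.default_rng(0)
        Qs = [(Q + Q.T) / 2 for Q in (rng.standard_normal((3, 3)) for _ in range(2))]
        cs = [rng.standard_normal(3) for _ in range(2)]
        x_true = np.array([1.0, 0.0, 0.0])
        b = [x_true @ Q @ x_true + c @ x_true for Q, c in zip(Qs, cs)]
        print(np.round(sparse_quadratic_relaxation(Qs, cs, b), 3))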