
    On the monotone and primal-dual active set schemes for $\ell^p$-type problems, $p \in (0,1]$

    Nonsmooth nonconvex optimization problems involving the $\ell^p$ quasi-norm, $p \in (0,1]$, of a linear map are considered. A monotonically convergent scheme for a regularized version of the original problem is developed, and necessary optimality conditions for the original problem are given in the form of a complementarity system amenable to computation. An algorithm for solving these necessary optimality conditions is then proposed; it is based on a combination of the monotone scheme and a primal-dual active set strategy. The performance of the two algorithms is studied by means of a series of numerical tests in different settings, including optimal control problems, fracture mechanics, and microscopy image reconstruction.
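
    A minimal sketch of one way such a monotone scheme can look, assuming the $\varepsilon$-regularized objective $\|Ax-b\|^2 + \alpha \sum_i (x_i^2 + \varepsilon)^{p/2}$ (function and parameter names below are illustrative, not the paper's): each step majorizes the concave penalty at the current iterate and solves a weighted least-squares problem, so the regularized objective decreases monotonically.

        import numpy as np

        def lp_monotone_scheme(A, b, alpha, p=0.5, eps=1e-6, iters=100):
            # Minimize ||Ax - b||^2 + alpha * sum((x_i^2 + eps)^(p/2)).
            # Each step majorizes the concave penalty at the current x and
            # solves the resulting weighted least-squares problem, so the
            # regularized objective decreases monotonically.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(iters):
                w = 0.5 * alpha * p * (x**2 + eps) ** (p / 2 - 1)  # majorizer weights
                x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
            return x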

    Nonlinear Basis Pursuit

    In compressive sensing, the basis pursuit algorithm aims to find the sparsest solution to an underdetermined linear system of equations. In this paper, we generalize basis pursuit to finding the sparsest solution to higher-order nonlinear systems of equations, called nonlinear basis pursuit. In contrast to existing nonlinear compressive sensing methods, the proposed formulation is convex rather than greedy. The novel algorithm enables the compressive sensing approach to be used for a broader range of applications, where there are nonlinear relationships between the measurements and the unknowns.
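
    For quadratic systems, a standard route to such a convex program is to lift $z = [1; x]$ to $Z = zz^\top$, relax the rank-one constraint to $Z \succeq 0$, and minimize an $\ell_1$ surrogate for sparsity over the lifted matrix. The sketch below, which assumes each equation is supplied as a triple $(Q, q, c)$ with $x^\top Q x + q^\top x + c = 0$, illustrates this lifting idea and is not taken verbatim from the paper:

        import cvxpy as cp

        def nonlinear_basis_pursuit(quads, n):
            # Lift z = [1; x] to Z ~ z z^T; each quadratic equation
            # x^T Q x + q^T x + c = 0 becomes linear in the entries of Z.
            Z = cp.Variable((n + 1, n + 1), symmetric=True)
            constraints = [Z >> 0, Z[0, 0] == 1]
            for Q, q, c in quads:
                constraints.append(
                    cp.sum(cp.multiply(Q, Z[1:, 1:])) + q @ Z[1:, 0] + c == 0
                )
            # l1 surrogate for sparsity of the (ideally rank-one) lifted matrix
            cp.Problem(cp.Minimize(cp.sum(cp.abs(Z))), constraints).solve()
            return Z.value[1:, 0]  # read the x estimate off the first column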

    Nonlinear Compressive Particle Filtering

    Many systems for which compressive sensing is used today are dynamical. The common approach is to neglect the dynamics and treat the problem as a sequence of independent problems. This approach has two disadvantages. First, the temporal dependency in the state could be used to improve the accuracy of the state estimates. Second, an estimate of the state and its support could be used to reduce the computational load of the subsequent step. In the linear Gaussian setting, compressive sensing was recently combined with the Kalman filter to mitigate the above disadvantages. In the nonlinear dynamical case, compressive sensing cannot be used and, if the state dimension is high, the particle filter performs poorly. In this paper we combine one of the most recent developments in compressive sensing, nonlinear compressive sensing, with the particle filter. We show that the marriage of the two is essential and that neither the particle filter nor nonlinear compressive sensing alone gives a satisfying solution.
    Comment: Accepted to CDC 201
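
    A rough illustration of the combination (a hedged sketch, not the paper's exact algorithm): run a bootstrap particle filter and, after each measurement update, re-project the particles onto an estimated $k$-sparse support, so the dimension the filter effectively explores stays small. The dynamics f, measurement map h, and all parameter names below are assumptions for the example.

        import numpy as np

        def compressive_particle_filter(y_seq, f, h, n, k, N=500, sigma=0.1):
            # Bootstrap particle filter with a crude sparsification step:
            # particles are masked onto the k largest-magnitude entries of
            # the weighted mean (a stand-in for a nonlinear-CS support estimate).
            rng = np.random.default_rng(0)
            parts = rng.normal(size=(N, n))
            x_hat = np.zeros(n)
            for y in y_seq:
                # propagate through the nonlinear dynamics with process noise
                parts = np.array([f(p) for p in parts]) + sigma * rng.normal(size=(N, n))
                # weight particles by the nonlinear measurement model
                err = np.array([np.sum((y - h(p)) ** 2) for p in parts])
                w = np.exp(-0.5 * err / sigma**2) + 1e-300
                w /= w.sum()
                x_hat = w @ parts                             # weighted-mean estimate
                mask = np.zeros(n)
                mask[np.argsort(np.abs(x_hat))[-k:]] = 1.0    # crude support estimate
                parts = parts[rng.choice(N, size=N, p=w)] * mask  # resample, sparsify
            return x_hat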

    Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms

    This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria based on the notions of stationarity and coordinate-wise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method, while the remaining two are of coordinate descent type. The theoretical convergence of these methods and their relations to the derived optimality conditions are studied. The algorithms and results are illustrated by several numerical examples.
    Comment: submitted to SIAM Optimization
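
    The first of the three methods is compact enough to state in a few lines: take a gradient step, then project back onto the set of $s$-sparse vectors by keeping the $s$ largest-magnitude entries. A minimal sketch (the step size and iteration count are illustrative):

        import numpy as np

        def hard_threshold(x, s):
            # projection onto {x : ||x||_0 <= s}: keep the s largest-magnitude entries
            out = x.copy()
            out[np.argsort(np.abs(x))[:-s]] = 0.0
            return out

        def iht(grad_f, x0, s, step=1e-2, iters=500):
            # iterative hard thresholding: gradient projection for
            # min f(x) subject to ||x||_0 <= s
            x = hard_threshold(x0, s)
            for _ in range(iters):
                x = hard_threshold(x - step * grad_f(x), s)
            return x

        # e.g. for f(x) = 0.5 * ||Ax - b||^2:
        #   x_hat = iht(lambda x: A.T @ (A @ x - b), np.zeros(A.shape[1]), s)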

    Simultaneously Structured Models with Application to Sparse and Low-rank Matrices

    The topic of recovering a structured model from a small number of linear observations has been well studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and sums of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known and allow for recovery using an order-wise optimal number of measurements (e.g., the $\ell_1$ norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
    Comment: 38 pages, 9 figures
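
    The combined convex program at the center of the negative result can be written down directly: recover a simultaneously sparse and low-rank $X$ from measurements $\langle A_i, X \rangle = y_i$ by minimizing $\|X\|_1 + \lambda \|X\|_*$. A sketch using cvxpy (the measurement format and $\lambda$ are illustrative):

        import cvxpy as cp

        def combined_norm_recovery(A_list, y, shape, lam=1.0):
            # min ||X||_1 + lam * ||X||_*  s.t.  <A_i, X> = y_i.
            # The paper's point: order-wise, this combination recovers X from
            # no fewer measurements than the better of the two norms alone.
            X = cp.Variable(shape)
            constraints = [cp.sum(cp.multiply(A, X)) == yi for A, yi in zip(A_list, y)]
            objective = cp.Minimize(cp.sum(cp.abs(X)) + lam * cp.norm(X, "nuc"))
            cp.Problem(objective, constraints).solve()
            return X.value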