
    Regularization-free estimation in trace regression with symmetric positive semidefinite matrices

    Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing. Estimation of the underlying matrix via regularization-based approaches promoting low-rankedness, notably nuclear norm regularization, has enjoyed great popularity. In the present paper, we argue that such regularization may no longer be necessary if the underlying matrix is symmetric positive semidefinite (spd) and the design satisfies certain conditions. In this situation, simple least squares estimation subject to an spd constraint may perform as well as regularization-based approaches with a proper choice of the regularization parameter, which entails knowledge of the noise level and/or tuning. By contrast, constrained least squares estimation comes without any tuning parameter and may hence be preferred due to its simplicity.
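    The sketch below illustrates the spd-constrained least squares estimator described in this abstract, under an assumed trace-regression model y_i = <X_i, Theta*> + noise with a random Gaussian design; the use of cvxpy, the problem sizes, and all variable names are illustrative choices, not code or settings from the paper.

```python
import numpy as np
import cvxpy as cp

# Simulate a small trace-regression instance y_i = <X_i, Theta*> + noise
rng = np.random.default_rng(0)
d, n, r = 5, 100, 2
U = rng.standard_normal((d, r))
Theta_star = U @ U.T                                  # ground-truth spd matrix of rank r
X = rng.standard_normal((n, d, d))                    # design matrices
y = np.array([np.sum(X[i] * Theta_star) for i in range(n)])
y += 0.1 * rng.standard_normal(n)                     # additive noise

# Constrained least squares: minimize sum_i (y_i - <X_i, Theta>)^2 subject to Theta psd.
# No regularization term, hence no tuning parameter.
Theta = cp.Variable((d, d), PSD=True)
residuals = cp.hstack([y[i] - cp.trace(X[i].T @ Theta) for i in range(n)])
problem = cp.Problem(cp.Minimize(cp.sum_squares(residuals)))
problem.solve()

print("Frobenius estimation error:", np.linalg.norm(Theta.value - Theta_star, "fro"))
```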

    Projection Methods in Sparse and Low Rank Feasibility

    In this thesis, we give an analysis of fixed point algorithms involving projections onto closed, not necessarily convex, subsets of finite-dimensional vector spaces. These methods are used in applications such as imaging science, signal processing, and inverse problems. The tools used in the analysis place this work at the intersection of optimization and variational analysis. Based on the underlying optimization problems, this work is divided into two main parts. The first one is the compressed sensing problem. Because the problem is NP-hard, we relax it to a feasibility problem with two sets, namely, the set of vectors with at most s nonzero entries and, for a linear mapping M, the affine subspace B of vectors satisfying Mx = p for a given p. This problem will be referred to as the sparse-affine-feasibility problem. For the Douglas-Rachford algorithm, we prove linear convergence to a fixed point in the case of a feasibility problem of two affine subspaces, which allows us to conclude local linear convergence of the Douglas-Rachford algorithm for the sparse affine feasibility problem. Proceeding, we give sufficient conditions for the alternating projections algorithm to converge to the intersection of an affine subspace with lower level sets of point-symmetric, lower semicontinuous, subadditive functions. This implies convergence of alternating projections to a solution of the sparse affine feasibility problem. Together with a result of local linear convergence of the alternating projections algorithm, this allows us to deduce, for any initial point, linear convergence after finitely many steps of the sequence generated by the alternating projections algorithm.

    The second part of this dissertation deals with minimizing the rank of matrices satisfying a set of linear equations. This problem will be called the rank-constrained-affine-feasibility problem. The motivation for analyzing the rank minimization problem comes from the physical application of phase retrieval, which can be reformulated as a rank minimization problem. We show that, locally, the method of alternating projections must converge at a linear rate to a solution of the rank constrained affine feasibility problem.
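    As a concrete illustration of the sparse-affine-feasibility problem mentioned in this abstract, here is a minimal sketch of the alternating projections iteration between the set of s-sparse vectors and the affine subspace B = {x : Mx = p}; the helper names, the pseudoinverse-based affine projection, and the toy data are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def proj_sparse(x, s):
    """Project onto the set of vectors with at most s nonzero entries:
    keep the s largest-magnitude components and zero out the rest."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def proj_affine(x, M, p, M_pinv):
    """Project onto the affine subspace B = {x : Mx = p}
    (assumes M has full row rank, so M_pinv is its pseudoinverse)."""
    return x - M_pinv @ (M @ x - p)

def alternating_projections(M, p, s, x0, iters=500):
    """Alternate projections between the sparsity set and B,
    returning the last point projected onto B."""
    M_pinv = np.linalg.pinv(M)
    x = x0
    for _ in range(iters):
        x = proj_affine(proj_sparse(x, s), M, p, M_pinv)
    return x

# Small example: look for a 3-sparse point in B given 10 random measurements
rng = np.random.default_rng(1)
n, m, s = 30, 10, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
M = rng.standard_normal((m, n))
p = M @ x_true
x_hat = alternating_projections(M, p, s, x0=M.T @ p)
print("residual ||Mx - p||:", np.linalg.norm(M @ x_hat - p))
print("entries above 1e-6:", np.sum(np.abs(x_hat) > 1e-6))
```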

    Newton-type Alternating Minimization Algorithm for Convex Optimization

    We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving structured nonsmooth convex optimization problems in which the sum of two functions is to be minimized, one being strongly convex and the other composed with a linear mapping. The proposed algorithm is a line-search method over a continuous, real-valued, exact penalty function for the corresponding dual problem, which is computed by evaluating the augmented Lagrangian at the primal points obtained by alternating minimizations. As a consequence, NAMA relies on exactly the same computations as the classical alternating minimization algorithm (AMA), also known as the dual proximal gradient method. Under standard assumptions the proposed algorithm possesses strong convergence properties, while under mild additional assumptions the asymptotic convergence is superlinear, provided that the search directions are chosen according to quasi-Newton formulas. Due to its simplicity, the proposed method is well suited for embedded applications and large-scale problems. Experiments show that using limited-memory directions in NAMA greatly improves the convergence speed over AMA and its accelerated variant.
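    For reference, below is a minimal sketch of the classical AMA (dual proximal gradient) iteration whose computations NAMA reuses, applied to an illustrative splitting minimize 0.5*||x - b||^2 + lam*||z||_1 subject to Ax = z; the specific objective, data, and step size are our own assumptions, and NAMA's quasi-Newton line search over the dual penalty function is not shown.

```python
import numpy as np

def ama(A, b, lam, rho=1.0, iters=300):
    """Classical AMA (dual proximal gradient) for
        minimize 0.5*||x - b||^2 + lam*||z||_1  subject to  Ax = z,
    where the quadratic term is strongly convex and the l1 term is
    composed with the linear mapping A (the structure assumed by NAMA)."""
    m, n = A.shape
    z = np.zeros(m)
    y = np.zeros(m)                       # dual variable
    for _ in range(iters):
        # x-update: minimize the strongly convex term plus <y, Ax>
        x = b - A.T @ y
        # z-update: proximal step on the l1 term (soft-thresholding)
        v = A @ x + y / rho
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update: gradient step on the dual problem
        y = y + rho * (A @ x - z)
    return x, z

# Tiny usage example with a random linear mapping
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(10)
x, z = ama(A, b, lam=0.5)
print("constraint gap ||Ax - z||:", np.linalg.norm(A @ x - z))
```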