
    Efficient implementation of truncated reweighting low-rank matrix approximation

    The weighted nuclear norm minimization and truncated nuclear norm minimization are two well-known low-rank constraints used in visual applications. In this paper, by integrating their advantages into a unified formulation, we derive a better weighting strategy, namely truncated reweighting norm minimization (TRNM), which provides a better approximation to the target rank for specific tasks. Albeit nonconvex and truncated, we prove that TRNM is equivalent to certain weighted quadratic programming problems, whose global optimum can be obtained by the newly presented reweighting singular value thresholding operator. More importantly, we design a computationally efficient optimization algorithm, namely momentum update and rank propagation (MURP), for general TRNM-regularized problems. The advantages of MURP are, first, reducing the number of iterations through a nonmonotonic search and, second, mitigating the computational cost by shrinking the size of the target matrix. Furthermore, the descent property and convergence of MURP are proven. Finally, two practical models, i.e., the Matrix Completion Problem via TRNM (MCTRNM) and the Space Clustering Model via TRNM (SCTRNM), are presented for visual applications. Extensive experimental results show that our methods achieve better performance, both qualitatively and quantitatively, than several state-of-the-art algorithms.
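    The abstract does not spell out the reweighting singular value thresholding operator, but the common convention behind truncated reweighted schemes is to leave the r largest singular values untouched and shrink the tail by per-value weights. The numpy sketch below illustrates that idea; the function name, the weights vector, and the truncation rank r are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def truncated_reweighted_svt(Y, weights, r):
        # SVD of the input matrix
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_new = s.copy()
        # keep the r largest singular values; shrink each tail value
        # by its own weight (larger weight = stronger shrinkage)
        s_new[r:] = np.maximum(s[r:] - weights, 0.0)
        # reassemble the thresholded matrix
        return (U * s_new) @ Vt

    # example: rank-2 target, heavier shrinkage on smaller values
    Y = np.random.randn(50, 40)
    k = min(Y.shape)
    X = truncated_reweighted_svt(Y, weights=np.linspace(0.5, 2.0, k - 2), r=2)

    Inside an algorithm such as MURP, an operator of this kind would presumably serve as the per-iteration proximal step, with the weights updated from the current iterate.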

    Krylov Methods for Low-Rank Regularization

    This paper introduces new solvers for the computation of low-rank approximate solutions to large-scale linear problems, with a particular focus on the regularization of linear inverse problems. Although Krylov methods incorporating explicit projections onto low-rank subspaces are already used for well-posed systems that arise from discretizing stochastic or time-dependent PDEs, we are mainly concerned with algorithms that solve the so-called nuclear norm regularized problem, where a suitable nuclear norm penalization of the solution is imposed alongside a fit-to-data term expressed in the 2-norm: this has the effect of implicitly enforcing low-rank solutions. By adopting an iteratively reweighted norm approach, the nuclear norm regularized problem is reformulated as a sequence of quadratic problems, which can then be efficiently solved using Krylov methods, giving rise to an inner-outer iteration scheme. Our approach differs from the other solvers available in the literature in that: (a) Kronecker product properties are exploited to define the reweighted 2-norm penalization terms; (b) efficient preconditioned Krylov methods replace gradient (projection) methods; (c) the regularization parameter can be efficiently and adaptively set along the iterations. Furthermore, we reformulate within the framework of flexible Krylov methods both the new inner-outer methods for nuclear norm regularization and some of the existing Krylov methods incorporating low-rank projections. This results in an even more computationally efficient (but heuristic) strategy that does not rely on an inner-outer iteration scheme. Numerical experiments show that our new solvers are competitive with other state-of-the-art solvers for low-rank problems, and deliver reconstructions of higher quality than classical Krylov methods.
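    As a rough illustration of the inner-outer idea (not of the paper's Kronecker-structured, preconditioned solvers), the sketch below majorizes the nuclear norm by the quadratic surrogate tr(X W X^T) with W = (X_k^T X_k + eps I)^(-1/2), and solves each resulting quadratic problem with conjugate gradients as a stand-in Krylov method. All names are assumptions, and the regularization parameter lam is held fixed here, whereas the paper adapts it along the iterations.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def irn_nuclear(A, b, shape, lam=1e-2, outer=10, inner=100, eps=1e-6):
        # solve min ||A x - b||_2^2 + lam ||X||_*  (X = x reshaped to `shape`)
        # by an inner-outer iteratively reweighted norm scheme
        m, n = shape
        x = np.zeros(m * n)
        for _ in range(outer):
            X = x.reshape(m, n)
            # weight matrix from the current iterate: (X^T X + eps I)^(-1/2)
            w, V = np.linalg.eigh(X.T @ X + eps * np.eye(n))
            W = V @ np.diag(w ** -0.5) @ V.T

            # normal equations of the quadratic surrogate:
            # (A^T A + lam (I kron W)) x = A^T b   (row-major vectorization)
            def matvec(v, W=W):
                return A.T @ (A @ v) + lam * (v.reshape(m, n) @ W).ravel()

            H = LinearOperator((m * n, m * n), matvec=matvec)
            x, _ = cg(H, A.T @ b, x0=x, maxiter=inner)  # inner Krylov solve
        return x

    Since W is symmetric positive definite, the surrogate operator stays symmetric positive definite and CG is applicable; the paper's preconditioned and flexible variants would replace this plain inner solve.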

    Truncated Nuclear Norm Minimization for Image Restoration Based on Iterative Support Detection

    Recovering a large matrix from limited measurements is a challenging task arising in many real applications, such as image inpainting, compressive sensing, and medical imaging, and such problems are mostly formulated as low-rank matrix approximation problems. Because the rank operator is non-convex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation, and the low-rank matrix recovery problem is solved by minimizing the nuclear norm regularized problem. However, a major limitation of nuclear norm minimization is that all the singular values are minimized simultaneously, so the rank may not be well approximated (Hu et al., 2012). Accordingly, in this paper we propose a new multi-stage algorithm that combines Truncated Nuclear Norm Regularization (TNNR), proposed by Hu et al. (2012), with Iterative Support Detection (ISD), proposed by Wang and Yin (2010), to overcome this limitation. Beyond the matrix completion problems considered by Hu et al. (2012), the proposed method also extends to general low-rank matrix recovery problems. Extensive experiments validate the superiority of the new algorithms over other state-of-the-art methods.
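    A minimal sketch of the flavor of such a scheme, assuming the standard TNNR two-step iteration of Hu et al. (2012) with the support (here, the dominant rank-r subspaces) re-detected at every pass; the paper's multi-stage ISD algorithm and its extension beyond completion are more elaborate, and all names below are illustrative.

    import numpy as np

    def tnnr_isd_completion(M, mask, r, tau=1.0, iters=200):
        # complete M on the unobserved entries, penalizing only the
        # singular values beyond the detected rank r
        X = np.where(mask, M, 0.0)
        for _ in range(iters):
            # ISD-like step: re-detect the dominant rank-r subspaces
            U, _, Vt = np.linalg.svd(X, full_matrices=False)
            A, B = U[:, :r], Vt[:r, :].T
            # proximal gradient step on ||X||_* - tr(A^T X B):
            # boost the detected directions, then shrink all singular
            # values by tau, leaving the top r roughly untouched
            U, s, Vt = np.linalg.svd(X + tau * (A @ B.T), full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt
            # keep the observed entries fixed
            X = np.where(mask, M, X)
        return X
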