
    Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization

    In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an SPQN regularized vector minimization problem are equivalent to those of an SPQN regularized matrix minimization reformulation. We also show that any local minimizer of the SPQN regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for the nonzero singular values of the first-order stationary points, and hence also of the local minimizers, of the SPQN regularized matrix minimization problems. Iterative reweighted singular value minimization (IRSVM) methods are then proposed to solve these problems, whose subproblems are shown to have a closed-form solution. In contrast to the analogous methods for SPQN regularized vector minimization problems, the convergence analysis of these methods is significantly more challenging. We develop a novel approach to establishing their convergence, which makes use of the expression of a specific solution of their subproblems and avoids the intricate issue of finding an explicit expression for the Clarke subdifferential of the subproblem objective. In particular, we show that any accumulation point of the sequence generated by the IRSVM methods is a first-order stationary point of the problems. Our computational results demonstrate that the IRSVM methods generally outperform some recently developed state-of-the-art methods in terms of solution quality and/or speed.
    Comment: This paper has been withdrawn by the author due to major revision and correction
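    The core of the method is the reweighted subproblem: each outer iteration solves a weighted singular value minimization that admits a closed-form solution by shrinking singular values. Below is a minimal proximal-gradient sketch of this idea, assuming a smooth least-squares data-fit term with user-supplied operators A_op/At_op, a smoothing parameter eps, and a fixed step size -- all illustrative choices, not the authors' algorithm.

    ```python
    import numpy as np

    def weighted_svt(G, w):
        # Closed-form solution of  min_X 0.5*||X - G||_F^2 + sum_i w_i * sigma_i(X),
        # valid when the weights are nondecreasing (w_1 <= w_2 <= ...).
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

    def irsvm_sketch(A_op, At_op, b, shape, lam=0.1, p=0.5, eps=1e-2, step=1.0, iters=200):
        """Sketch of iterative reweighted singular value minimization for
        min_X 0.5*||A(X) - b||^2 + lam * sum_i sigma_i(X)^p,  with 0 < p < 1."""
        X = np.zeros(shape)
        for _ in range(iters):
            # Gradient step on the smooth data-fit term.
            G = X - step * At_op(A_op(X) - b)
            # Reweighting: w_i = step*lam*p*(sigma_i(X) + eps)^(p-1); the weights
            # are nondecreasing because sigma is nonincreasing and p < 1.
            sigma = np.linalg.svd(X, compute_uv=False)
            w = step * lam * p * (sigma + eps) ** (p - 1.0)
            X = weighted_svt(G, w)
        return X
    ```

    For matrix completion, for instance, A_op would be the sampling mask and At_op its adjoint (the same mask).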

    Risk hull method and regularization by projections of ill-posed inverse problems

    We study a standard method of regularization by projections for the linear inverse problem $Y = Af + \epsilon$, where $\epsilon$ is a white Gaussian noise and $A$ is a known compact operator whose singular values converge to zero with polynomial decay. The unknown function $f$ is recovered by a projection method based on the singular value decomposition of $A$. The bandwidth choice of this projection regularization is governed by a data-driven procedure based on the principle of risk hull minimization. We provide nonasymptotic upper bounds for the mean square risk of this method and show, in particular, that in numerical simulations this approach may substantially improve on the classical method of unbiased risk estimation.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000542 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
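    In a discretized form the estimator is a spectral cutoff in the SVD basis of $A$, with the bandwidth chosen by a penalized unbiased-risk criterion. The toy sketch below illustrates this; the decay $b_k = k^{-1}$, the noise level, and the simple (1 + pen) variance inflation standing in for the exact hull penalty are all assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Discretized problem in the SVD basis of A: y_k = b_k * f_k + noise,
    # with polynomially decaying singular values b_k = k^{-1} (assumption).
    n = 200
    k = np.arange(1, n + 1)
    b = k ** -1.0                    # singular values of A
    f = 5.0 * k ** -1.5              # unknown coefficients (toy signal)
    sigma = 0.05
    y = b * f + sigma * rng.standard_normal(n)

    # Spectral-cutoff (projection) estimator with bandwidth N, chosen by a
    # penalized unbiased-risk criterion; (1 + pen) stands in for the hull penalty.
    pen = 0.5
    def criterion(N):
        return (-np.sum((y[:N] / b[:N]) ** 2)
                + 2.0 * (1.0 + pen) * sigma ** 2 * np.sum(b[:N] ** -2.0))

    N_star = min(range(1, n + 1), key=criterion)
    fhat = np.where(k <= N_star, y / b, 0.0)
    print(N_star, np.sum((fhat - f) ** 2))  # chosen bandwidth and squared error
    ```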

    Robust Principal Component Pursuit via Inexact Alternating Minimization on Matrix Manifolds

    Robust principal component pursuit (RPCP) refers to the decomposition of a data matrix into a low-rank component and a sparse component. In this work, instead of invoking a convex-relaxation model based on the nuclear norm and the $\ell_1$-norm, as is typically done in this context, RPCP is solved by considering a least-squares problem subject to rank and cardinality constraints. An inexact alternating minimization scheme, with guaranteed global convergence, is employed to solve the resulting constrained minimization problem. In particular, the low-rank matrix subproblem is solved inexactly by a tailored Riemannian optimization technique, which favorably avoids singular value decompositions in full dimension. For the overall method, a corresponding q-linear convergence theory is established. The numerical experiments show that the newly proposed method compares competitively with a popular convex-relaxation based approach.
    Peer Reviewed
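    The alternating scheme splits into a low-rank step and a sparse step, each a projection onto its constraint set. A compact sketch of that structure follows; a truncated SVD stands in for the paper's Riemannian (SVD-free) low-rank step, so this shows the idea, not the tailored solver.

    ```python
    import numpy as np

    def rpcp_altmin(M, r, k, iters=50):
        """Alternating-minimization sketch for
        min ||M - L - S||_F^2  s.t.  rank(L) <= r, ||S||_0 <= k."""
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(iters):
            # L-step: best rank-r approximation of M - S (Eckart-Young);
            # the paper replaces this full SVD with a Riemannian solver.
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
            # S-step: keep the k largest-magnitude entries of the residual.
            R = M - L
            S = np.zeros_like(R)
            idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-k:], R.shape)
            S[idx] = R[idx]
        return L, S
    ```

    On a synthetic M built as low-rank plus sparse plus noise, a call such as rpcp_altmin(M, r=5, k=int(0.05 * M.size)) separates the two components when r and k are chosen at or above the true values.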

    Scalable Low-Rank Tensor Learning for Spatiotemporal Traffic Data Imputation

    The missing value problem in spatiotemporal traffic data has long been a challenging topic, in particular for large-scale and high-dimensional data with complex missing mechanisms and diverse degrees of missingness. Recent studies based on the tensor nuclear norm have demonstrated the superiority of tensor learning in imputation tasks by effectively characterizing the complex correlations/dependencies in spatiotemporal data. However, despite the promising results, these approaches do not scale well to large data tensors. In this paper, we focus on addressing the missing data imputation problem for large-scale spatiotemporal traffic data. To achieve both high accuracy and efficiency, we develop a scalable tensor learning model -- Low-Tubal-Rank Smoothing Tensor Completion (LSTC-Tubal) -- based on the existing framework of Low-Rank Tensor Completion, which is well-suited for spatiotemporal traffic data characterized by the multidimensional structure of location $\times$ time of day $\times$ day. In particular, the proposed LSTC-Tubal model involves a scalable tensor nuclear norm minimization scheme that integrates a linear unitary transformation. Therefore, tensor nuclear norm minimization can be solved by singular value thresholding on the transformed matrix of each day, while the day-to-day correlation is effectively preserved by the unitary transform matrix. We compare LSTC-Tubal with state-of-the-art baseline models and find that LSTC-Tubal achieves competitive accuracy at a significantly lower computational cost. In addition, LSTC-Tubal will also benefit other tasks in modeling large-scale spatiotemporal traffic data, such as network-level traffic forecasting.
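    The key computational step is singular value thresholding applied slice-by-slice after a unitary transform along the day mode. Below is a sketch of that single step (not the full completion algorithm); using the normalized DFT as the unitary matrix Phi is an illustrative stand-in for the data-driven transform.

    ```python
    import numpy as np

    def transformed_svt(T, Phi, tau):
        """Soft-threshold the singular values of each frontal slice after a
        unitary transform Phi along the third (day) mode, then transform back."""
        m, n, d = T.shape
        # Mode-3 transform: That[:, :, k] = sum_j Phi[k, j] * T[:, :, j].
        That = np.einsum('kj,mnj->mnk', Phi, T)
        out = np.empty_like(That)
        for k in range(d):
            U, s, Vt = np.linalg.svd(That[:, :, k], full_matrices=False)
            out[:, :, k] = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Inverse transform; Phi unitary => its inverse is the conjugate transpose.
        return np.einsum('kj,mnk->mnj', Phi.conj(), out)

    # Example with the (unitary) normalized DFT as the transform:
    d = 7
    Phi = np.fft.fft(np.eye(d)) / np.sqrt(d)
    T = np.random.default_rng(0).standard_normal((30, 24, d))  # location x time x day
    T_denoised = transformed_svt(T, Phi, tau=1.0).real  # real up to roundoff
    ```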

    Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

    The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization.
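    A small numerical illustration of the main result, using cvxpy as an off-the-shelf convex solver (an assumption of this sketch; the paper analyzes the optimization problem abstractly): measurements are drawn from a Gaussian ensemble, one of the random ensembles for which the restricted isometry property holds with overwhelming probability, and the matrix is recovered by nuclear norm minimization.

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n1, n2, r, m = 15, 15, 2, 180

    # Rank-2 ground truth and a Gaussian measurement ensemble.
    X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
    A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
    b = np.tensordot(A, X0, axes=([1, 2], [0, 1]))  # b_i = <A_i, X0>

    # Nuclear norm minimization over the affine space {X : <A_i, X> = b_i}.
    X = cp.Variable((n1, n2))
    constraints = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(m)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    print(np.linalg.norm(X.value - X0) / np.linalg.norm(X0))  # ~0 => exact recovery
    ```

    With m = 180 measurements, well above the roughly r(n1 + n2 - r) = 56 degrees of freedom of the rank-2 target, recovery is essentially exact.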

    A Simplified Approach to Recovery Conditions for Low Rank Matrices

    Recovering sparse vectors and low-rank matrices from noisy linear measurements has been the focus of much recent research. Various reconstruction algorithms have been studied, including $\ell_1$ and nuclear norm minimization as well as $\ell_p$ minimization with $p < 1$. These algorithms are known to succeed if certain conditions on the measurement map are satisfied. Proofs of robust recovery for matrices have so far been much more involved than in the vector case. In this paper, we show how several robust classes of recovery conditions can be extended from vectors to matrices in a simple and transparent way, leading to the best known restricted isometry and nullspace conditions for matrix recovery. Our results rely on the ability to "vectorize" matrices through the use of a key singular value inequality.
    Comment: 6 pages. This is a modified version of a paper submitted to ISIT 2011; Proc. Intl. Symp. Info. Theory (ISIT), Aug 2011
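    The flavor of the vectorization step can be checked numerically. The snippet below verifies Mirsky's singular value inequality, ||sigma(X) - sigma(Y)||_2 <= ||X - Y||_F, a standard inequality of the same kind as the paper's key lemma (shown here purely as an illustration): it maps a matrix perturbation to a perturbation of the singular value vector.

    ```python
    import numpy as np

    # Check Mirsky's inequality on random matrix pairs.
    rng = np.random.default_rng(1)
    for _ in range(1000):
        X = rng.standard_normal((8, 5))
        Y = rng.standard_normal((8, 5))
        lhs = np.linalg.norm(np.linalg.svd(X, compute_uv=False)
                             - np.linalg.svd(Y, compute_uv=False))
        rhs = np.linalg.norm(X - Y, 'fro')
        assert lhs <= rhs + 1e-12
    print("Mirsky's inequality held on 1000 random pairs")
    ```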