
    Subspace System Identification via Weighted Nuclear Norm Optimization

    We present a subspace system identification method based on weighted nuclear norm approximation. The weight matrices used in the nuclear norm minimization are the same weights as those used in standard subspace identification methods. We show that including the weights improves performance in terms of fit on validation data. As a second benefit, the weights reduce the size of the optimization problems that need to be solved. Experimental results from randomly generated examples as well as from the DaISy benchmark collection are reported. The key to an efficient implementation is the use of the alternating direction method of multipliers (ADMM) to solve the optimization problem.
    Comment: Submitted to IEEE Conference on Decision and Control
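    The paper's weighted formulation is beyond a short example, but the core ADMM subproblem for any nuclear norm term reduces to singular value thresholding. Below is a minimal NumPy sketch of that proximal step; the weights and the full ADMM loop are omitted, and the matrix sizes and threshold value are illustrative.

```python
# Singular value thresholding (SVT): the proximal operator of the
# nuclear norm, and the core subproblem in ADMM-based solvers.
# A generic sketch; the paper's weighted variant is not reproduced.
import numpy as np

def svt(M, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: denoising a noisy rank-5 matrix pulls the rank back down.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5
denoised = svt(A + 0.1 * rng.standard_normal(A.shape), tau=1.0)
print(np.linalg.matrix_rank(denoised, tol=1e-6))
```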

    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations. Nuclear norm regularization is the prevailing convexifying technique for dealing with these types of problems. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems there exist low-rank inducing norms that succeed in recovering the partially unknown matrix, while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, these norms have cheaply computable proximal mappings, which makes it possible to also solve problems of large size using first-order methods.
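    For concreteness, the nuclear norm baseline that these norms generalize can be stated as a small convex program for matrix completion. The sketch below uses cvxpy as an assumed modeling tool; the paper's more general low-rank inducing norms admit analogous semidefinite formulations that are not reproduced here.

```python
# Matrix completion with the nuclear norm heuristic: the special case
# of the paper's low-rank inducing norms. Sizes and the sampling rate
# are illustrative assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
mask = (rng.random(M.shape) < 0.6).astype(float)                 # observed entries

X = cp.Variable(M.shape)
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),          # convex surrogate for rank
                  [cp.multiply(mask, X) == mask * M])  # agree where observed
prob.solve()
print("recovered rank:", np.linalg.matrix_rank(X.value, tol=1e-6))
```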

    New Light on Infrared Problems: Sectors, Statistics, Spectrum and All That

    Within the general setting of algebraic quantum field theory, a new approach to the analysis of the physical state space of a theory is presented; it covers theories with long-range forces, such as quantum electrodynamics. Making use of the notion of a charge class, which generalizes the concept of a superselection sector, infrared problems are avoided. In fact, on this basis one can determine and classify in a systematic manner the proper charge content of a theory, the statistics of the corresponding states and their spectral properties. A key ingredient in this approach is the fact that in real experiments the arrow of time gives rise to a Lorentz-invariant infrared cutoff of a purely geometric nature.
    Comment: 9 pages, 5 figures. Talk given at the XVIIth International Congress on Mathematical Physics, Aalborg, 6-11 August 2012; to appear in the proceedings. Version 2: unchanged, but layout problems solved.

    Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery

    PCA is one of the most widely used dimension reduction techniques. A related, easier problem is "subspace learning" or "subspace estimation". Given relatively clean data, both are easily solved via the singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of robust subspace learning and tracking. In particular, solutions for three problems are discussed in detail: RPCA via sparse + low-rank matrix decomposition (S+LR), RST via S+LR, and robust subspace recovery (RSR). RSR assumes that an entire data vector is either an outlier or an inlier. The S+LR formulation instead assumes that outliers occur on only a few data vector indices and hence are well modeled as sparse corruptions.
    Comment: To appear, IEEE Signal Processing Magazine, July 2018
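    The S+LR decomposition at the heart of RPCA can be sketched with a basic augmented Lagrangian loop that alternates singular value thresholding (for the low-rank part) with entrywise soft thresholding (for the sparse part). The default weight, step size, and iteration count below are common illustrative choices, not the article's tuned algorithms.

```python
# RPCA via sparse + low-rank (S+LR): decompose M ~ L + S with L low
# rank and S sparse, using a basic augmented Lagrangian iteration.
import numpy as np

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=None, mu=None, iters=200):
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # standard default weight
    if mu is None:
        mu = 0.25 / np.abs(M).mean()        # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = soft(M - L + Y / mu, lam / mu)  # sparse update
        Y += mu * (M - L - S)               # dual ascent on M = L + S
    return L, S
```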

    Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

    The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization.
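    As a worked instance of this relaxation, the sketch below draws a random Gaussian measurement ensemble (one of the families for which the restricted isometry property holds with overwhelming probability) and recovers a low-rank matrix by nuclear norm minimization. cvxpy is an assumed modeling tool, and the dimensions are illustrative.

```python
# Affine rank minimization relaxed to nuclear norm minimization:
# minimize ||X||_* subject to <A_i, X> = b_i for Gaussian A_i.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, r, p = 15, 2, 180                         # size, true rank, #measurements
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, n, n))           # random measurement matrices
b = np.tensordot(A, X_true, axes=([1, 2], [0, 1]))  # b_i = <A_i, X_true>

X = cp.Variable((n, n))
cons = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(p)]
cp.Problem(cp.Minimize(cp.normNuc(X)), cons).solve()
err = np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true)
print("relative recovery error:", err)
```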

    The Noisy Power Method: A Meta Algorithm with Applications

    We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix, which we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems, including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad hoc convergence bounds and resolves a number of open problems in multiple applications, including streaming PCA and privacy-preserving singular vector computation.
    Comment: NIPS 2014
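    The iteration itself is short: multiply, inject noise, re-orthonormalize. The sketch below uses a symmetric matrix (so singular vectors coincide with eigenvectors) and Gaussian noise as an illustrative stand-in for the application-specific noise, e.g. the perturbations added for privacy.

```python
# Noisy power method: block power iteration with arbitrary noise G_t
# injected after each matrix multiply, then a QR re-orthonormalization.
import numpy as np

def noisy_power_method(A, k, iters=100, noise_scale=1e-3, rng=None):
    """Approximate the top-k invariant subspace of a symmetric matrix A."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random orthonormal start
    for _ in range(iters):
        G = noise_scale * rng.standard_normal((n, k))  # per-iteration noise
        X, _ = np.linalg.qr(A @ X + G)                 # noisy multiply + QR
    return X

# Example: top-2 subspace of a random symmetric matrix.
rng = np.random.default_rng(3)
B = rng.standard_normal((100, 100))
X = noisy_power_method((B + B.T) / 2, k=2, rng=rng)
```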