
    Concentration of the Frobenius norm of generalized matrix inverses

    Revised, condensed, and renamed version of the preprint "Beyond Moore-Penrose Part II: The Sparse Pseudoinverse". In many applications it is useful to replace the Moore-Penrose pseudoinverse (MPP) by a different generalized inverse with more favorable properties. We may want, for example, to have many zero entries, but without giving up too much of the stability of the MPP. One way to quantify stability is by how much the Frobenius norm of a generalized inverse exceeds that of the MPP. In this paper we derive finite-size concentration bounds for the Frobenius norm of $\ell^p$-minimal generalized inverses of i.i.d. Gaussian matrices, with $1 \leq p \leq 2$. For $p = 1$ we prove exponential concentration of the Frobenius norm of the sparse pseudoinverse; for $p = 2$, we get a similar concentration bound for the MPP. Our proof is based on the convex Gaussian min-max theorem, but unlike previous applications, which give asymptotic results, we derive finite-size bounds.
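    As a rough illustration of the object studied here, the following sketch (assuming the numpy and cvxpy packages; it is a toy baseline, not the paper's analysis) computes an $\ell^1$-minimal generalized inverse of a random Gaussian matrix and compares its Frobenius norm and sparsity against the MPP.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 30, 10
A = rng.standard_normal((n, d))          # tall iid Gaussian matrix

# l1-minimal generalized inverse: minimize the entrywise l1 norm
# over all G with G A = I (any such G satisfies A G A = A).
G = cp.Variable((d, n))
cp.Problem(cp.Minimize(cp.sum(cp.abs(G))), [G @ A == np.eye(d)]).solve()

mpp = np.linalg.pinv(A)                  # the l2-minimal choice is the MPP
print("Frobenius norm, sparse pinv:", np.linalg.norm(G.value, "fro"))
print("Frobenius norm, MPP:        ", np.linalg.norm(mpp, "fro"))
print("fraction of near-zero entries:", np.mean(np.abs(G.value) < 1e-6))
```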

    Beyond Moore-Penrose Part II: The Sparse Pseudoinverse

    This is the second part of a two-paper series on generalized inverses that minimize matrix norms. In Part II we focus on generalized inverses that are minimizers of entrywise $\ell^p$ norms, whose main representative is the sparse pseudoinverse for $p = 1$. We are motivated by the idea of replacing the Moore-Penrose pseudoinverse by a sparser generalized inverse which is in some sense well-behaved. Sparsity implies that it is faster to apply the resulting matrix; well-behavedness would imply that we do not lose much in stability with respect to the least-squares performance of the MPP. We first address questions of uniqueness and non-zero count of (putative) sparse pseudoinverses. We show that a sparse pseudoinverse is generically unique, and that it indeed reaches optimal sparsity for almost all matrices. We then turn to proving our main stability result: finite-size concentration bounds for the Frobenius norm of $\ell^p$-minimal inverses for $1 \le p \le 2$. Our proof is based on tools from convex analysis and random matrix theory, in particular the recently developed convex Gaussian min-max theorem. Along the way we prove several results about sparse representations and convex programming that were known as folklore, but for which we could find no proof.
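    The generic sparsity claim can be probed numerically: each row of the $\ell^1$-minimal inverse solves a linear program, and a basic optimal solution of an LP has at most as many nonzeros as equality constraints. Below is a small sketch (assuming scipy; a numerical illustration, not the paper's proof) that solves these row LPs for a random matrix.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 30, 10
A = rng.standard_normal((m, n))          # tall full-rank matrix

# Row i of the l1-minimal inverse solves: min ||g||_1 s.t. A^T g = e_i.
# Standard LP split g = u - v with u, v >= 0.
rows = []
for i in range(n):
    e_i = np.zeros(n); e_i[i] = 1.0
    res = linprog(c=np.ones(2 * m),
                  A_eq=np.hstack([A.T, -A.T]),   # A^T (u - v) = e_i
                  b_eq=e_i, bounds=(0, None), method="highs")
    rows.append(res.x[:m] - res.x[m:])
G = np.vstack(rows)

# A basic optimal solution has at most n nonzeros, so each row of the
# sparse pseudoinverse generically has n nonzeros out of m entries.
print("nonzeros per row:", (np.abs(G) > 1e-9).sum(axis=1))
print("G A = I:", np.allclose(G @ A, np.eye(n)))
```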

    Spectral analysis of large reflexive generalized inverse and Moore-Penrose inverse matrices

    A reflexive generalized inverse and the Moore-Penrose inverse are often confused in the statistical literature, but in fact they behave completely differently when the population covariance matrix is not a multiple of the identity. In this paper, we study the spectral properties of a reflexive generalized inverse and of the Moore-Penrose inverse of the sample covariance matrix. The obtained results are used to assess the difference in the asymptotic behaviour of their eigenvalues.
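    To make the distinction concrete, here is a minimal numpy sketch using one standard construction of a reflexive generalized inverse (a hypothetical example, not the specific inverse analyzed in the paper): it builds a reflexive generalized inverse of a singular sample covariance different from the Moore-Penrose inverse and verifies the defining identities $SGS = S$ and $GSG = G$.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 50, 30                              # dimension exceeds sample size
Sigma = np.diag(np.linspace(1.0, 5.0, p))  # population covariance, not c*I
X = rng.standard_normal((n, p)) @ np.sqrt(Sigma)
S = X.T @ X / n                            # singular sample covariance

Splus = np.linalg.pinv(S)                  # Moore-Penrose inverse

# Perturbing the MPP along the null space gives another generalized
# inverse G1 (S G1 S = S); G1 S G1 is then a *reflexive* one.
C = 0.1 * rng.standard_normal((p, p))
G1 = Splus + (np.eye(p) - Splus @ S) @ C
G = G1 @ S @ G1                            # S G S = S and G S G = G

print("S G S = S:", np.allclose(S @ G @ S, S, atol=1e-6))
print("G S G = G:", np.allclose(G @ S @ G, G, atol=1e-6))
print("top eigenvalues (MPP):      ", np.sort(np.linalg.eigvalsh(Splus))[-3:])
print("top eigenvalues (reflexive):", np.sort(np.linalg.eigvals(G).real)[-3:])
```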

    Iterative Row Sampling

    There has been significant interest and progress recently in algorithms that solve regression problems involving tall and thin matrices in input-sparsity time. These algorithms find a shorter equivalent of an $n \times d$ matrix where $n \gg d$, which allows one to solve a $\mathrm{poly}(d)$-sized problem instead. In practice, the best performance is often obtained by invoking these routines in an iterative fashion. We show that these iterative methods can be adapted to give theoretical guarantees comparable to, and better than, the current state of the art. Our approaches are based on computing the importances of the rows, known as leverage scores, in an iterative manner. We show that alternating between computing a short matrix estimate and finding more accurate approximate leverage scores leads to a series of geometrically smaller instances. This gives an algorithm that runs in $O(\mathrm{nnz}(A) + d^{\omega + \theta} \epsilon^{-2})$ time for any $\theta > 0$, where the $d^{\omega + \theta}$ term is comparable to the cost of solving a regression problem on the small approximation. Our results are built upon the close connection between randomized matrix algorithms, iterative methods, and graph sparsification.
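    For reference, here is what leverage-score row sampling looks like in its simplest form, with exact scores computed from a thin SVD (assuming numpy; this is the standard baseline the paper improves on, not its iterative algorithm, which avoids the SVD by refining approximate scores).

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Leverage score of row i: squared norm of row i of U, where A = U S V^T.
U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = (U ** 2).sum(axis=1)                     # the scores sum to d

# Keep k rows sampled with probability ~ leverage, rescaled so that
# E[(SA)^T (SA)] = A^T A; then solve the small regression problem.
k = 400
p = lev / lev.sum()
idx = rng.choice(n, size=k, replace=True, p=p)
scale = 1.0 / np.sqrt(k * p[idx])
SA, Sb = A[idx] * scale[:, None], b[idx] * scale

x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_samp = np.linalg.lstsq(SA, Sb, rcond=None)[0]
print("relative error:", np.linalg.norm(x_samp - x_full) / np.linalg.norm(x_full))
```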

    Nonparametric estimation of covariance functions by model selection

    We propose a model selection approach for covariance estimation of a multi-dimensional stochastic process. Under very general assumptions, observing i.i.d. replications of the process at fixed observation points, we construct an estimator of the covariance function by expanding the process onto a collection of basis functions. We study the non-asymptotic properties of this estimator and give a tractable way of selecting the best estimator among a set of candidates. The optimality of the procedure is proved via an oracle inequality which guarantees that the best model is selected.
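    The basic pipeline can be sketched in a few lines (assuming numpy; a toy with a sine basis and an arbitrary penalty constant, not the paper's estimator or its calibrated penalty): project the empirical covariance onto nested basis expansions and pick the model size by a penalized criterion.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, K = 60, 300, 8
t = np.linspace(0.0, 1.0, T)

# Toy process: X(t) = sum_k (xi_k / k) phi_k(t) with xi_k iid N(0, 1).
phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, K + 1), t))
X = (rng.standard_normal((N, K)) / np.arange(1, K + 1)) @ phi

emp = X.T @ X / N                        # empirical covariance on the grid

def fitted(m):
    """Least-squares projection of emp onto span{phi_i phi_j, i,j <= m}."""
    B = phi[:m]                          # (m, T) basis evaluated on the grid
    Minv = np.linalg.inv(B @ B.T)
    Theta = Minv @ B @ emp @ B.T @ Minv  # coefficient matrix
    return B.T @ Theta @ B

# Penalized empirical risk; the m^2/N rate mimics "model dimension over
# sample size", but the constant here is arbitrary.
crit = {m: np.linalg.norm(emp - fitted(m), "fro") ** 2 + 2.0 * m * m / N
        for m in range(1, K + 3)}
print("selected model size:", min(crit, key=crit.get))
```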

    Algebraic properties of Manin matrices 1

    We study a class of matrices with noncommutative entries, which were first considered by Yu. I. Manin in 1988 in relation with quantum group theory. They are defined as "noncommutative endomorphisms" of a polynomial algebra. More explicitly, their defining conditions read: 1) elements in the same column commute; 2) commutators of the cross terms are equal: $[M_{ij}, M_{kl}] = [M_{kj}, M_{il}]$ (e.g. $[M_{11}, M_{22}] = [M_{21}, M_{12}]$). The basic claim is that despite noncommutativity many theorems of linear algebra hold true for Manin matrices in a form identical to that of the commutative case. Moreover, in some examples the converse is also true. The present paper gives a complete list and detailed proofs of the algebraic properties of Manin matrices known to date; many of them are new. In particular, we present the formulation in terms of matrix (Leningrad) notation; provide complete proofs that an inverse of a Manin matrix is again a Manin matrix, and of the Schur formula for the determinant of a block matrix; and we generalize the noncommutative Cauchy-Binet formulas discovered recently [arXiv:0809.3516], which include the classical Capelli and related identities. We also discuss many other properties, such as the Cramer formula for the inverse matrix, the Cayley-Hamilton theorem, Newton and MacMahon-Wronski identities, PlĂŒcker relations, Sylvester's theorem, the Lagrange-Desnanot-Lewis Carroll formula, the Weinstein-Aronszajn formula, some multiplicativity properties of the determinant, relations with quasideterminants, calculation of the determinant via Gauss decomposition, conjugation to the second normal (Frobenius) form, and so on. We refer to [arXiv:0711.2236] for some applications.
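    The defining conditions are easy to check mechanically. Below is a small sketch (assuming sympy) that verifies both conditions for a simple example, a 2x2 matrix with a duplicated row, which satisfies them for arbitrary noncommuting $x$ and $y$; it only checks the definition, not any of the paper's theorems.

```python
from sympy import symbols, expand

x, y = symbols("x y", commutative=False)
M = [[x, y],
     [x, y]]       # duplicating a row gives a Manin matrix for any x, y

comm = lambda a, b: a * b - b * a

# 1) entries in the same column commute
print(all(expand(comm(M[0][j], M[1][j])) == 0 for j in range(2)))
# 2) cross commutators coincide: [M_11, M_22] = [M_21, M_12]
print(expand(comm(M[0][0], M[1][1]) - comm(M[1][0], M[0][1])) == 0)
```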

    Signals on Graphs: Uncertainty Principle and Sampling

    In many applications, the observations can be represented as a signal defined over the vertices of a graph. The analysis of such signals requires the extension of standard signal processing tools. In this work, first, we provide a class of graph signals that are maximally concentrated on the graph domain and on its dual. Then, building on this framework, we derive an uncertainty principle for graph signals and illustrate the conditions for the recovery of band-limited signals from a subset of samples. We show an interesting link between the uncertainty principle and sampling, and propose alternative signal recovery algorithms, including a generalization to frame-based reconstruction methods. After showing that the performance of signal recovery algorithms is significantly affected by the location of the samples, we suggest and compare a few alternative sampling strategies. Finally, we provide the conditions for perfect recovery of a useful signal corrupted by sparse noise, showing that this problem is also intrinsically related to vertex-frequency localization properties.
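    The band-limited recovery setup is simple to demonstrate (a minimal numpy sketch of the standard least-squares reconstruction, not the paper's specific algorithms or sampling strategies): a signal spanned by the first k Laplacian eigenvectors is recovered exactly from samples as long as the sampled rows of the eigenvector matrix have full column rank.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, m = 60, 8, 20                    # vertices, bandwidth, samples

# Random undirected graph and its combinatorial Laplacian.
Adj = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
Adj = Adj + Adj.T
L = np.diag(Adj.sum(axis=1)) - Adj
_, U = np.linalg.eigh(L)               # graph Fourier basis (columns)

x = U[:, :k] @ rng.standard_normal(k)  # band-limited signal

S = rng.choice(n, size=m, replace=False)     # sampled vertex set
# Recover the k Fourier coefficients from the samples; this succeeds
# whenever U[S, :k] has full column rank (the sampling condition).
coef, *_ = np.linalg.lstsq(U[S, :k], x[S], rcond=None)
x_hat = U[:, :k] @ coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```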
    • 
