
    Fourier PCA and Robust Tensor Decomposition

    Full text link
    Fourier PCA is Principal Component Analysis of a matrix obtained from higher-order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in rank-1 decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$ from observations $y = Ax$, where $x$ is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions $m$ can be arbitrarily higher than the dimension $n$, and the columns of $A$ only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise. Comment: Extensively revised; details added; minor errors corrected; exposition improved
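    The structure Fourier PCA exploits can be sanity-checked numerically in its simplest instance. At $u = 0$, the Hessian of $\log \mathbb{E}[e^{i u^T y}]$ for $y = Ax$ with independent components equals $-\mathrm{Cov}(y) = -A\,\mathrm{diag}(\mathrm{Var}\,x_j)\,A^T$; the method's higher-order analogues of this matrix are what make the underdetermined case tractable. A minimal NumPy sketch (distributions and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 3, 200_000

# Mixing matrix and independent, zero-mean, non-Gaussian sources.
A = rng.standard_normal((n, m))
x = np.vstack([
    rng.uniform(-1.0, 1.0, N),      # variance 1/3
    rng.exponential(1.0, N) - 1.0,  # variance 1
    rng.uniform(-2.0, 2.0, N),      # variance 4/3
])
y = A @ x                           # observations, one column per sample

# At u = 0 the Hessian of log E[exp(i u^T y)] is -Cov(y) = -A diag(Var x_j) A^T.
emp_cov = np.cov(y)
theo_cov = A @ np.diag([1/3, 1.0, 4/3]) @ A.T
```

    The empirical covariance matches the predicted $A\,\mathrm{diag}(\mathrm{Var}\,x_j)\,A^T$ structure up to sampling error; Fourier PCA evaluates analogous derivative matrices at nonzero $u$, where non-Gaussianity enters.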

    New Structured Matrix Methods for Real and Complex Polynomial Root-finding

    Full text link
    We combine the known methods for univariate polynomial root-finding and for computations in the Frobenius matrix algebra with our novel techniques to advance the numerical solution of a univariate polynomial equation, and in particular the numerical approximation of the real roots of a polynomial. Our analysis and experiments show the efficiency of the resulting algorithms. Comment: 18 pages
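    The Frobenius-matrix connection underlying this line of work can be illustrated in a few lines: the roots of a polynomial are the eigenvalues of its Frobenius companion matrix. A minimal NumPy sketch of that starting point (not the authors' algorithm, which combines it with structured-matrix techniques):

```python
import numpy as np

def companion_roots(coeffs):
    """Approximate the roots of a polynomial via the eigenvalues of its
    Frobenius companion matrix. coeffs = [c_n, ..., c_1, c_0], highest degree first."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                    # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones
    C[:, -1] = -c[:0:-1]            # last column: negated low-order coefficients
    return np.linalg.eigvals(C)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = np.sort(companion_roots([1, -6, 11, -6]).real)
```

    Unstructured eigensolvers cost $O(n^3)$ per iteration sweep on the companion matrix; exploiting its structure, as the paper does, is what makes this approach competitive.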

    Perturbation, extraction and refinement of invariant pairs for matrix polynomials

    Get PDF
    Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures based directly on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
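    The defining property of an invariant pair can be checked directly. For a quadratic matrix polynomial $P(\lambda) = \lambda^2 I + \lambda C + K$, a pair $(X, S)$ is invariant when $X S^2 + C X S + K X = 0$. A small NumPy sketch that builds such a pair from a linearization, as the paper's extraction step does (the matrices are random and illustrative, not from the paper's applications):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
K = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# Linearize P(lam) = lam^2 I + lam C + K as a standard eigenproblem:
# eigenvectors of L have the form [x; lam*x] with P(lam) x = 0.
L = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C]])
lam, V = np.linalg.eig(L)

# Extract an invariant pair (X, S): top blocks of two eigenvectors,
# and the corresponding eigenvalues on the diagonal of S.
X = V[:n, :2]
S = np.diag(lam[:2])

# Defining property of an invariant pair: X S^2 + C X S + K X = 0.
residual = X @ S @ S + C @ X @ S + K @ X
res_norm = np.linalg.norm(residual)
```

    In practice the extracted pair is only approximately invariant, and the residual above is exactly the quantity the paper's polynomial-based refinement procedures drive toward zero.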

    Relative conditioning of linear systems of ODEs with respect to perturbation in the matrix of the system and in the initial value

    Get PDF
    The thesis studies how perturbations in the initial value $y_0$ or in the coefficient matrix $A$ propagate along the solutions of $n$-dimensional linear ordinary differential equations (ODEs)
    \begin{equation*} \left\{ \begin{array}{l} y^\prime(t) = Ay(t),\ t \geq 0,\\ y(0) = y_0, \end{array} \right. \end{equation*}
    where $A \in \mathbb{R}^{n \times n}$, $y_0 \in \mathbb{R}^n$, and $y(t) = e^{tA}y_0$ is the solution of the equation.
    We begin with a perturbation analysis for the case where the initial value $y_0$ is perturbed to $\tilde{y}_0$ with relative error $\varepsilon = \frac{\|\tilde{y}_0 - y_0\|}{\|y_0\|}$, where $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$. Due to the perturbation in the initial value, the solution $y(t) = e^{tA}y_0$ is perturbed to $\tilde{y}(t) = e^{tA}\tilde{y}_0$ with relative error
    \begin{equation*} \delta(t) = \frac{\left\| e^{tA}\tilde{y}_0 - e^{tA}y_0 \right\|}{\left\| e^{tA}y_0 \right\|}. \end{equation*}
    In other words, we study the (relative) conditioning of the problem $y_0 \mapsto e^{tA}y_0$. The relation between the error $\varepsilon$ and the error $\delta(t)$ is described by three condition numbers: the condition number for a given direction of perturbation, the condition number independent of the direction of perturbation, and the condition number independent of both the direction of perturbation and the specific initial value. How these condition numbers behave over a long period of time is an important aspect of the study. The thesis then moves on to perturbations in the matrix, as well as to componentwise (rather than normwise) relative errors for perturbations of the initial value.
    For the first topic, we study how perturbations propagate along the solution of the ODE when it is the coefficient matrix $A$, rather than the initial value, that is perturbed; in other words, the conditioning of the problem $A \mapsto e^{tA}y_0$. When the matrix $A$ is perturbed to $\tilde{A}$, the relative error is $\epsilon = \frac{\vertiii{\tilde{A} - A}}{\vertiii{A}}$ and the relative error in the solution of the ODE is
    \begin{equation*} \xi(t) = \frac{\left\| e^{t\tilde{A}}y_0 - e^{tA}y_0 \right\|}{\left\| e^{tA}y_0 \right\|}. \end{equation*}
    We introduce three condition numbers as before. The analysis is carried out for a normal matrix $A$ in the 2-norm. We give useful upper and lower bounds on these three condition numbers and study their asymptotic behavior as time goes to infinity. One may also be interested in the relative errors
    \begin{equation*} \delta_l(t) = \frac{\vert \tilde{y}_l(t) - y_l(t) \vert}{\vert y_l(t) \vert}, \quad l = 1, \dots, n, \end{equation*}
    of the components of the perturbed solution. Motivated by the fact that componentwise relative errors carry more information than the normwise relative error, we carry out a componentwise relative error analysis, which is the other topic of this thesis. We consider perturbations in the initial value $y_0$ with normwise relative error $\varepsilon$ and relative errors $\delta_l(t)$ in the components of the solution. The interest is to study, for the $l$-th component, the conditioning of the problem $y_0 \mapsto y_l(t) = e_l^T e^{tA} y_0$, where $e_l$ is the $l$-th vector of the canonical basis of $\mathbb{R}^n$. We carry out this analysis for a diagonalizable matrix $A$, diagonalizability being a generic situation.
    We give two condition numbers in this part of the thesis and study their asymptotic behavior as time goes to infinity.
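    The normwise setting above can be illustrated numerically: for a perturbation of the initial value with relative error $\varepsilon$, the error $\delta(t)$ is bounded by $\varepsilon$ times the direction-independent condition number $\|e^{tA}\|\,\|y_0\| / \|e^{tA}y_0\|$. A small sketch in Python with NumPy/SciPy (matrix, time, and perturbation direction are illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
y0 = rng.standard_normal(n)

# Perturb the initial value in a random direction with relative error eps.
d = rng.standard_normal(n)
eps = 1e-6
y0_tilde = y0 + eps * np.linalg.norm(y0) * d / np.linalg.norm(d)

t = 2.0
E = expm(t * A)  # propagator e^{tA}

# Relative error delta(t) in the solution caused by the perturbation.
delta = np.linalg.norm(E @ y0_tilde - E @ y0) / np.linalg.norm(E @ y0)

# Direction-independent condition number: worst case over all directions,
# K(t) = ||e^{tA}||_2 * ||y0|| / ||e^{tA} y0||, so delta(t) <= K(t) * eps.
K = np.linalg.norm(E, 2) * np.linalg.norm(y0) / np.linalg.norm(E @ y0)
```

    Since $\|e^{tA}y_0\| \leq \|e^{tA}\|\,\|y_0\|$, the condition number $K(t)$ is always at least 1; the thesis's long-time analysis asks how such quantities behave as $t \to \infty$.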

    Relative Perturbation Theory: I. Eigenvalue and Singular Value Variations

    Full text link