Fourier PCA and Robust Tensor Decomposition
Fourier PCA is Principal Component Analysis of a matrix obtained from higher-order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in their rank decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$ from observations $y = Ax$, where $x$ is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions $m$ can be arbitrarily higher than the dimension $n$, and the columns of $A$ only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise.
Comment: Extensively revised; details added; minor errors corrected; exposition improved
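The central object can be sketched in a few lines of NumPy. The sketch below is illustrative only, not the paper's algorithm, and all names are ours: it estimates the Hessian (a second-order derivative) of the logarithm of the empirical characteristic function by finite differences, and its eigendecomposition is the PCA step. As a sanity check we use Gaussian data, for which this Hessian equals the negative of the covariance matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sanity-check data: for a Gaussian sample the Hessian of the log of the
# characteristic function is exactly minus the covariance matrix Sigma.
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)

def log_cf(u):
    """Log of the empirical characteristic function at the point u."""
    return np.log(np.mean(np.exp(1j * X @ u)))

def hessian_log_cf(u, h=0.5):
    """Central finite-difference Hessian of log_cf at u."""
    d = len(u)
    H = np.zeros((d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            ei = h * np.eye(d)[i]
            ej = h * np.eye(d)[j]
            H[i, j] = (log_cf(u + ei + ej) - log_cf(u + ei - ej)
                       - log_cf(u - ei + ej) + log_cf(u - ei - ej)) / (4 * h * h)
    return H

H = hessian_log_cf(np.zeros(2))
# The "PCA" step: eigendecomposition of the Hessian of the log-Fourier transform.
eigvals, eigvecs = np.linalg.eigh(-H.real)
print(np.round(-H.real, 2))  # close to Sigma
```

For non-Gaussian product distributions this Hessian, evaluated at a random point, carries the directional information that the paper's tensor decomposition exploits.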
New Structured Matrix Methods for Real and Complex Polynomial Root-finding
We combine the known methods for univariate polynomial root-finding and for
computations in the Frobenius matrix algebra with our novel techniques to
advance numerical solution of a univariate polynomial equation, and in
particular numerical approximation of the real roots of a polynomial. Our
analysis and experiments demonstrate the efficiency of the resulting algorithms.
Comment: 18 pages
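As background, the classical connection between root-finding and the Frobenius (companion) matrix, which the paper's structured methods accelerate, can be sketched as follows. This is the textbook baseline, not the paper's algorithm:

```python
import numpy as np

def companion_roots(coeffs):
    """Approximate the roots of c[0]*x^n + c[1]*x^(n-1) + ... + c[n]
    as the eigenvalues of the Frobenius (companion) matrix."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                    # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones
    C[:, -1] = -c[:0:-1]           # last column: negated coefficients, reversed
    return np.linalg.eigvals(C)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = np.sort(companion_roots([1, -6, 11, -6]).real)
print(roots)  # approximately [1. 2. 3.]
```

Exploiting the displacement structure of this matrix, rather than treating it as a dense eigenproblem, is what makes the structured approach attractive.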
Perturbation, extraction and refinement of invariant pairs for matrix polynomials
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under
perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments
with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
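The invariant-pair condition itself is straightforward to check numerically. The following sketch uses illustrative random data and our own notation, not the paper's code: it builds an invariant pair of a quadratic matrix polynomial from two eigenpairs of its companion linearization and verifies the defining residual.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Quadratic matrix polynomial P(lam) = lam^2 * I + lam * C + K (illustrative data).
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Companion linearization: eigenpairs of A correspond to eigenpairs of P.
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C]])
lam, V = np.linalg.eig(A)

# Assemble an invariant pair (X, S) from two eigenpairs of the linearization.
idx = [0, 1]
X = V[:n, idx]          # top blocks of the eigenvectors
S = np.diag(lam[idx])

# Invariant-pair condition for the quadratic case: X S^2 + C X S + K X = 0.
R = X @ S @ S + C @ X @ S + K @ X
print(np.linalg.norm(R))  # residual, close to zero
```

Unlike single eigenvectors, the pair (X, S) stays numerically well behaved even when the selected eigenvalues are clustered, which is the motivation for working with invariant pairs directly.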
Relative conditioning of linear systems of ODEs with respect to perturbation in the matrix of the system and in the initial value
The thesis is about how perturbations in the initial value or in the coefficient matrix propagate along the solutions of $n$-dimensional linear ordinary differential equations (ODEs)
\begin{equation*}
\left\{
\begin{array}{l}
y^\prime(t) =Ay(t),\ t\geq 0,\\
y(0)=y_0,
\end{array}
\right.
\end{equation*}
where $A\in\mathbb{R}^{n\times n}$, $y_0\in\mathbb{R}^n$, and $y(t)=e^{tA}y_0$ is the solution of the equation.\\
We begin by considering a perturbation analysis when the initial value $y_0$ is perturbed to $\tilde{y}_0$ with relative error
\begin{equation*}
\varepsilon=\frac{\norm{\tilde{y}_0-y_0}}{\norm{y_0}},
\end{equation*}
where $\norm{\cdot}$ is a vector norm on $\mathbb{R}^n$. Due to the perturbation in the initial value, the solution is perturbed to $\tilde{y}(t)=e^{tA}\tilde{y}_0$ with relative error
\begin{equation*}
\delta(t)=\frac{\norm{\tilde{y}(t)-y(t)}}{\norm{y(t)}}.
\end{equation*}
In other words, the interest is to study the (relative) conditioning of the problem
\begin{equation*}
y_0\mapsto e^{tA}y_0.
\end{equation*}
The relation between the error $\varepsilon$ and the error $\delta(t)$ is described by three condition numbers, namely: the condition number associated with a specific direction of perturbation, the condition number independent of the direction of perturbation, and the condition number independent of both the specific direction of perturbation and the specific initial value. How these condition numbers behave over a long period of time is an important aspect of the study.
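Assuming the 2-norm, the direction-independent condition number described above can be sketched numerically (our notation and data, not the thesis code):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative setup: a normal (here symmetric) matrix A, as in the
# thesis's 2-norm analysis.
A = np.array([[-1.0, 2.0], [2.0, -3.0]])
y0 = np.array([1.0, 1.0])
t = 2.0
E = expm(t * A)

def amplification(direction):
    """delta(t)/eps for a small perturbation of y0 along `direction`."""
    d = direction / np.linalg.norm(direction)
    return np.linalg.norm(E @ d) * np.linalg.norm(y0) / np.linalg.norm(E @ y0)

# Direction-independent condition number: the worst case over all directions,
# kappa(t) = ||e^{tA}||_2 * ||y0|| / ||e^{tA} y0||.
kappa = np.linalg.norm(E, 2) * np.linalg.norm(y0) / np.linalg.norm(E @ y0)

# Sampled directions never exceed the worst case kappa.
rng = np.random.default_rng(0)
worst = max(amplification(v) for v in rng.standard_normal((1000, 2)))
print(kappa, worst)
```

Since the worst case maximizes over perturbation directions, kappa is always at least 1 and upper-bounds every sampled amplification.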
In the thesis, we then move to perturbations of the matrix, as well as to componentwise relative errors, rather than normwise relative errors, for perturbations of the initial value. Regarding the first topic of the thesis, we study how perturbations propagate along the solution of the ODE when it is the coefficient matrix, rather than the initial value, that is perturbed.
In other words, the interest is to study the conditioning of the problem
\begin{equation*}
A\mapsto e^{tA}y_0.
\end{equation*}
When the matrix $A$ is perturbed to $\tilde{A}$, the relative error is given by
\begin{equation*}
\epsilon=\frac{\vertiii{\tilde{A}-A}}{\vertiii{A}},
\end{equation*}
where $\vertiii{\cdot}$ is a matrix norm, and the relative error in the solution of the ODE is given by
\begin{equation*}
\delta(t)=\frac{\norm{e^{t\tilde{A}}y_0-e^{tA}y_0}}{\norm{e^{tA}y_0}}.
\end{equation*}
We introduce three condition numbers as before. The analysis of the condition numbers is carried out for a normal matrix $A$ and with the Euclidean $2$-norm. We give useful upper and lower bounds on these three condition numbers and study their asymptotic behavior as time goes to infinity.
One may also be interested in the relative errors of the individual components of the perturbed solution. With the motivation that componentwise relative errors give more information than the normwise relative error, we carry out a componentwise relative error analysis, which is the other topic of this thesis.
We consider perturbations of the initial value with normwise relative error $\varepsilon$ and the relative error in the $i$-th component of the solution given by
\begin{equation*}
\delta_i(t)=\frac{\lvert\tilde{y}_i(t)-y_i(t)\rvert}{\lvert y_i(t)\rvert}.
\end{equation*}
The interest is to study, for the $i$-th component, the conditioning of the problem
\begin{equation*}
y_0\mapsto e_i^{\mathsf{T}}e^{tA}y_0,
\end{equation*}
where $e_i$ is the $i$-th vector of the canonical basis of $\mathbb{R}^n$.
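Assuming the 2-norm, the componentwise condition numbers can be sketched as follows (our notation and data, not the thesis code):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch: conditioning of the i-th solution component
# y0 -> e_i^T e^{tA} y0 under a normwise perturbation of y0.
A = np.array([[-1.0, 2.0], [2.0, -3.0]])   # diagonalizable (symmetric)
y0 = np.array([1.0, 1.0])
t = 1.5
E = expm(t * A)
y = E @ y0

# Worst-case amplification for component i:
# kappa_i(t) = ||E^T e_i|| * ||y0|| / |y_i(t)|   (rows of E, 2-norm).
kappa = np.linalg.norm(E, axis=1) * np.linalg.norm(y0) / np.abs(y)
print(kappa)  # one condition number per solution component
```

Each kappa_i is at least 1, and a component can be far worse conditioned than the normwise bound suggests whenever y_i(t) is small.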
We make this analysis for a diagonalizable matrix $A$, diagonalizability being a generic situation for the matrix $A$. We give two condition numbers in this part of the thesis and study their asymptotic behavior as time goes to infinity.