Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms
We propose a gradient-based Jacobi algorithm for a class of maximization
problems on the unitary group, with a focus on approximate diagonalization of
complex matrices and tensors by unitary transformations. We provide weak
convergence results, and prove local linear convergence of this algorithm. The
convergence results also apply to the case of real-valued tensors.
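The abstract's algorithm targets complex matrices and tensors on the unitary group; as a rough illustration of the Jacobi idea it builds on, here is a minimal sketch of a cyclic Jacobi sweep that approximately diagonalizes a real symmetric matrix with Givens rotations (an assumed NumPy implementation for intuition, not the authors' algorithm):

```python
import numpy as np

def jacobi_sweep(A, sweeps=10, tol=1e-12):
    """Cyclic Jacobi: repeatedly annihilate the (p, q) off-diagonal entry
    of a real symmetric matrix with a Givens rotation.  Returns (Q, D)
    with Q orthogonal and Q.T @ A @ Q ~ D nearly diagonal."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(sweeps):
        off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # angle solving tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p]),
                # which zeroes the rotated (p, q) entry
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s
                A = G.T @ A @ G   # off-diagonal mass strictly decreases
                Q = Q @ G
    return Q, A
```

In the matrix case each rotation reduces the off-diagonal Frobenius norm, which is the mechanism behind the weak and local linear convergence results the abstract refers to.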
On approximate diagonalization of third order symmetric tensors by orthogonal transformations
In this paper, we study the approximate orthogonal diagonalization problem of
third order symmetric tensors. We define several classes of approximately
diagonal tensors, including the ones corresponding to the stationary points of
this problem. We study the relationships between these classes, and other
well-known objects, such as tensor Z-eigenvalue and Z-eigenvector. We also
prove results on convergence of the cyclic Jacobi (or Jacobi CoM2) algorithm.
Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis
In this paper, we propose a Jacobi-type algorithm to solve the low rank
orthogonal approximation problem of symmetric tensors. This algorithm includes
as a special case the well-known Jacobi CoM2 algorithm for the approximate
orthogonal diagonalization problem of symmetric tensors. We first prove the
weak convergence of this algorithm, \textit{i.e.} any accumulation point is a
stationary point. Then we study the global convergence of this algorithm under
a gradient based ordering for a special case: the best rank-2 orthogonal
approximation of 3rd order symmetric tensors, and prove that an accumulation
point is the unique limit point under some conditions. Numerical experiments
are presented to show the efficiency of this algorithm.
Randomized Joint Diagonalization of Symmetric Matrices
Given a family of nearly commuting symmetric matrices, we consider the task
of computing an orthogonal matrix that nearly diagonalizes every matrix in the
family. In this paper, we propose and analyze randomized joint diagonalization
(RJD) for performing this task. RJD applies a standard eigenvalue solver to
random linear combinations of the matrices. Unlike existing optimization-based
methods, RJD is simple to implement and leverages existing high-quality linear
algebra software packages. Our main novel contribution is to prove robust
recovery: Given a family that is ε-near to a commuting family, RJD
jointly diagonalizes this family, with high probability, up to an error of norm
O(ε). No other existing method is known to enjoy such a universal
robust recovery guarantee. We also discuss how the algorithm can be further
improved by deflation techniques and demonstrate its state-of-the-art
performance by numerical experiments with synthetic and real-world data.
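The core of RJD is simple enough to sketch: draw random coefficients, form one linear combination of the family, and hand it to a standard symmetric eigensolver, whose eigenvectors then serve as the common diagonalizer. The single-combination variant and names below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def rjd(matrices, rng=None):
    """Randomized joint diagonalization sketch: eigendecompose one random
    linear combination of the (nearly commuting) symmetric matrices and
    reuse its eigenvectors as the joint diagonalizer for the whole family."""
    rng = np.random.default_rng(rng)
    mu = rng.standard_normal(len(matrices))
    mu /= np.linalg.norm(mu)
    combo = sum(m * A for m, A in zip(mu, matrices))
    _, Q = np.linalg.eigh(combo)   # standard symmetric eigenvalue solver
    return Q

def offdiag_norm(A):
    """Frobenius norm of the off-diagonal part, to measure residuals."""
    return np.linalg.norm(A - np.diag(np.diag(A)))
```

Because the only nontrivial step is a single call to `eigh`, the method inherits the robustness and performance of existing dense eigensolvers, which is exactly the implementation advantage the abstract highlights over optimization-based schemes.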
Algorithms for the joint diagonalization of tensors without unitary constraints. Application to the MIMO separation of digital telecommunication sources
This thesis develops joint diagonalization methods for matrices and third-order tensors, aimed at MIMO source separation in digital telecommunications. After a review of the state of the art, the motivations and objectives are presented. The joint diagonalization and blind source separation problems are then defined, and a link between the two fields is established. Thereafter, five Jacobi-like iterative algorithms based on an LU parameterization are developed. For each of them, we propose to derive the diagonalizing matrix by optimizing an inverse criterion. Two approaches are investigated: minimizing the criterion directly, or assuming that the elements of the considered set are almost diagonal. Regarding parameter estimation, two strategies are implemented: one estimates each parameter independently, while the other independently estimates well-chosen pairs of parameters. We thus propose three algorithms for the joint diagonalization of symmetric or Hermitian complex matrices: the first relies on finding the roots of the derivative of the criterion, the second on a search for a minor eigenvector, and the third on a gradient descent method enhanced by computation of the optimal step size. For the joint diagonalization of sets of symmetric, INDSCAL, or non-symmetric third-order tensors, we develop two algorithms; for each of them, the parameters are obtained by computing the roots of the derivative of the considered criterion. We also show the link between the joint diagonalization of a set of third-order tensors and the canonical polyadic decomposition of a fourth-order tensor, and we compare the two approaches through numerical simulations. The good behavior of the proposed algorithms is illustrated by means of numerical simulations.
Finally, they are applied to the source separation of digital telecommunication signals.
Lectures on Mechanics
Publisher's description: The use of geometric methods in classical mechanics has proven fruitful, with wide applications in physics and engineering. In this book, Professor Marsden concentrates on these geometric aspects, especially on symmetry techniques. The main points he covers are: the stability of relative equilibria, which is analyzed using the block diagonalization technique; geometric phases, studied using the reduction and reconstruction technique; and bifurcation of relative equilibria and chaos in mechanical systems. A unifying theme for these points is provided by reduction theory, the associated mechanical connection and techniques from dynamical systems. These methods can be applied to many control and stabilization situations, and this is illustrated using rigid bodies with internal rotors, and the use of geometric phases in mechanical systems. To illustrate the above ideas and the power of geometric arguments, the author studies a variety of specific systems, including the double spherical pendulum and the classical rotating water molecule.
Factor Analysis of Data Matrices: New Theoretical and Computational Aspects With Applications
The classical fitting problem in exploratory factor analysis (EFA) is to find estimates for the factor loadings matrix and the matrix of unique factor variances which give the best fit to the sample covariance or correlation matrix with respect to some goodness-of-fit criterion. Predicted factor scores can be obtained as a function of these estimates and the data. In this thesis, the EFA model is considered as a specific data matrix decomposition with fixed unknown matrix parameters. Fitting the EFA model directly to the data yields simultaneous solutions for both loadings and factor scores. Several new algorithms are introduced for the least squares and weighted least squares estimation of all EFA model unknowns. The numerical procedures are based on the singular value decomposition, facilitate the estimation of both common and unique factor scores, and work equally well when the number of variables exceeds the number of available observations.
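The thesis's SVD-based procedures are not reproduced here, but the basic device of fitting a data-matrix decomposition by least squares can be sketched as follows. This simplified illustration drops the unique-factor part of the EFA model and keeps only the common-factor term X ≈ F Lᵀ; the function name and scaling conventions are assumptions:

```python
import numpy as np

def ls_factor_fit(X, k):
    """Least-squares rank-k fit of a data matrix as X ~ F @ L.T
    (factor scores times loadings) via the truncated SVD, which gives
    the best rank-k approximation of the centered data.  The
    unique-variance part of the EFA model is omitted in this sketch."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)                      # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :k] * np.sqrt(n)                    # scores, unit variance
    L = (Vt[:k].T * s[:k]) / np.sqrt(n)          # loadings
    return F, L
```

Note that nothing here requires more observations than variables: the truncated SVD is well defined either way, which mirrors the abstract's remark that the procedures work equally well when the number of variables exceeds the number of observations.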
Like EFA, noisy independent component analysis (ICA) is a technique for reduction of the data dimensionality in which the interrelationships among the observed variables are explained in terms of a much smaller number of latent factors. The key difference between EFA and noisy ICA is that in the latter model the common factors are assumed to be both independent and non-normal. In contrast to EFA, there is no rotational indeterminacy in noisy ICA. In this thesis, noisy ICA is viewed as a method of factor rotation in EFA. Starting from an initial EFA solution, an orthogonal rotation matrix is sought that minimizes the dependence between the common factors. The idea of rotating the scores towards independence is also employed in three-mode factor analysis to analyze data sets having a three-way structure.
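The rotation-toward-independence idea can be caricatured with a toy pairwise scheme: for each pair of factor-score columns, grid-search the planar rotation angle that maximizes the sum of squared excess kurtoses, a crude proxy for non-normality. This is purely illustrative; actual noisy-ICA rotation criteria measure dependence more carefully:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis of a 1-D array (0 for a normal sample)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

def rotate_to_nongaussianity(F, n_angles=180, sweeps=3):
    """Toy rotation toward independence: pairwise planar rotations of
    the score columns, each chosen by grid search to maximize the sum
    of squared excess kurtoses of the rotated pair."""
    F = F.copy()
    k = F.shape[1]
    R = np.eye(k)
    angles = np.linspace(0.0, np.pi / 2, n_angles, endpoint=False)
    for _ in range(sweeps):
        for p in range(k - 1):
            for q in range(p + 1, k):
                # angle 0 is in the grid, so the objective never decreases
                best = max(angles, key=lambda t:
                           excess_kurtosis(np.cos(t)*F[:, p] + np.sin(t)*F[:, q])**2
                           + excess_kurtosis(-np.sin(t)*F[:, p] + np.cos(t)*F[:, q])**2)
                c, s = np.cos(best), np.sin(best)
                G = np.eye(k)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = -s, s
                F = F @ G        # rotate the score pair
                R = R @ G        # accumulate the orthogonal rotation
    return F, R
```

Starting from any initial EFA solution, the accumulated orthogonal matrix R plays the role of the sought rotation: the loadings are counter-rotated by the same R, so the fit to the data is unchanged while the factors become more non-normal.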
The new theoretical and computational aspects contained in this thesis are illustrated by means of several examples with real and artificial data.