The SVD-Fundamental Theorem of Linear Algebra
Given an m × n matrix A, with m ≥ n, the four subspaces associated with it are shown in Fig. 1 (see [1]).
Fig. 1. The row spaces and the nullspaces of A and A^T; a1 through an and h1 through hm are abbreviations of the alignerframe and hangerframe vectors respectively (see [2]).
The Fundamental Theorem of Linear Algebra tells us that N(A) is the orthogonal complement of R(A^T). These four subspaces tell the whole story of the Linear System Ax = y. So, for example, the absence of N(A^T) indicates that a solution always exists, whereas the absence of N(A) indicates that this solution is unique. Given the importance of these subspaces, computing bases for them is the gist of Linear Algebra. In "Classical" Linear Algebra, bases for these subspaces are computed using Gaussian Elimination; they are orthonormalized with the help of the Gram-Schmidt method. Continuing our previous work [3] and following Uhl's excellent approach [2] we use SVD analysis to compute orthonormal bases for the four subspaces associated with A, and give a 3D explanation. We then state and prove what we call the "SVD-Fundamental Theorem" of Linear Algebra, and apply it in solving systems of linear equations.
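The computation described above can be sketched in a few lines of NumPy; the 4 × 3, rank-2 example matrix is our own illustration, not one from the paper. With A = U Σ V^T, the first rank(A) columns of V and U give orthonormal bases of R(A^T) and R(A), while the remaining ones give bases of N(A) and N(A^T):

```python
import numpy as np

# a 4 x 3, rank-2 example (rows 3 and 4 are combinations of rows 1 and 2)
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 1., 3.]])

U, s, Vt = np.linalg.svd(A)                     # full SVD; U is 4x4, Vt is 3x3
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))                        # numerical rank of A

row_space  = Vt[:r].T     # orthonormal basis of R(A^T)
null_space = Vt[r:].T     # orthonormal basis of N(A)
col_space  = U[:, :r]     # orthonormal basis of R(A)
left_null  = U[:, r:]     # orthonormal basis of N(A^T)
```

The bases come out already orthonormal, with no Gram-Schmidt step, which is the practical advantage of the SVD route over Gaussian elimination.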
Computing the Rank Profile Matrix
The row (resp. column) rank profile of a matrix describes the staircase shape
of its row (resp. column) echelon form. In an ISSAC'13 paper, we proposed a
recursive Gaussian elimination that can compute simultaneously the row and
column rank profiles of a matrix as well as those of all of its leading
sub-matrices, in the same time as state of the art Gaussian elimination
algorithms. Here we first study the conditions making a Gaussian elimination
algorithm reveal this information. To this end, we propose the definition of a
new matrix invariant, the rank profile matrix, summarizing all information on
the row and column rank profiles of all the leading sub-matrices. We also
explore the conditions for a Gaussian elimination algorithm to compute all or
part of this invariant, through the corresponding PLUQ decomposition. As a
consequence, we show that the classical iterative CUP decomposition algorithm
can actually be adapted to compute the rank profile matrix. Used, in a Crout
variant, as a base-case to our ISSAC'13 implementation, it delivers a
significant improvement in efficiency. Second, the row (resp. column) echelon
forms of a matrix are usually computed via different dedicated triangular
decompositions. We show here that, from some PLUQ decompositions, it is
possible to recover the row and column echelon forms of a matrix and of any of
its leading sub-matrices thanks to an elementary post-processing algorithm.
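For reference, the invariant itself can be recovered by brute force from the ranks of all leading sub-matrices: the (i, j) entry of the rank profile matrix is the mixed difference of that rank function. The sketch below (with an illustrative example matrix of ours) shows only the definition; the paper's contribution is computing the invariant at the cost of a single Gaussian elimination.

```python
import numpy as np

def rank_profile_matrix(A):
    """Brute-force rank profile matrix: entry (i, j) is the mixed
    difference of the rank function of leading sub-matrices, which is
    1 exactly when row i and column j contribute a new unit of rank."""
    m, n = A.shape
    def r(i, j):
        return np.linalg.matrix_rank(A[:i, :j]) if i and j else 0
    R = np.zeros((m, n), dtype=int)
    for i in range(m):
        for j in range(n):
            R[i, j] = r(i + 1, j + 1) - r(i, j + 1) - r(i + 1, j) + r(i, j)
    return R

# the anti-diagonal matrix is its own rank profile matrix
B = np.array([[0, 1],
              [1, 0]])
R = rank_profile_matrix(B)
```

This quadratic-many-rank-computations approach is, of course, far more expensive than reading the invariant off a suitable PLUQ decomposition.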
A Basic Result on the Theory of Subresultants
Given the polynomials f, g ∈ Z[x], the main result of our paper,
Theorem 1, establishes a direct one-to-one correspondence between the
modified Euclidean and Euclidean polynomial remainder sequences (prs's) of f, g
computed in Q[x], on one hand, and the subresultant prs of f, g computed
by determinant evaluations in Z[x], on the other.
An important consequence of our theorem is that the signs of Euclidean
and modified Euclidean prs's - computed either in Q[x] or in Z[x] -
are uniquely determined by the corresponding signs of the subresultant prs's.
In this respect, all prs's are uniquely "signed".
Our result fills a gap in the theory of subresultant prs's. In order to place
Theorem 1 into its correct historical perspective we present a brief historical
review of the subject and hint at certain aspects that need - according to
our opinion - to be revised.
ACM Computing Classification System (1998): F.2.1, G.1.5, I.1.2
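The correspondence can be checked experimentally; the SymPy sketch below (the polynomials are Knuth's classical subresultant example, an illustrative choice of ours) computes the Euclidean prs in Q[x] by repeated remainders and the subresultant prs by determinant-based routines, and compares their degree sequences:

```python
from sympy import symbols, rem, subresultants, degree

x = symbols('x')
f = x**8 + x**6 - 3*x**4 - 3*x**3 + 8*x**2 + 2*x - 5
g = 3*x**6 + 5*x**4 - 4*x**2 - 9*x + 21

# Euclidean prs of f, g computed in Q[x] by repeated remainders
prs_q = [f, g]
while True:
    r = rem(prs_q[-2], prs_q[-1], x)
    if r == 0:
        break
    prs_q.append(r)

# subresultant prs of f, g, with integer coefficients
prs_z = subresultants(f, g, x)

# term-by-term correspondence between the two sequences
degs_q = [degree(p, x) for p in prs_q]
degs_z = [degree(p, x) for p in prs_z]
```

Both sequences have degrees 8, 6, 4, 2, 1, 0 for this example; the theorem's content is that the signs of the rational sequence are likewise pinned down by the integer one.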
Subresultant Polynomial Remainder Sequences Obtained by Polynomial Divisions in Q[x] or in Z[x]
In this paper we present two new methods for computing the
subresultant polynomial remainder sequence (prs) of two polynomials f, g ∈ Z[x].
We are now able to also correctly compute the Euclidean and modified
Euclidean prs of f, g by using either of the functions employed by our
methods to compute the remainder polynomials.
Another innovation is that we are able to obtain subresultant prs's in
Z[x] by employing the function rem(f, g, x) to compute the remainder
polynomials in Q[x]. This is achieved by our method subresultants_amv_q
(f, g, x), which is somewhat slow due to the inherent higher cost of
computations in the field of rationals.
To improve speed, our second method, subresultants_amv(f, g,
x), computes the remainder polynomials in the ring Z[x] by employing the
function rem_z(f, g, x); the time complexity and performance of this
method are very competitive.
Our methods are two different implementations of Theorem 1 (Section 3),
which establishes a one-to-one correspondence between the Euclidean and
modified Euclidean prs of f, g, on one hand, and the subresultant prs of f, g,
on the other.
By contrast, if, as is currently the practice, the remainder polynomials
are obtained by the pseudo-remainders function prem(f, g, x), then
only subresultant prs's are correctly computed. Euclidean and modified
Euclidean prs's generated by this function may cause confusion with the signs
and conflict with Theorem 1.
ACM Computing Classification System (1998): F.2.1, G.1.5, I.1.2
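Functions with these names were contributed to SymPy in the module sympy.polys.subresultants_qq_zz; assuming a SymPy version that ships that module, a minimal usage sketch (the polynomials are Knuth's classical example, our own illustrative choice) is:

```python
from sympy import symbols, degree
from sympy.polys.subresultants_qq_zz import subresultants_amv

x = symbols('x')
f = x**8 + x**6 - 3*x**4 - 3*x**3 + 8*x**2 + 2*x - 5
g = 3*x**6 + 5*x**4 - 4*x**2 - 9*x + 21

# subresultant prs of f, g, with remainders computed in the ring Z[x]
prs = subresultants_amv(f, g, x)
degs = [degree(p, x) for p in prs]
```

For this pair the sequence has degrees 8, 6, 4, 2, 1, 0, matching the determinant-defined subresultant prs.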
Higher analogues of the discrete-time Toda equation and the quotient-difference algorithm
The discrete-time Toda equation arises as a universal equation for the
relevant Hankel determinants associated with one-variable orthogonal
polynomials through the mechanism of adjacency, which amounts to the inclusion
of shifted weight functions in the orthogonality condition. In this paper we
extend this mechanism to a new class of two-variable orthogonal polynomials
where the variables are related via an elliptic curve. This leads to a 'Higher
order Analogue of the Discrete-time Toda' (HADT) equation for the associated
Hankel determinants, together with its Lax pair, which is derived from the
relevant recurrence relations for the orthogonal polynomials. In a similar way
as the quotient-difference (QD) algorithm is related to the discrete-time Toda
equation, a novel quotient-quotient-difference (QQD) scheme is presented for
the HADT equation. We show that for both the HADT equation and the QQD scheme
there exist well-posed s-periodic initial value problems, for almost all
s ∈ Z^2. From the Lax pairs we furthermore derive invariants for
corresponding reductions to dynamical mappings for some explicit examples.
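For orientation on the QD connection mentioned above, here is a minimal sketch of the classical quotient-difference rhombus rules applied to the Taylor coefficients of a rational function; the series c_n = 2^n + 1 and the function names are illustrative assumptions of ours, not material from the paper.

```python
from fractions import Fraction

def qd_scheme(c, kmax):
    """Classical quotient-difference (QD) rhombus rules.

    Builds the q- and e-columns from series coefficients c; for a
    rational generating function the q-columns converge to the ratios
    determined by the poles (here 2 and 1)."""
    n_terms = len(c)
    q = {1: [Fraction(c[n + 1], c[n]) for n in range(n_terms - 1)]}
    e = {0: [Fraction(0)] * (n_terms - 1)}
    for k in range(1, kmax + 1):
        e[k] = [q[k][n + 1] - q[k][n] + e[k - 1][n + 1]
                for n in range(len(q[k]) - 1)]
        q[k + 1] = [q[k][n + 1] * e[k][n + 1] / e[k][n]
                    for n in range(len(e[k]) - 1)]
    return q, e

# c_n = 2^n + 1 generates 1/(1 - 2z) + 1/(1 - z)
c = [2**n + 1 for n in range(12)]
q, e = qd_scheme(c, 1)
```

Exact rational arithmetic (Fraction) keeps the scheme free of the rounding instabilities the QD algorithm is known for in floating point.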
Computing with quasiseparable matrices
The class of quasiseparable matrices is defined by a pair of bounds, called the quasiseparable orders, on the ranks of the maximal sub-matrices entirely located in their strictly lower and upper triangular parts. These arise naturally in applications, e.g. as the inverses of band matrices, and are widely used because they admit structured representations that allow computing with them in time linear in the dimension and quadratic in the quasiseparable order. We show, in this paper, the connection between the notion of quasiseparability and the rank profile matrix invariant, presented in [Dumas & al. ISSAC'15]. This allows us to propose an algorithm computing the quasiseparable orders (rL, rU) in time O(n^2 s^(ω−2)), where s = max(rL, rU) and ω is the exponent of matrix multiplication. We then present two new structured representations, a binary tree of PLUQ decompositions and the Bruhat generator, using respectively O(ns log(n/s)) and O(ns) field elements instead of O(ns^2) for the previously known generators. We present algorithms computing these representations in time O(n^2 s^(ω−2)). These representations allow a matrix-vector product in time linear in the size of the representation. Lastly, we show how to multiply two such structured matrices in time O(n^2 s^(ω−2)).
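As a brute-force illustration of the definition only (not the paper's fast algorithm), the quasiseparable orders can be read off as the maximal ranks of the sub-matrices contained in the strictly triangular parts; the example, the inverse of a tridiagonal matrix, is our own choice and exhibits order (1, 1).

```python
import numpy as np

def quasiseparable_orders(A):
    """Brute-force quasiseparable orders: the maximal sub-matrices
    entirely inside the strictly lower (resp. upper) triangular part
    are A[i:, :i] (resp. A[:i, i:]), so scanning those suffices."""
    n = A.shape[0]
    rL = max(np.linalg.matrix_rank(A[i:, :i]) for i in range(1, n))
    rU = max(np.linalg.matrix_rank(A[:i, i:]) for i in range(1, n))
    return rL, rU

# the inverse of an invertible tridiagonal matrix is quasiseparable of order (1, 1)
T = np.diag([2.0] * 6) + np.diag([-1.0] * 5, 1) + np.diag([-1.0] * 5, -1)
A = np.linalg.inv(T)
```

This scan costs a rank computation per split point, far from the O(n^2 s^(ω−2)) bound of the paper, but it makes the invariant concrete.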
A Practical Approach to the Secure Computation of the Moore-Penrose Pseudoinverse over the Rationals
Solving linear systems of equations is a universal problem. In the context of secure multiparty computation (MPC), a method to solve such systems, especially for the case in which the rank of the system is unknown and should remain private, is an important building block.
We devise an efficient and data-oblivious algorithm (meaning that the algorithm's execution time and branching behavior are independent of all secrets) for solving a bounded integral linear system of unknown rank over the rational numbers via the Moore-Penrose pseudoinverse, using finite-field arithmetic. That is, we compute the Moore-Penrose inverse over a finite field of sufficiently large order, so that we can recover the rational solution from the solution over the finite field.
While we have designed the algorithm with an MPC context in mind, it could be valuable also in other contexts where data-obliviousness is required, like secure enclaves in CPUs.
Previous work by Cramer, Kiltz and Padró (CRYPTO 2007) proposes a constant-rounds protocol for computing the Moore-Penrose pseudoinverse over a finite field. The asymptotic complexity (counted as the number of secure multiplications) of their solution is , where and , , are the dimensions of the linear system. To reduce the number of secure multiplications, we sacrifice the constant-rounds property and propose a protocol for computing the Moore-Penrose pseudoinverse over the rational numbers in a linear number of rounds, requiring only secure multiplications.
To obtain the common denominator of the pseudoinverse, required for constructing an integer-representation of the pseudoinverse, we generalize a result by Ben-Israel for computing the squared volume of a matrix. Also, we show how to precondition a symmetric matrix to achieve generic rank profile while preserving symmetry and being able to remove the preconditioner after it has served its purpose. These results may be of independent interest.
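The recovery step "from the solution over the finite field" mentioned above is classically done by rational reconstruction; a minimal sketch via the early-stopped extended Euclidean algorithm follows (the modulus, bound, and example fraction are illustrative assumptions of ours, not parameters from the paper).

```python
def rational_reconstruct(a, m, bound):
    """Find n/d with n ≡ d·a (mod m) and |n|, |d| ≤ bound, if one exists,
    by stopping the extended Euclidean algorithm on (m, a) early."""
    r0, s0 = m, 0
    r1, s1 = a % m, 1
    while r1 > bound:                 # invariant: r ≡ s·a (mod m)
        quot = r0 // r1
        r0, r1 = r1, r0 - quot * r1
        s0, s1 = s1, s0 - quot * s1
    if s1 == 0 or abs(s1) > bound:
        raise ValueError("no reconstruction within the bound")
    if s1 < 0:                        # normalize the denominator's sign
        r1, s1 = -r1, -s1
    return r1, s1

# encode 3/7 modulo the prime 10007, then recover the fraction
p = 10007
a = (3 * pow(7, -1, p)) % p
```

With the modulus chosen larger than twice the product of the numerator and denominator bounds, the reconstructed fraction is unique.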
Applications of singular-value decomposition (SVD)
Let A be an m × n matrix with m ≥ n. Then one form of the singular-value decomposition of A is A = U^T Σ V, where U and V are orthogonal and Σ is square diagonal. That is, U U^T = I_rank(A), V V^T = I_rank(A), U is rank(A) × m, V is rank(A) × n, and Σ = diag(σ_1, ..., σ_rank(A)) is a rank(A) × rank(A) diagonal matrix. In addition, σ_1 ≥ σ_2 ≥ ... ≥ σ_rank(A) > 0. The σ_i's are called the singular values of A and their number is equal to the rank of A. The ratio σ_1/σ_rank(A) can be regarded as a condition number of the matrix A. It is easily verified that the singular-value decomposition can also be written as A = Σ_{i=1}^{rank(A)} σ_i u_i^T v_i. The matrix u_i^T v_i is the outer product of the i-th row of U with the corresponding row of V. Note that each of these matrices can be stored using only m + n locations rather than mn locations.
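NumPy's convention is A = U Σ V^T, so the rank(A) × m and rank(A) × n factors of the form above are obtained by transposing and truncating; a small full-rank example (the matrix itself is an illustrative assumption) shows both the factored and the outer-product forms:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])                 # m = 3, n = 2, full column rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = np.linalg.matrix_rank(A)

Uh = U[:, :r].T                          # rank(A) x m, orthonormal rows
Vh = Vt[:r, :]                           # rank(A) x n, orthonormal rows
Sigma = np.diag(s[:r])                   # diag(sigma_1, ..., sigma_r)

# reconstruction in the abstract's convention, A = U^T Sigma V
A1 = Uh.T @ Sigma @ Vh

# outer-product form, A = sum_i sigma_i u_i^T v_i (rows of U and V)
A2 = sum(s[i] * np.outer(Uh[i], Vh[i]) for i in range(r))
```

Each outer-product term is determined by one row of U and one row of V, hence the m + n storage locations per term noted in the abstract.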
Computations in modules over commutative domains
This paper is a review of results on computational methods of linear algebra over commutative domains. Methods for the following problems are examined: solution of systems of linear equations, computation of determinants, computation of adjoint and inverse matrices, computation of the characteristic polynomial of a matrix. © Springer-Verlag Berlin Heidelberg 2007
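One representative method from this family, for computing determinants without leaving the domain, is fraction-free (Bareiss) elimination; the sketch below over Z is our own minimal illustration, not code from the review.

```python
def bareiss_det(M):
    """Determinant by fraction-free (Bareiss) elimination: every division
    below is exact, so all intermediate values stay in the integers."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                  # pivot by a row swap if needed
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                  # the whole pivot column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[-1][-1]
```

The exactness of the division by the previous pivot is what keeps coefficient growth polynomial, the central concern of the methods surveyed in the review.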