500,615 research outputs found
On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices
The problem of finding a submatrix of maximum volume of a matrix $A$ is of
interest in a variety of applications. For example, it yields a quasi-best
low-rank approximation constructed from the rows and columns of $A$. We show
that such a submatrix can always be chosen to be a principal submatrix if $A$
is symmetric semidefinite or diagonally dominant. Then we analyze the
low-rank approximation error returned by a greedy method for volume
maximization, cross approximation with complete pivoting. Our bound for
general matrices extends an existing result for symmetric semidefinite
matrices and yields new error estimates for diagonally dominant matrices. In
particular, for doubly diagonally dominant matrices the error is shown to
remain within a modest factor of the best approximation error. We also
illustrate how the application of our results to cross approximation for
functions leads to new and better convergence results.
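To make the greedy method concrete, here is a minimal sketch of cross
approximation with complete pivoting: at each step the pivot is the entry of
largest absolute value in the current residual, and the rank-one cross
through that pivot is subtracted. The function name and interface are ours,
not the paper's.

```python
import numpy as np

def cross_approx_complete_pivoting(A, k):
    """Greedy cross approximation with complete pivoting: at each step,
    pick the entry of largest absolute value in the residual as pivot,
    then subtract the rank-one cross through that pivot."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    C = np.zeros((m, k))
    Rk = np.zeros((k, n))
    rows, cols = [], []
    for t in range(k):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if R[i, j] == 0:                      # residual vanished: rank(A) < k
            return C[:, :t] @ Rk[:t, :], rows, cols
        C[:, t] = R[:, j] / R[i, j]           # scaled pivot column
        Rk[t, :] = R[i, :]                    # pivot row
        R = R - np.outer(C[:, t], Rk[t, :])   # deflate the residual
        rows.append(i)
        cols.append(j)
    return C @ Rk, rows, cols

# Sanity check: an exactly rank-8 matrix is recovered in 8 steps.
A = np.random.rand(100, 8) @ np.random.rand(8, 100)
Ak, rows, cols = cross_approx_complete_pivoting(A, 8)
assert np.allclose(Ak, A)
```

After $k$ steps the approximation matches $A$ exactly on the chosen rows and
columns; the abstract's analysis bounds how far this greedy construction can
be from the best low-rank approximation error.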
Low rank approximation and decomposition of large matrices using error correcting codes
Low rank approximation is an important tool used in many applications of
signal processing and machine learning. Recently, randomized sketching
algorithms were proposed to effectively construct low rank approximations and
obtain approximate singular value decompositions of large matrices. Similar
ideas were used to solve least squares regression problems. In this paper, we
show how matrices from error correcting codes can be used to find such low rank
approximations and matrix decompositions, and extend the framework to linear
least squares regression problems. The benefits of using these code matrices
are the following: (i) They are easy to generate and they reduce randomness
significantly. (ii) Code matrices with mild properties satisfy the subspace
embedding property, and have a better chance of preserving the geometry of an
entire subspace of vectors. (iii) For parallel and distributed applications,
code matrices have significant advantages over structured random matrices and
Gaussian random matrices. (iv) Unlike Fourier or Hadamard transform matrices,
which require sampling $O(k \log k)$ columns for a rank-$k$ approximation, the
$\log$ factor is not necessary for certain types of code matrices. That is,
$(1+\epsilon)$-optimal Frobenius norm error can be achieved for a rank-$k$
approximation with $O(k/\epsilon)$ samples. (v) Fast multiplication is possible
with structured code matrices, so fast approximations can be achieved for
general dense input matrices. (vi) For the least squares regression problem
$\min_x \|Ax - b\|_2$, where $A$ is an $n \times d$ matrix, a $(1+\epsilon)$
relative error approximation can be achieved with $O(d/\epsilon)$ samples,
with high probability, when certain code matrices are used.
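The sketch-and-solve pipeline the abstract refers to fits in a few lines.
The version below uses a Gaussian sketching matrix as a stand-in, since
generating a code matrix is beyond a short example; the paper's proposal is
to replace this random $S$ with a structured matrix derived from an error
correcting code, keeping the rest of the pipeline unchanged. Names are ours.

```python
import numpy as np

def sketched_low_rank(A, k, oversample=10):
    """Sketch-and-solve low-rank approximation. The sketching matrix S
    below is Gaussian; a structured code matrix could be used in its
    place, with less randomness and faster multiplication."""
    n = A.shape[1]
    S = np.random.randn(n, k + oversample)    # stand-in for a code matrix
    Y = A @ S                                 # compress the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis of the sketch
    B = Q.T @ A                               # small projected problem
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :k], s[:k], Vt[:k, :]     # A ~= U @ diag(s) @ Vt

U, s, Vt = sketched_low_rank(np.random.rand(500, 300), k=10)
```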
Hermitian Tensor Product Approximation of Complex Matrices and Separability
The approximation of matrices by sums of tensor products of Hermitian
matrices is studied. A minimum decomposition of matrices on the tensor space
$H_1 \otimes H_2$ in terms of sums of tensor products of Hermitian matrices
on $H_1$ and $H_2$ is presented. From this construction the separability of
quantum states is discussed. Comment: 16 pages.
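The abstract does not spell out the construction, but a standard
computational route to such decompositions is the Van Loan-Pitsianis
rearrangement: reshaping a matrix on a tensor product space turns
Kronecker-product structure into low-rank structure, which an SVD then
reveals. A minimal sketch, assuming $H_1 = \mathbb{C}^m$ and
$H_2 = \mathbb{C}^n$; note this generic version does not enforce Hermiticity
of the individual factors, which is the refinement the paper supplies.

```python
import numpy as np

def rearrange(A, m, n):
    """Van Loan-Pitsianis rearrangement: maps an (m*n) x (m*n) matrix so
    that each Kronecker product B (x) C becomes a rank-one term."""
    return A.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)

def kron_sum_decomposition(A, m, n, r):
    """A ~= sum_t s[t] * kron(Bs[t], Cs[t]), optimal in Frobenius norm
    among all sums of r tensor (Kronecker) products."""
    U, s, Vt = np.linalg.svd(rearrange(A, m, n))
    Bs = [U[:, t].reshape(m, m) for t in range(r)]
    Cs = [Vt[t, :].reshape(n, n) for t in range(r)]
    return s[:r], Bs, Cs

# Sanity check: a single Kronecker product is recovered exactly.
B = np.random.rand(3, 3); C = np.random.rand(4, 4)
s, Bs, Cs = kron_sum_decomposition(np.kron(B, C), 3, 4, 1)
assert np.allclose(s[0] * np.kron(Bs[0], Cs[0]), np.kron(B, C))
```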
Literature survey on low rank approximation of matrices
Low rank approximation of matrices has been well studied in the literature.
Singular value decomposition, QR decomposition with column pivoting, rank
revealing QR factorization (RRQR), interpolative decomposition, etc., are
classical deterministic algorithms for low rank approximation. But these
techniques are very expensive ($O(n^3)$ operations are required for
$n \times n$ matrices). There are several randomized algorithms available in
the literature which are not as expensive as the classical techniques (but
their complexity is still not linear in $n$). So it is very expensive to
construct a low rank approximation of a matrix if its dimension is very
large. There are alternative techniques like cross/skeleton approximation
which give a low-rank approximation with complexity linear in $n$. In this
article we briefly review low rank approximation techniques and give
extensive references for many of them.
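The baseline against which all of these methods are measured is the
truncated SVD, which is the best rank-$k$ approximation in both spectral and
Frobenius norm by the Eckart-Young-Mirsky theorem, but costs the $O(n^3)$
mentioned above. A short illustration (names ours):

```python
import numpy as np

def truncated_svd(A, k):
    """Best rank-k approximation in both spectral and Frobenius norm
    (Eckart-Young-Mirsky), at O(n^3) cost for an n x n matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# No rank-k approximation can beat sigma_{k+1} in spectral norm:
A = np.random.rand(200, 200)
Ak = truncated_svd(A, 20)
sigma = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A - Ak, 2), sigma[20])
```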
Uniform Semiclassical Approximation for the Wigner 6j-Symbol in Terms of Rotation Matrices
A new uniform asymptotic approximation for the Wigner $6j$-symbol is given in
terms of Wigner rotation matrices ($d$-matrices). The approximation is uniform
in the sense that it applies for all values of the quantum numbers, even those
near caustics. The derivation of the new approximation is not given, but the
geometrical ideas supporting it are discussed and numerical tests are
presented, including comparisons with the exact $6j$-symbol and with the
Ponzano-Regge approximation. Comment: 44 pages plus 20 figures.
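For context, the Ponzano-Regge approximation used as a benchmark above is
the classic asymptotic formula for the $6j$-symbol. In its usual form (our
transcription, assuming the standard conventions):

```latex
% V: volume of the tetrahedron with edge lengths j_i + 1/2;
% theta_i: exterior dihedral angle at the corresponding edge.
\begin{equation*}
  \begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \end{Bmatrix}
  \approx \frac{1}{\sqrt{12\pi\,|V|}}
  \cos\!\left( \sum_{i=1}^{6} \Bigl( j_i + \tfrac{1}{2} \Bigr)\,\theta_i
  + \frac{\pi}{4} \right)
\end{equation*}
```

This formula degrades near caustics, where the tetrahedron degenerates and
$|V| \to 0$; the point of the $d$-matrix approximation is that it remains
valid there.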
Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization
Semidefinite and sum-of-squares (SOS) optimization are fundamental
computational tools in many areas, including linear and nonlinear systems
theory. However, the scale of problems that can be addressed reliably and
efficiently is still limited. In this paper, we introduce a new notion of
\emph{block factor-width-two matrices} and build a new hierarchy of inner and
outer approximations of the cone of positive semidefinite (PSD) matrices. This
notion is a block extension of the standard factor-width-two matrices, and
allows for an improved inner-approximation of the PSD cone. In the context of
SOS optimization, this leads to a block extension of the \emph{scaled
diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying a matrix
partition, the notion of block factor-width-two matrices can balance the
trade-off between computational scalability and solution quality for solving
semidefinite and SOS optimization problems. Numerical experiments on
large-scale instances confirm our theoretical findings. Comment: 26 pages, 5
figures. Added a new section on the approximation quality analysis using
block factor-width-two matrices. Code is available through
https://github.com/zhengy09/SDPf
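To make the underlying notion concrete: a symmetric matrix has factor-width
two if it is a sum of PSD matrices each supported on a single $2 \times 2$
principal submatrix. For a symmetric diagonally dominant matrix with
nonnegative diagonal, such a decomposition can be written down explicitly,
as in the sketch below (names ours); the paper's block extension applies the
same idea to matrix blocks instead of scalar entries.

```python
import numpy as np

def dd_to_factor_width_two(A, tol=1e-12):
    """Certify factor-width two for a symmetric diagonally dominant
    matrix A with nonnegative diagonal, by writing A as a sum of PSD
    matrices each supported on a single 2x2 principal submatrix."""
    n = A.shape[0]
    terms = []
    leftover = np.diag(A).astype(float).copy()
    for i in range(n):
        for j in range(i + 1, n):
            if abs(A[i, j]) > tol:
                T = np.zeros((n, n))
                T[i, i] = T[j, j] = abs(A[i, j])   # [[|a|, a], [a, |a|]] is PSD
                T[i, j] = T[j, i] = A[i, j]
                terms.append(T)
                leftover[i] -= abs(A[i, j])
                leftover[j] -= abs(A[i, j])
    terms.append(np.diag(leftover))   # PSD because A is diagonally dominant
    assert np.allclose(sum(terms), A)
    return terms
```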
