
    On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices

    The problem of finding a $k \times k$ submatrix of maximum volume of a matrix $A$ is of interest in a variety of applications. For example, it yields a quasi-best low-rank approximation constructed from the rows and columns of $A$. We show that such a submatrix can always be chosen to be a principal submatrix if $A$ is symmetric semidefinite or diagonally dominant. Then we analyze the low-rank approximation error returned by a greedy method for volume maximization, cross approximation with complete pivoting. Our bound for general matrices extends an existing result for symmetric semidefinite matrices and yields new error estimates for diagonally dominant matrices. In particular, for doubly diagonally dominant matrices the error is shown to remain within a modest factor of the best approximation error. We also illustrate how the application of our results to cross approximation for functions leads to new and better convergence results.
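
    The greedy method mentioned above, cross approximation with complete pivoting, is short enough to sketch. The following minimal numpy implementation (function name ours, not the authors' code) repeatedly pivots on the largest residual entry; note that for a symmetric semidefinite input the largest entry lies on the diagonal, so every pivot is principal, matching the result above.

    ```python
    import numpy as np

    def cross_approx_complete_pivoting(A, k):
        """Greedy rank-k cross approximation with complete pivoting:
        at each step, pivot on the entry of largest absolute value in
        the current residual and apply the corresponding rank-1 update."""
        R = A.astype(float).copy()   # residual
        Ak = np.zeros_like(R)        # rank-k approximation being built
        for _ in range(k):
            i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
            pivot = R[i, j]
            if pivot == 0:           # residual vanished: rank(A) < k
                break
            update = np.outer(R[:, j], R[i, :]) / pivot
            Ak += update
            R -= update
        return Ak

    # Example: a symmetric positive semidefinite matrix of rank 5
    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 5))
    A = B @ B.T
    print(np.linalg.norm(A - cross_approx_complete_pivoting(A, 5)))  # ~1e-13
    ```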

    Low rank approximation and decomposition of large matrices using error correcting codes

    Low rank approximation is an important tool used in many applications of signal processing and machine learning. Recently, randomized sketching algorithms were proposed to effectively construct low rank approximations and obtain approximate singular value decompositions of large matrices. Similar ideas were used to solve least squares regression problems. In this paper, we show how matrices from error correcting codes can be used to find such low rank approximations and matrix decompositions, and extend the framework to linear least squares regression problems. The benefits of using these code matrices are the following: (i) They are easy to generate and they reduce randomness significantly. (ii) Code matrices with mild properties satisfy the subspace embedding property, and have a better chance of preserving the geometry of an entire subspace of vectors. (iii) For parallel and distributed applications, code matrices have significant advantages over structured random matrices and Gaussian random matrices. (iv) Unlike Fourier or Hadamard transform matrices, which require sampling $O(k\log k)$ columns for a rank-$k$ approximation, the log factor is not necessary for certain types of code matrices; that is, $(1+\epsilon)$-optimal Frobenius norm error can be achieved for a rank-$k$ approximation with $O(k/\epsilon)$ samples. (v) Fast multiplication is possible with structured code matrices, so fast approximations can be achieved for general dense input matrices. (vi) For the least squares regression problem $\min\|Ax-b\|_2$, where $A \in \mathbb{R}^{n\times d}$, a $(1+\epsilon)$ relative error approximation can be achieved with $O(d/\epsilon)$ samples, with high probability, when certain code matrices are used.
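
    The abstract does not spell out the code-matrix construction itself, so the sketch below illustrates only the generic sketch-and-solve pattern behind point (vi), with a Gaussian sketch standing in for a code matrix; the function name and problem sizes are ours.

    ```python
    import numpy as np

    def sketched_least_squares(A, b, m, rng):
        """Sketch-and-solve: replace min ||Ax - b||_2 by the smaller
        problem min ||S A x - S b||_2 for a short, fat sketch S with m
        rows. A Gaussian sketch is used as a placeholder; the paper's
        point is that structured code matrices can play the same role
        with less randomness and faster multiplication."""
        n, _ = A.shape
        S = rng.standard_normal((m, n)) / np.sqrt(m)
        x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        return x_hat

    rng = np.random.default_rng(1)
    A = rng.standard_normal((20000, 50))
    b = rng.standard_normal(20000)
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_hat = sketched_least_squares(A, b, m=500, rng=rng)
    # Residual ratio close to 1, i.e. a (1 + eps) approximation
    print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(A @ x_star - b))
    ```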

    Hermitian Tensor Product Approximation of Complex Matrices and Separability

    The approximation of matrices by sums of tensor products of Hermitian matrices is studied. A minimum decomposition of matrices on the tensor space $H_1 \otimes H_2$ in terms of sums of tensor products of Hermitian matrices on $H_1$ and $H_2$ is presented. From this construction the separability of quantum states is discussed.
    Comment: 16 pages
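
    A hedged sketch of the underlying tensor product (operator Schmidt) decomposition: realign a matrix on $H_1 \otimes H_2$ and take an SVD to obtain a minimum-length sum of tensor products. The function name is ours, and the additional step that makes the factors Hermitian for Hermitian input is omitted here.

    ```python
    import numpy as np

    def tensor_product_decomposition(M, m, n):
        """Write an (mn x mn) matrix M as sum_i A_i (x) B_i with the
        minimum number of terms, via an SVD of the realigned matrix."""
        # Reorder entries so rows index the H_1 part, columns the H_2 part.
        R = M.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)
        U, s, Vh = np.linalg.svd(R, full_matrices=False)
        rank = int(np.sum(s > 1e-12 * s[0]))
        return [((s[i] * U[:, i]).reshape(m, m), Vh[i, :].reshape(n, n))
                for i in range(rank)]

    # Verify that the terms reconstruct a random Hermitian test matrix
    m, n = 2, 3
    rng = np.random.default_rng(2)
    X = rng.standard_normal((m * n, m * n)) + 1j * rng.standard_normal((m * n, m * n))
    M = X + X.conj().T
    terms = tensor_product_decomposition(M, m, n)
    print(np.linalg.norm(M - sum(np.kron(A, B) for A, B in terms)))  # ~1e-15
    ```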

    Literature survey on low rank approximation of matrices

    Low rank approximation of matrices has been well studied in the literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), interpolative decomposition, etc., are classical deterministic algorithms for low rank approximation, but these techniques are very expensive ($O(n^{3})$ operations are required for $n \times n$ matrices). Several randomized algorithms available in the literature are less expensive than the classical techniques, but their complexity is still not linear in $n$, so constructing a low rank approximation remains costly when the dimension of the matrix is very large. There are alternative techniques like cross/skeleton approximation which give a low-rank approximation with complexity linear in $n$. In this article we briefly review low rank approximation techniques and give extensive references for many of them.
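
    As a concrete baseline for the classical techniques listed above, a truncated SVD gives the best rank-$k$ approximation in both the spectral and Frobenius norms (Eckart-Young); its $O(n^{3})$ cost is exactly what the randomized and cross/skeleton methods surveyed here try to avoid.

    ```python
    import numpy as np

    def truncated_svd(A, k):
        """Best rank-k approximation (Eckart-Young), the O(n^3)
        deterministic baseline."""
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        return U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

    rng = np.random.default_rng(3)
    A = rng.standard_normal((200, 200))
    # Spectral-norm error equals the (k+1)-st singular value
    err = np.linalg.norm(A - truncated_svd(A, 10), 2)
    print(err, np.linalg.svd(A, compute_uv=False)[10])
    ```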

    Uniform Semiclassical Approximation for the Wigner $6j$ Symbol in Terms of Rotation Matrices

    A new uniform asymptotic approximation for the Wigner $6j$ symbol is given in terms of Wigner rotation matrices ($d$-matrices). The approximation is uniform in the sense that it applies for all values of the quantum numbers, even those near caustics. The derivation of the new approximation is not given, but the geometrical ideas supporting it are discussed and numerical tests are presented, including comparisons with the exact $6j$ symbol and with the Ponzano-Regge approximation.
    Comment: 44 pages plus 20 figures
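
    The derivation is not reproduced here, but exact reference values for such comparisons are easy to generate; for instance, SymPy ships an exact $6j$ routine (the particular test cases used in the paper are not reproduced here):

    ```python
    from sympy import N
    from sympy.physics.wigner import wigner_6j

    # Exact Wigner 6j symbol, the reference value against which
    # semiclassical approximations are tested
    val = wigner_6j(2, 2, 2, 2, 2, 2)
    print(val, N(val))
    ```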

    Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization

    Semidefinite and sum-of-squares (SOS) optimization are fundamental computational tools in many areas, including linear and nonlinear systems theory. However, the scale of problems that can be addressed reliably and efficiently is still limited. In this paper, we introduce a new notion of \emph{block factor-width-two matrices} and build a new hierarchy of inner and outer approximations of the cone of positive semidefinite (PSD) matrices. This notion is a block extension of the standard factor-width-two matrices and allows for an improved inner approximation of the PSD cone. In the context of SOS optimization, this leads to a block extension of the \emph{scaled diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying the matrix partition, the notion of block factor-width-two matrices can balance the trade-off between computational scalability and solution quality in semidefinite and SOS optimization. Numerical experiments on large-scale instances confirm our theoretical findings.
    Comment: 26 pages, 5 figures. Added a new section on the approximation quality analysis using block factor-width-two matrices. Code is available through https://github.com/zhengy09/SDPf
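
    As background for the block extension, the scalar factor-width-two case has a compact certificate: any symmetric diagonally dominant matrix with nonnegative diagonal splits into PSD pieces supported on $2 \times 2$ principal submatrices. A minimal numpy sketch of that decomposition (our illustration, unrelated to the code at the linked repository):

    ```python
    import numpy as np

    def factor_width_two_decomposition(A):
        """Split a symmetric diagonally dominant matrix with nonnegative
        diagonal into PSD terms, each supported on a 2x2 principal
        submatrix, plus a nonnegative diagonal remainder."""
        n = A.shape[0]
        terms = []
        D = np.diag(np.diag(A)).astype(float)      # diagonal remainder
        for i in range(n):
            for j in range(i + 1, n):
                a = A[i, j]
                if a == 0:
                    continue
                T = np.zeros((n, n))
                T[i, i] = T[j, j] = abs(a)         # [[|a|, a], [a, |a|]] is PSD
                T[i, j] = T[j, i] = a
                D[i, i] -= abs(a)
                D[j, j] -= abs(a)
                terms.append(T)
        terms.append(D)  # entrywise nonnegative by diagonal dominance
        return terms

    A = np.array([[4.0, 1.0, -1.0],
                  [1.0, 3.0, 1.0],
                  [-1.0, 1.0, 5.0]])               # diagonally dominant
    terms = factor_width_two_decomposition(A)
    print(np.linalg.norm(A - sum(terms)))          # 0: terms sum to A
    print(all(np.linalg.eigvalsh(T).min() >= -1e-12 for T in terms))
    ```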