59,391 research outputs found

    Quantum-inspired sublinear classical algorithms for solving low-rank linear systems

    We present classical sublinear-time algorithms for solving low-rank linear systems of equations. Our algorithms are inspired by the HHL quantum algorithm for solving linear systems and the recent breakthrough by Tang of dequantizing the quantum algorithm for recommendation systems. Let $A \in \mathbb{C}^{m \times n}$ be a rank-$k$ matrix and $b \in \mathbb{C}^m$ a vector. We present two algorithms: a "sampling" algorithm that provides a sample from $A^{-1}b$ and a "query" algorithm that outputs an estimate of an entry of $A^{-1}b$, where $A^{-1}$ denotes the Moore-Penrose pseudo-inverse. Both algorithms have query and time complexity $O(\mathrm{poly}(k, \kappa, \|A\|_F, 1/\epsilon)\,\mathrm{polylog}(m, n))$, where $\kappa$ is the condition number of $A$ and $\epsilon$ is the precision parameter. Note that the algorithms we consider run in sublinear time, so they cannot read or write the whole matrix or vectors. In this paper, we assume that $A$ and $b$ come with well-known low-overhead data structures such that entries of $A$ and $b$ can be sampled according to some natural probability distributions. Alternatively, when $A$ is positive semidefinite, our algorithms can be adapted so that the sampling assumption on $b$ is not required.
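The sample-and-query access assumed here can be made concrete with a standard binary-tree data structure that supports sampling an index with probability proportional to its squared entry in O(log n) time per draw. The following Python sketch is illustrative only; the class name and interface are not from the paper.

```python
import random

class L2SampleTree:
    """Binary tree over the squared entries of a vector v, giving
    O(log n) sampling of index i with probability |v_i|^2 / ||v||^2.

    A hypothetical minimal sketch of the sample-and-query access the
    low-overhead data structures are assumed to provide.
    """
    def __init__(self, v):
        self.v = list(v)
        self.n = len(v)
        size = 1
        while size < self.n:
            size *= 2
        self.size = size
        # Leaves hold squared entries; internal nodes hold subtree sums.
        self.tree = [0.0] * (2 * size)
        for i, x in enumerate(v):
            self.tree[size + i] = x * x
        for i in range(size - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def sample(self):
        """Draw index i with probability |v_i|^2 / ||v||^2."""
        i = 1
        r = random.random() * self.tree[1]
        while i < self.size:
            if r < self.tree[2 * i]:
                i = 2 * i
            else:
                r -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.size

    def query(self, i):
        """Return the entry v_i."""
        return self.v[i]
```

The same tree, extended with an update path from leaf to root, also supports entry updates in O(log n), which is what keeps the data-structure overhead low.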

    Resolvent sampling based Rayleigh-Ritz method for large-scale nonlinear eigenvalue problems

    A new algorithm, denoted RSRR, is presented for solving large-scale nonlinear eigenvalue problems (NEPs), with a focus on improving the robustness and reliability of the solution, a challenging task in computational science and engineering. The proposed algorithm uses the Rayleigh-Ritz procedure to compute all eigenvalues, and the corresponding eigenvectors, lying within a given contour in the complex plane. The main novelties are the following. First and foremost, the approximate eigenspace is constructed from the values of the resolvent at a series of sampling points on the contour, which effectively circumvents the unreliability of previous schemes that use high-order contour moments of the resolvent. Secondly, an improved Sakurai-Sugiura algorithm is proposed to solve the projected NEPs, with enhanced reliability and accuracy. The user-defined probing matrix of the original algorithm is avoided, and the number of eigenvalues is determined automatically by the provided strategies. Finally, by approximating the projected matrices with the Chebyshev interpolation technique, RSRR is further extended to solve NEPs in the boundary element method, which is typically difficult due to the densely populated matrices and high computational costs. The good performance of RSRR is demonstrated on a variety of benchmark examples and large-scale practical applications, with degrees of freedom ranging from several hundred up to around one million. The algorithm is suitable for parallelization and easy to implement in conjunction with other programs and software.
    Comment: 26 pages, 14 figures, 3 tables. Comments and discussion to: [email protected]
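For intuition, the resolvent-sampling construction can be sketched classically for the linear special case T(z) = zI - A: sample the resolvent at points on a circular contour, orthonormalize the stacked samples into a basis, and apply Rayleigh-Ritz. All function and parameter names below are illustrative, and the paper of course treats genuinely nonlinear T(z).

```python
import numpy as np

def rsrr_linear(A, center, radius, n_pts=16, n_probe=4, tol=1e-10):
    """Sketch of resolvent-sampling Rayleigh-Ritz for the linear case
    T(z) = zI - A; names and parameters are illustrative, not RSRR's API.

    Samples (zI - A)^{-1} V at points on a circular contour,
    orthonormalizes the stacked samples, and solves the projected problem.
    """
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, n_probe))           # random probe block
    samples = []
    for j in range(n_pts):
        z = center + radius * np.exp(2j * np.pi * j / n_pts)
        samples.append(np.linalg.solve(z * np.eye(n) - A, V))
    S = np.hstack(samples)
    # Orthonormal basis of the sampled subspace (drop tiny directions).
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    Q = U[:, s > tol * s[0]]
    # Rayleigh-Ritz projection; keep only Ritz values inside the contour.
    ritz = np.linalg.eigvals(Q.conj().T @ A @ Q)
    return np.sort_complex(ritz[np.abs(ritz - center) < radius])
```

The role of the contour is to filter: only eigenvalues enclosed by it survive the final test, while the subspace built from resolvent samples supplies accurate Ritz approximations for exactly those eigenvalues.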

    Multistep matrix splitting iteration preconditioning for singular linear systems

    Multistep matrix splitting iterations serve as preconditioners for Krylov subspace methods for solving singular linear systems. The preconditioner is applied to the generalized minimal residual (GMRES) method and the flexible GMRES (FGMRES) method. We present theoretical and practical justifications for this approach. Numerical experiments show that the multistep generalized shifted splitting (GSS) and Hermitian and skew-Hermitian splitting (HSS) iteration preconditioners are more robust and efficient than standard preconditioners on some test problems involving large sparse singular linear systems.
    Comment: 16 pages
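For concreteness, a single HSS splitting iteration (alternating Hermitian and skew-Hermitian half-steps in the sense of Bai, Golub, and Ng) can be sketched as below; the paper chains several such steps into a preconditioner for GMRES/FGMRES, and the nonsingular toy system here sidesteps the singular case the paper actually targets.

```python
import numpy as np

def hss_iteration(A, b, alpha=1.0, iters=200):
    """Minimal sketch of the HSS stationary iteration for A x = b with
    A = H + S, H Hermitian and S skew-Hermitian. Each sweep does two
    shifted half-steps:
        (alpha I + H) x_half = (alpha I - S) x + b
        (alpha I + S) x_new  = (alpha I - H) x_half + b
    """
    n = A.shape[0]
    H = (A + A.T) / 2          # Hermitian part
    S = (A - A.T) / 2          # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(iters):
        half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ half + b)
    return x
```

Used as a preconditioner, one applies a fixed small number of such sweeps (with zero initial guess and the residual as right-hand side) inside each GMRES or FGMRES iteration rather than iterating to convergence.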

    Quantum Circuit Design Methodology for Multiple Linear Regression

    Multiple linear regression assumes an imperative role in supervised machine learning. In 2009, Harrow et al. [Phys. Rev. Lett. 103, 150502 (2009)] showed that their HHL algorithm can be used to sample the solution of a linear system $\mathbf{Ax=b}$ exponentially faster than any existing classical algorithm, with some manageable caveats. The entire field of quantum machine learning gained considerable traction after the discovery of this celebrated algorithm. However, effective practical applications and experimental implementations of HHL are still sparse in the literature. Here, we demonstrate a potential practical utility of HHL in the context of regression analysis, using the remarkable fact that any multiple linear regression problem reduces naturally to an equivalent linear systems problem. We put forward a 7-qubit quantum circuit design, motivated by an earlier work by Cao et al. [Mol. Phys. 110, 1675 (2012)], to solve a 3-variable regression problem using only elementary quantum gates. We also implement the Group Leaders Optimization Algorithm (GLOA) [Mol. Phys. 109 (5), 761 (2011)] and elaborate on the advantages of using such stochastic algorithms to create low-cost circuit approximations for the Hamiltonian simulation. We believe that this application of GLOA and similar stochastic algorithms to circuit approximation will boost time- and cost-efficient circuit design for various quantum machine learning protocols. Further, we discuss our Qiskit simulation and explore certain generalizations of the circuit design.
    Comment: 14 pages, 7 figures
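The reduction the abstract relies on can be sketched classically: the least squares problem min ||Xw - y||^2 is equivalent to the Hermitian linear system (X^T X) w = X^T y, which is the form a solver such as HHL targets. A minimal illustrative version (the function name is hypothetical):

```python
import numpy as np

def regression_as_linear_system(X, y):
    """Classical sketch of reducing multiple linear regression to a
    linear system: the normal equations (X^T X) w = X^T y.

    X^T X is symmetric (Hermitian), which is the matrix shape HHL
    expects; a quantum solver would sample the solution w instead of
    writing it out.
    """
    A = X.T @ X          # Hermitian system matrix
    b = X.T @ y
    return np.linalg.solve(A, b)
```

With an all-ones column in X, the first fitted coefficient plays the role of the intercept, so a 3-variable regression becomes a small dense Hermitian solve of exactly this kind.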

    iSIRA: Integrated Shift-Invert Residual Arnoldi Method for Graph Laplacian Matrices from Big Data

    The eigenvalue problem of a graph Laplacian matrix $L$ arising from a simple, connected, undirected graph has received considerable attention due to its extensive applications, such as spectral clustering, community detection, complex networks, image processing, and so on. The associated graph Laplacian matrix is symmetric, positive semi-definite, and usually large and sparse. Computing some of the smallest positive eigenvalues and corresponding eigenvectors is often of interest. However, the singularity of $L$ makes classical eigensolvers inefficient, since they need to factorize $L$ in order to solve large sparse linear systems exactly. A further difficulty is that factorizing a large sparse matrix arising from real network problems in big data, such as social media transactional databases and sensor systems, is usually time consuming or even infeasible, because the connections are in general not only local. In this paper, we propose an eigensolver based on the inexact residual Arnoldi method, together with an implicit remedy for the singularity and an effective deflation for converged eigenvalues. Numerical experiments reveal that the integrated eigensolver outperforms the classical Arnoldi/Lanczos method for computing some of the smallest positive eigeninformation when the LU factorization is not available.
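The implicit handling of the singularity can be illustrated with a far simpler scheme than iSIRA itself: for a connected graph, the nullspace of $L$ is spanned by the all-ones vector, so one can deflate it and run inverse iteration in its orthogonal complement. This sketch is illustrative only and omits the inexact residual Arnoldi machinery.

```python
import numpy as np

def smallest_positive_laplacian_eig(L, iters=500):
    """Illustrative sketch (not the paper's iSIRA): deflate the known
    nullspace of a connected graph's Laplacian (the all-ones vector)
    and run inverse iteration in the complement. lstsq supplies the
    pseudo-inverse action on the consistent projected system, so the
    singular L is never factorized on its nullspace.
    """
    n = L.shape[0]
    ones = np.ones(n) / np.sqrt(n)
    P = np.eye(n) - np.outer(ones, ones)           # projector onto range(L)
    rng = np.random.default_rng(1)
    x = P @ rng.standard_normal(n)
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(L, x, rcond=None)  # x <- L^+ x
        x = P @ x                                  # stay in range(L)
        x /= np.linalg.norm(x)
    return x @ L @ x                               # Rayleigh quotient
```

The dense pseudo-inverse solve here is exactly what is unaffordable at scale; replacing it with inexact inner solves, as the proposed method does, is what makes the approach viable for big-data graphs.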

    A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active Set Identification Scheme

    A constraint-reduced Mehrotra predictor-corrector algorithm for convex quadratic programming is proposed. (At each iteration, such algorithms use only a subset of the inequality constraints in constructing the search direction, resulting in CPU savings.) The proposed algorithm makes use of a regularization scheme to handle cases where the reduced constraint matrix is rank deficient. Global and local convergence properties are established under arbitrary working-set selection rules, subject to the satisfaction of a general condition. A modified active-set identification scheme that fulfills this condition is introduced. Numerical tests show great promise for the proposed algorithm, in particular for its active-set identification scheme. While the focus of the present paper is on dense systems, the application of the main ideas to large sparse systems is briefly discussed.
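The constraint-reduction idea, building the search direction from only the most nearly active inequality constraints, can be sketched with a simple selection rule; the rule and all names below are illustrative and are not the paper's modified identification scheme.

```python
import numpy as np

def reduced_working_set(A, b, x, q_min=3):
    """Hypothetical sketch of a constraint-reduction selection rule:
    keep the q_min most nearly active inequality constraints
    a_i^T x <= b_i at the current iterate x. The search direction
    would then be built from this subset only, at reduced cost.
    """
    slack = b - A @ x             # nonnegative at a feasible point
    order = np.argsort(slack)     # smallest slack = most nearly active
    return np.sort(order[:q_min])
```

The convergence theory in the paper allows essentially arbitrary rules of this flavor, provided a general condition (fulfilled by its modified active-set identification scheme) is satisfied.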

    On GMRES for singular EP and GP systems

    In this contribution, we study the numerical behavior of the Generalized Minimal Residual (GMRES) method for solving singular linear systems. It is known that GMRES determines a least squares solution without breakdown if the coefficient matrix is range-symmetric (EP), or if its range and nullspace are disjoint (GP) and the system is consistent. We show that the accuracy of GMRES iterates may deteriorate in practice due to three distinct factors: (i) the inconsistency of the linear system; (ii) the distance of the initial residual to the nullspace of the coefficient matrix; (iii) the extremal principal angles between the ranges of the coefficient matrix and its transpose. These factors lead to poor conditioning of the extended Hessenberg matrix in the Arnoldi decomposition and affect the accuracy of the computed least squares solution. We also compare GMRES with the range restricted GMRES (RR-GMRES) method. Numerical experiments show typical behaviors of GMRES for small problems with EP and GP matrices.
    Comment: 16 pages, 18 figures
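A textbook GMRES sketch (full Arnoldi plus a Hessenberg least squares solve) is enough to reproduce this setting on small singular systems; this is a minimal illustration with a zero initial guess, not a robust implementation.

```python
import numpy as np

def gmres(A, b, m=20):
    """Textbook GMRES sketch: build an Arnoldi basis of the Krylov
    space K_m(A, b), then minimize the residual over it via a least
    squares solve with the extended Hessenberg matrix H.
    """
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # (happy) breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y
```

On a consistent EP system this terminates with a least squares solution, in line with the known breakdown-free result; the factors (i)-(iii) above describe when the conditioning of H degrades this picture in finite precision.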

    Solving the Split Multi-Commodity Flow Problem by an Efficient Linear Programming Algorithm

    Column generation is often used to solve multi-commodity flow problems, and a column generation program always includes a module that solves a linear equation system. In this paper, we address three major issues in solving the linear problem during the column generation procedure: (1) how to exploit the sparsity of the coefficient matrix; (2) how to reduce the size of the coefficient matrix; and (3) how to reuse the solution to a similar equation system. To this end, we first analyze the sparsity of the coefficient matrices of the linear equations and find that the matrices occurring during the iteration are very sparse. We then present an algorithm, locSolver (for localized system solver), for linear equations with sparse coefficient matrices and right-hand sides, which can reduce the number of variables. After that, we present the algorithm incSolver (for incremental system solver), which exploits the similarity between successive iterations of the program for a linear equation system. All three techniques can be used in column generation for multi-commodity problems. Preliminary numerical experiments show that incSolver is significantly faster than existing algorithms; for example, random test cases show that incSolver is at least 37 times and up to 341 times faster than the popular solver LAPACK.
    Comment: 27 pages
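The incremental idea can be sketched in a few lines: when successive column-generation iterations change only a few right-hand-side entries, solve for the correction rather than resolving from scratch. This is a hypothetical simplification; the actual incSolver also reuses structure inside the solve itself.

```python
import numpy as np

def incremental_solve(A, x_old, b_old, b_new):
    """Sketch of the incremental-solver idea: given a solved system
    A x_old = b_old and a new right-hand side b_new that differs from
    b_old in only a few entries, solve the correction system
    A d = b_new - b_old (whose right-hand side is sparse) and update.
    """
    d = np.linalg.solve(A, b_new - b_old)
    return x_old + d
```

The payoff comes from the correction system's sparse right-hand side, which is exactly the case a localized solver like locSolver is designed to exploit by restricting the solve to the affected variables.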

    Quantum Regularized Least Squares Solver with Parameter Estimate

    In this paper we propose a quantum algorithm to determine the Tikhonov regularization parameter and solve ill-conditioned linear equations, for example those arising from the finite element discretization of linear or nonlinear inverse problems. For the regularized least squares problem with a fixed regularization parameter, we use the HHL algorithm and work on an extended matrix with a smaller condition number. For the determination of the regularization parameter, we combine the classical L-curve and GCV function, and design quantum algorithms to compute the norms of the regularized solution and the corresponding residual in parallel, locating the best regularization parameter by Grover's search. The quantum algorithm can achieve a quadratic speedup in the number of regularization parameters and an exponential speedup in the problem dimension.
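The quantities estimated in parallel by the quantum routine, the solution norm and residual norm of the Tikhonov-regularized solution for each candidate parameter (the two axes of the classical L-curve), can be sketched classically as follows; the function name and parameter grid are illustrative.

```python
import numpy as np

def lcurve_points(A, b, lambdas):
    """Classical sketch of the L-curve data: for each regularization
    parameter lam, solve the Tikhonov normal equations
    (A^T A + lam I) x = A^T b and record the residual norm ||A x - b||
    and solution norm ||x||, the two coordinates of the L-curve.
    """
    n = A.shape[1]
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```

As lam grows, the residual norm increases while the solution norm decreases; the L-curve's corner balances the two, and locating that corner over many candidate parameters is where a Grover-type search can offer a quadratic speedup.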

    Broyden's method for nonlinear eigenproblems

    Broyden's method is a general method commonly used for nonlinear systems of equations when very little information is available about the problem. We develop an approach based on Broyden's method for nonlinear eigenvalue problems. Our approach is designed for problems where the evaluation of a matrix-vector product is computationally expensive, essentially as expensive as solving the corresponding linear system of equations. We show how the structure of the Jacobian matrix can be incorporated into the algorithm to improve convergence. The algorithm exhibits local superlinear convergence for simple eigenvalues, and we characterize the convergence. We show how deflation can be integrated and combined such that the method can be used to compute several eigenvalues. A specific problem in machine tool milling, coupled with a PDE, is used to illustrate the approach. The simulations are done in the Julia programming language and are provided as a publicly available module for reproducibility.
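Plain Broyden's method applied to the augmented system F(v, lam) = [M(lam) v ; c^T v - 1] gives a minimal sketch of this style of approach; the paper's method additionally exploits the Jacobian structure and integrates deflation, both of which this sketch omits (and it is in Python rather than Julia).

```python
import numpy as np

def broyden_nep(M, v0, lam0, tol=1e-10, maxit=100):
    """Minimal sketch: plain Broyden iteration on the augmented system
    F(v, lam) = [M(lam) v ; c^T v - 1], where the last row fixes the
    eigenvector scaling. M is a callable returning the matrix M(lam).
    """
    n = len(v0)
    c = v0 / (v0 @ v0)                 # normalization vector
    x = np.append(v0, lam0)

    def F(x):
        v, lam = x[:n], x[n]
        return np.append(M(lam) @ v, c @ v - 1.0)

    # One-off finite-difference initial Jacobian; Broyden updates after.
    f = F(x)
    J = np.zeros((n + 1, n + 1))
    h = 1e-6
    for i in range(n + 1):
        e = np.zeros(n + 1)
        e[i] = h
        J[:, i] = (F(x + e) - f) / h

    for _ in range(maxit):
        s = np.linalg.solve(J, -f)
        x = x + s
        f_new = F(x)
        # "Good" Broyden rank-one update: J += (df - J s) s^T / (s^T s)
        J += np.outer(f_new - f - J @ s, s) / (s @ s)
        f = f_new
        if np.linalg.norm(f) < tol:
            break
    return x[:n], x[n]
```

The appeal in the expensive-matrix-vector-product regime is that each step needs only one new evaluation of F plus a rank-one update, rather than a fresh Jacobian.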