
    Gradient based iterative solutions for general linear matrix equations

    In this paper, we present a gradient-based iterative algorithm for solving general linear matrix equations by extending the Jacobi iteration and by applying the hierarchical identification principle. Convergence analysis indicates that the iterative solutions always converge rapidly to the exact solutions for any initial values, provided the condition numbers of the associated matrices are small. Two numerical examples are provided to show that the proposed algorithm is effective.
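    As a rough illustration of this class of methods (not the authors' hierarchical algorithm, which handles general coupled equations), the NumPy sketch below applies a gradient-type iteration to the single matrix equation AXB = F; the function name, step size mu, and the well-conditioned test matrices are illustrative assumptions.

```python
import numpy as np

def gradient_solve_axb(A, B, F, mu=None, tol=1e-10, max_iter=20000):
    """Gradient-based iteration for the matrix equation A X B = F:
    X_{k+1} = X_k + mu * A.T @ (F - A @ X_k @ B) @ B.T."""
    if mu is None:
        # A conservative step size: mu < 2 / (||A||_2^2 * ||B||_2^2).
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))   # any initial value may be used
    for _ in range(max_iter):
        R = F - A @ X @ B                    # current residual
        if np.linalg.norm(R) <= tol * np.linalg.norm(F):
            break
        X = X + mu * A.T @ R @ B.T           # steepest-descent style update
    return X

# Well-conditioned toy example: recover a known X from F = A X B.
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
X_true = rng.standard_normal((4, 4))
X = gradient_solve_axb(A, B, A @ X_true @ B)
print("error:", np.linalg.norm(X - X_true))
```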

    Quantum annealing for systems of polynomial equations

    Numerous scientific and engineering applications require numerically solving systems of equations. Classically, solving a general set of polynomial equations requires iterative solvers, while linear equations may be solved either by direct matrix inversion or iteratively with judicious preconditioning. However, the convergence of iterative algorithms is highly variable and depends, in part, on the condition number. We present a direct method for solving general systems of polynomial equations based on quantum annealing, and we validate this method using a system of second-order polynomial equations solved on a commercially available quantum annealer. We then demonstrate applications to linear regression, and discuss in more detail the scaling behavior for general systems of linear equations with respect to problem size, condition number, and search precision. Finally, we define an iterative annealing process and demonstrate its efficacy in solving a linear system to a tolerance of $10^{-8}$.
    Comment: 11 pages, 4 figures. Added example for a system of quadratic equations. Supporting code is available at https://github.com/cchang5/quantum_poly_solver . This is a post-peer-review, pre-copyedit version of an article published in Scientific Reports. The final authenticated version is available online at: https://www.nature.com/articles/s41598-019-46729-
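    For intuition, the sketch below shows one common way such problems can be cast for an annealer: each unknown is expanded in a fixed-point binary encoding so that min ||Ax - b||^2 becomes a QUBO over binary variables. The encoding range, bit count, and the brute-force minimization (standing in for an actual annealer) are assumptions of this sketch, not the paper's implementation; see the linked repository for that.

```python
import itertools
import numpy as np

def linear_system_qubo(A, b, n_bits=4, lo=-2.0, hi=2.0):
    """Encode min ||A x - b||^2 as a QUBO via a fixed-point binary expansion
    of each unknown: x_i = lo + (hi - lo) * sum_j 2^{-(j+1)} q_{ij}."""
    n = A.shape[1]
    E = np.zeros((n, n * n_bits))            # x = E q + o for binary q
    for i in range(n):
        for j in range(n_bits):
            E[i, i * n_bits + j] = (hi - lo) * 2.0 ** (-(j + 1))
    o = np.full(n, lo)
    G = A @ E
    r = A @ o - b
    Q = G.T @ G                                    # quadratic QUBO coefficients
    Q[np.diag_indices_from(Q)] += 2.0 * G.T @ r    # linear terms (q_i^2 = q_i)
    return Q, E, o

# Tiny demo: brute-force the QUBO in place of a quantum annealer.
A = np.array([[1.0, 0.5], [0.25, 1.0]])
b = A @ np.array([0.75, -1.0])               # exact solution x = (0.75, -1.0)
Q, E, o = linear_system_qubo(A, b)
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda q: np.array(q) @ Q @ np.array(q))
print("recovered x:", E @ np.array(best) + o)
```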

    A conjugate gradient algorithm for the astrometric core solution of Gaia

    The ESA space astrometry mission Gaia, planned to be launched in 2013, has been designed to make angular measurements on a global scale with micro-arcsecond accuracy. A key component of the data processing for Gaia is the astrometric core solution, which must implement an efficient and accurate numerical algorithm to solve the resulting, extremely large least-squares problem. The Astrometric Global Iterative Solution (AGIS) is a framework that makes it possible to implement a range of different iterative solution schemes suitable for a scanning astrometric satellite. In order to find a computationally efficient and numerically accurate iteration scheme for the astrometric solution, compatible with the AGIS framework, we study an adaptation of the classical conjugate gradient (CG) algorithm and compare it to the so-called simple iteration (SI) scheme that was previously known to converge for this problem, although very slowly. The different schemes are implemented within a software test bed for AGIS known as AGISLab, which allows one to define, simulate and study scaled astrometric core solutions. After successful testing in AGISLab, the CG scheme has also been implemented in AGIS. The two algorithms CG and SI eventually converge to identical solutions, to within the numerical noise (of the order of 0.00001 micro-arcsec). These solutions are independent of the starting values (initial star catalogue), and we conclude that they are equivalent to a rigorous least-squares estimation of the astrometric parameters. The CG scheme converges up to a factor of four faster than SI in the tested cases, and in particular spatially correlated truncation errors are damped out much more efficiently with the CG scheme.
    Comment: 24 pages, 16 figures. Accepted for publication in Astronomy & Astrophysics
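    As a generic stand-in for the CG scheme discussed here (the real AGIS problem is vastly larger, sparse, and block-structured, and uses its own iteration variant), the sketch below runs conjugate gradients on the normal equations (CGLS) for a small dense least-squares problem; the problem sizes are arbitrary.

```python
import numpy as np

def cgls(A, b, tol=1e-12, max_iter=500):
    """Conjugate gradients on the normal equations (CGLS) for min ||A x - b||_2."""
    x = np.zeros(A.shape[1])          # the limit does not depend on the start vector
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = gamma0 = s @ s
    for _ in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new / gamma0) <= tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Overdetermined toy problem standing in for the (vastly larger) astrometric system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)
x = cgls(A, b)
print("normal-equation residual:", np.linalg.norm(A.T @ (A @ x - b)))
```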

    Greedy low-rank algorithm for spatial connectome regression

    Recovering brain connectivity from tract tracing data is an important computational problem in the neurosciences. Mesoscopic connectome reconstruction was previously formulated as a structured matrix regression problem (Harris et al., 2016), but existing techniques do not scale to the whole-brain setting. The corresponding matrix equation is challenging to solve due to its large scale, ill-conditioning, and a general form that lacks a convergent splitting. We propose a greedy low-rank algorithm for the connectome reconstruction problem in very high dimensions. The algorithm approximates the solution by a sequence of rank-one updates which exploit the sparse and positive definite problem structure. This algorithm was described previously (Kressner and Sirković, 2015) but had never been applied to this connectome problem, which posed a number of implementation challenges. We had to design judicious stopping criteria and employ efficient solvers for the three main sub-problems of the algorithm, including an efficient GPU implementation that alleviates the main bottleneck for large datasets. The performance of the method is evaluated on three examples: an artificial "toy" dataset and two whole-cortex instances using data from the Allen Mouse Brain Connectivity Atlas. We find that the method is significantly faster than previous methods and that moderate ranks offer good approximation. This speedup allows for the estimation of increasingly large-scale connectomes across taxa as these data become available from tracing experiments. The data and code are available online.
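    A much simplified sketch of the greedy rank-one idea follows, applied to a small dense Sylvester equation A X + X B = C rather than the actual connectome regression problem; each rank-one term is found by a few alternating least-squares sweeps, and the stopping rule, sweep counts, and test matrices are all illustrative assumptions.

```python
import numpy as np

def greedy_rank_one_sylvester(A, B, C, n_terms=5, als_sweeps=5):
    """Greedy low-rank sketch for A X + X B = C: build X as a sum of rank-one
    terms u v^T, each found by a few alternating least-squares sweeps."""
    n, m = C.shape
    X = np.zeros((n, m))
    I_n, I_m = np.eye(n), np.eye(m)
    for _ in range(n_terms):
        R = C - (A @ X + X @ B)                      # current residual
        if np.linalg.norm(R) <= 1e-12 * np.linalg.norm(C):
            break
        U, s, Vt = np.linalg.svd(R)                  # initialise (u, v) from R
        u, v = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])
        for _ in range(als_sweeps):
            # u-update: minimise ||A u v^T + u (B^T v)^T - R||_F over u
            w = B.T @ v
            M = (v @ v) * (A.T @ A) + (v @ w) * (A.T + A) + (w @ w) * I_n
            u = np.linalg.solve(M, A.T @ R @ v + R @ w)
            # v-update: minimise the same residual over v with u fixed
            z = A @ u
            N = (z @ z) * I_m + (u @ z) * (B.T + B) + (u @ u) * (B @ B.T)
            v = np.linalg.solve(N, R.T @ z + B @ R.T @ u)
        X = X + np.outer(u, v)
    return X

# Small dense demo; the real connectome systems are vastly larger and sparse.
rng = np.random.default_rng(2)
n = 30
A = B = np.eye(n) + 0.1 * np.ones((n, n)) / n        # symmetric positive definite
C = np.outer(rng.standard_normal(n), rng.standard_normal(n))
X = greedy_rank_one_sylvester(A, B, C)
print("relative residual:", np.linalg.norm(A @ X + X @ B - C) / np.linalg.norm(C))
```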

    On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography

    This paper deals with improvements to the contrast source inversion method, which is widely used in microwave tomography. First, the method is reviewed and weaknesses of both the criterion form and the optimization strategy are underlined. Then, two new algorithms are proposed. Both are based on the same criterion, similar to but more robust than the one used in contrast source inversion. The first technique keeps the main characteristics of the contrast source inversion optimization scheme but is based on a better exploitation of the conjugate gradient algorithm. The second technique is based on a preconditioned conjugate gradient algorithm and performs simultaneous updates of sets of unknowns that are normally processed sequentially. Both techniques are shown to be more efficient than the original contrast source inversion.
    Comment: 12 pages, 12 figures, 5 tables
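    The sketch below is a generic preconditioned conjugate gradient loop for a symmetric positive definite system, included only to show where a preconditioner enters the iteration; it is not the contrast-source-inversion criterion or the simultaneous-update scheme proposed in the paper, and the Jacobi preconditioner and test matrices are placeholders.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD system A x = b;
    apply_prec(r) should approximate A^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system with a simple Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(3)
n = 100
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)
b = rng.standard_normal(n)
x = pcg(A, b, apply_prec=lambda r: r / np.diag(A))
print("residual:", np.linalg.norm(A @ x - b))
```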

    Fast iterative solvers for convection-diffusion control problems

    In this manuscript, we describe effective solvers for the optimal control of stabilized convection-diffusion problems. We employ the local projection stabilization, which we show to give the same matrix system whether the discretize-then-optimize or optimize-then-discretize approach is used for this problem. We then derive two effective preconditioners, the first to be used with MINRES and the second with the Bramble-Pasciak Conjugate Gradient method. The key components of both preconditioners are an accurate mass matrix approximation, a good approximation of the Schur complement, and an appropriate multigrid process to enact this latter approximation. We present numerical results to demonstrate that these preconditioners yield convergence in a small number of iterations, robust with respect to the mesh size h and the regularization parameter β, for a range of problems.
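    A minimal sketch of the block-diagonal preconditioning idea follows, assuming SciPy's minres and a synthetic saddle-point system in place of the actual stabilized discretization; the diagonal and approximate Schur-complement blocks below are crude stand-ins for the mass-matrix, Schur-complement, and multigrid approximations described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

# Toy symmetric saddle-point system  [[A, B^T], [B, -C]] [y; p] = rhs,
# standing in for the (much larger, sparse) discretized optimality system.
rng = np.random.default_rng(4)
n, m = 80, 40
A0 = rng.standard_normal((n, n))
A = A0 @ A0.T + n * np.eye(n)        # SPD (1,1) block
B = rng.standard_normal((m, n))      # constraint block
C = 1e-2 * np.eye(m)                 # regularization block
K = np.block([[A, B.T], [B, -C]])
rhs = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(A_hat, S_hat): A_hat = diag(A), and
# S_hat = C + B diag(A)^{-1} B^T approximates the (negative) Schur complement.
dA = np.diag(A)
S_hat = C + (B / dA) @ B.T
L = np.linalg.cholesky(S_hat)

def apply_prec(r):
    y = r[:n] / dA                                        # A_hat^{-1} r_1
    p = np.linalg.solve(L.T, np.linalg.solve(L, r[n:]))   # S_hat^{-1} r_2
    return np.concatenate([y, p])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=P)
print(info, np.linalg.norm(K @ x - rhs))
```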