
    Updating preconditioners for modified least squares problems

    In this paper, we analyze how to update incomplete Cholesky preconditioners for solving least squares problems with iterative methods when the set of linear relations is updated with new information, a new variable is added or, conversely, some information or a variable is removed from the set. The proposed method computes a low-rank update of the preconditioner using a bordering method, which is inexpensive compared with the cost of computing a new preconditioner. Moreover, the numerical experiments presented show that this strategy gives, in many cases, a better preconditioner than other choices, including computing a new preconditioner from scratch or reusing an existing one. Partially supported by Spanish Grants MTM2014-58159-P and MTM2015-68805-REDT.

    Marín Mateos-Aparicio, J.; Mas Marí, J.; Guerrero-Flores, D.J.; Hayami, K. (2017). Updating preconditioners for modified least squares problems. Numerical Algorithms 75(2):491–508. https://doi.org/10.1007/s11075-017-0315-z
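
    A rough sense of why such updates are cheap: appending a row c^T to A changes the normal-equations matrix from A^T A to A^T A + c c^T, a rank-one modification, so an existing preconditioner can be corrected instead of recomputed. The sketch below illustrates this with a generic Sherman-Morrison correction; it is a stand-in for the paper's bordering method, and SciPy's spilu stands in for an incomplete Cholesky factorization.

```python
# Minimal sketch: reuse a preconditioner after appending one row to a
# least squares problem.  A Sherman-Morrison correction stands in for the
# bordering method of the paper; spilu stands in for incomplete Cholesky.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
A = sp.random(200, 50, density=0.1, random_state=0, format="csc")
b = rng.standard_normal(200)

# Preconditioner for the normal equations A^T A x = A^T b.
AtA = (A.T @ A).tocsc()
M = spla.spilu(AtA, drop_tol=1e-4)

# A new observation arrives: row c^T with measurement value 1.0.
c = rng.standard_normal(50)
AtA_new = AtA + sp.csc_matrix(np.outer(c, c))
rhs_new = A.T @ b + c * 1.0

# Sherman-Morrison applied to the preconditioner:
# (M + c c^T)^{-1} v = M^{-1} v - (M^{-1} c)(c^T M^{-1} v) / (1 + c^T M^{-1} c)
Minv_c = M.solve(c)
denom = 1.0 + c @ Minv_c

def apply_updated_preconditioner(v):
    Minv_v = M.solve(v)
    return Minv_v - Minv_c * (c @ Minv_v) / denom

P = spla.LinearOperator(AtA_new.shape, matvec=apply_updated_preconditioner)
x, info = spla.cg(AtA_new, rhs_new, M=P)
print("CG converged" if info == 0 else f"CG returned info = {info}")
```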

    A fast semi-direct least squares algorithm for hierarchically block separable matrices

    We present a fast algorithm for linear least squares problems governed by hierarchically block separable (HBS) matrices. Such matrices are generally dense but data-sparse and can describe many important operators, including those derived from asymptotically smooth radial kernels that are not too oscillatory. The algorithm is based on a recursive skeletonization procedure that exposes this sparsity and solves the dense least squares problem as a larger, equality-constrained, sparse one. It relies on a sparse QR factorization coupled with iterative weighted least squares methods. In essence, our scheme consists of a direct component, comprising matrix compression and factorization, followed by an iterative component to enforce certain equality constraints. At most two iterations are typically required for problems that are not too ill-conditioned. For an $M \times N$ HBS matrix with $M \geq N$ having bounded off-diagonal block rank, the algorithm has optimal $\mathcal{O}(M + N)$ complexity. If the rank increases with the spatial dimension, as is common for operators that are singular at the origin, then this becomes $\mathcal{O}(M + N)$ in 1D, $\mathcal{O}(M + N^{3/2})$ in 2D, and $\mathcal{O}(M + N^{2})$ in 3D. We illustrate the performance of the method on both over- and underdetermined systems in a variety of settings, with an emphasis on radial basis function approximation and efficient updating and downdating.

    Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
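
    The iterative component can be illustrated in isolation on a generic equality-constrained least squares problem. The sketch below uses the classical method of weighting with a few refinement sweeps; it captures the flavor of the constraint-enforcement step only, and none of the paper's skeletonization or sparse QR machinery.

```python
# Minimal sketch of an equality-constrained least squares problem
#   min ||A x - b||_2  subject to  C x = d
# solved by the classical method of weighting plus iterative refinement.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))   # data rows (overdetermined part)
b = rng.standard_normal(100)
C = rng.standard_normal((5, 20))     # equality constraints
d = rng.standard_normal(5)

tau = 1e4                            # large weight on the constraint rows
K = np.vstack([tau * C, A])          # weighted stacked system

x = np.zeros(20)
for it in range(3):                  # a couple of refinement sweeps suffice
    rhs = np.concatenate([tau * (d - C @ x), b - A @ x])
    dx, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    x += dx
    print(f"iter {it}: constraint residual = {np.linalg.norm(C @ x - d):.2e}")
```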

    full-FORCE: A Target-Based Method for Training Recurrent Networks

    Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable "target" dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons and greater noise robustness than traditional least-squares (FORCE) approaches. In addition, we show how introducing additional input signals into the target-generating network, which act as task hints, greatly extends the range of tasks that can be learned and provides control over the complexity and nature of the dynamics of the trained, task-performing network.

    Comment: 20 pages, 8 figures
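
    A minimal batch caricature of the target-based idea is sketched below: a teacher network driven by the desired output supplies per-neuron target drives, and a student network's full recurrent matrix is then fit to those targets by ridge regression. The published method instead trains online with recursive least squares, so this illustrates the principle only.

```python
# Batch caricature of target-based recurrent network training.  Two
# simplifications relative to the published method: a single ridge
# regression replaces online recursive least squares, and the student's
# states are assumed to track the teacher's during fitting.
import numpy as np

rng = np.random.default_rng(2)
N, T, dt = 300, 2000, 0.01
JD = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # teacher recurrent weights
u = rng.uniform(-1, 1, N)                            # input weights for the target

t = np.arange(T) * dt
f = np.sin(2 * np.pi * t) * np.cos(np.pi * t)        # target output signal

# 1) Run the teacher driven by the target; record rates and the total
#    "drive" each unit receives -- these drives are the learning targets.
x = 0.1 * rng.standard_normal(N)
R, D = np.zeros((T, N)), np.zeros((T, N))
for k in range(T):
    r = np.tanh(x)
    drive = JD @ r + u * f[k]
    R[k], D[k] = r, drive
    x = x + dt * (-x + drive)

# 2) Fit the student's full recurrent matrix J so that J r(t) reproduces
#    the teacher drives, plus a linear readout w for the output.
lam = 1e-3
G = R.T @ R + lam * np.eye(N)
J = np.linalg.solve(G, R.T @ D).T     # ridge regression, one row per neuron
w = np.linalg.solve(G, R.T @ f)

# 3) Run the trained student autonomously (no external input) and read out.
x = 0.1 * rng.standard_normal(N)
out = np.zeros(T)
for k in range(T):
    r = np.tanh(x)
    out[k] = w @ r
    x = x + dt * (-x + J @ r)
print("output/target correlation:", np.corrcoef(out, f)[0, 1].round(3))
```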

    A theory of linear estimation

    Theory of linear estimation and its applicability to problems of smoothing, filtering, extrapolation, and nonlinear estimation.
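
    As a generic illustration of linear estimation (not drawn from the report itself), the sketch below fits a linear model to noisy samples by least squares and uses it both to smooth the data and to extrapolate beyond it.

```python
# Generic linear estimation example: least squares fit of a linear model,
# used for smoothing (within the data) and extrapolation (beyond it).
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * t + 0.3 * rng.standard_normal(50)   # noisy linear signal

V = np.vander(t, 2)                                 # design matrix [t, 1]
coef, *_ = np.linalg.lstsq(V, y, rcond=None)        # linear estimate

smooth = V @ coef                                   # smoothed values
extrap = np.vander(np.array([12.0]), 2) @ coef      # extrapolation to t = 12
print("slope, intercept:", coef, "| prediction at t=12:", extrap[0])
```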