
    Modified Truncated Randomized Singular Value Decomposition (MTRSVD) Algorithms for Large Scale Discrete Ill-posed Problems with General-Form Regularization

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: $\min \|Lx\|$ subject to $\min \|Ax - b\|$, where $L$ is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to $A$. We use rank-$k$ truncated randomized SVD (TRSVD) approximations to $A$ by truncating the rank-$(k+q)$ RSVD approximations to $A$, where $q$ is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as $k$ increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms.
    Comment: 26 pages, 6 figures
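
    To make the truncation step concrete, here is a minimal NumPy sketch of a rank-$k$ TRSVD obtained by truncating a rank-$(k+q)$ RSVD built with a Gaussian range finder. The function name and defaults are illustrative; the paper's full MTRSVD method additionally solves an inner least squares problem with LSQR at each step.

        import numpy as np

        def trsvd(A, k, q=10, rng=None):
            """Rank-k truncated randomized SVD: form a rank-(k+q) RSVD with a
            Gaussian range finder, then keep only the leading k triplets."""
            rng = np.random.default_rng(rng)
            Omega = rng.standard_normal((A.shape[1], k + q))   # oversampled test matrix
            Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal basis for the approximate range
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :k], s[:k], Vt[:k]         # truncate rank-(k+q) RSVD to rank k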

    Perturbation Analysis and Randomized Algorithms for Large-Scale Total Least Squares Problems

    In this paper, we present perturbation analysis and randomized algorithms for total least squares (TLS) problems. We derive a perturbation bound and check its sharpness by numerical experiments. Motivated by the recently popular probabilistic algorithms for low-rank approximations, we develop randomized algorithms for the TLS and the truncated total least squares (TTLS) solutions of large-scale discrete ill-posed problems, which can greatly reduce the computational time while still keeping good accuracy.
    Comment: 27 pages, 10 figures, 8 tables
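
    For reference, the classical (non-randomized) TLS solution that these algorithms approximate can be read off the SVD of the augmented matrix $[A\ b]$; a minimal NumPy sketch, assuming the generic solvability condition, follows.

        import numpy as np

        def tls(A, b):
            """Classical total least squares via the SVD of the augmented matrix
            [A b]; assumes the generic case where the last entry of the right
            singular vector for the smallest singular value is nonzero."""
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
            v = Vt[-1]             # right singular vector of the smallest singular value
            return -v[:n] / v[n]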

    Faster SVD-Truncated Least-Squares Regression

    We develop a fast algorithm for computing the "SVD-truncated" regularized solution to the least-squares problem $\min_{x} \|Ax - b\|_2$. Let $A_k$ of rank $k$ be the best rank-$k$ matrix computed via the SVD of $A$. Then, the SVD-truncated regularized solution is $x_k = A_k^\dagger b$. If $A$ is $m \times n$, then it takes $O(mn\min\{m,n\})$ time to compute $x_k$ using the SVD of $A$. We give an approximation algorithm for $x_k$ which constructs a rank-$k$ approximation $\tilde{A}_k$ and computes $\tilde{x}_k = \tilde{A}_k^\dagger b$ in roughly $O(\mathrm{nnz}(A)\, k \log n)$ time. Our algorithm uses a randomized variant of subspace iteration. We show that, with high probability, $\|A\tilde{x}_k - b\|_2 \approx \|Ax_k - b\|_2$ and $\|x_k - \tilde{x}_k\|_2 \approx 0$.
    Comment: 2014 IEEE International Symposium on Information Theory
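
    A hedged NumPy sketch of the idea: build a rank-$k$ approximation by randomized subspace iteration and apply its pseudo-inverse to $b$. The parameter defaults and the dense SVD of the small projected matrix are simplifications for illustration, not the paper's exact algorithm.

        import numpy as np

        def truncated_ls(A, b, k, power_iters=2, oversample=10, rng=None):
            """Approximate x_k = A_k^+ b via randomized subspace iteration."""
            rng = np.random.default_rng(rng)
            Y = A @ rng.standard_normal((A.shape[1], k + oversample))
            for _ in range(power_iters):
                Y, _ = np.linalg.qr(Y)       # re-orthonormalize for numerical stability
                Y = A @ (A.T @ Y)            # subspace (power) iteration
            Q, _ = np.linalg.qr(Y)
            U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            # apply the pseudo-inverse of the rank-k approximation to b
            return Vt[:k].T @ (((Q @ U)[:, :k].T @ b) / s[:k])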

    Randomized Algorithms for Large-scale Inverse Problems with General Regularizations

    We shall investigate randomized algorithms for solving large-scale linear inverse problems with general regularizations. We first present techniques to transform inverse problems of general form into ones of standard form, then apply randomized algorithms to reduce large-scale systems of standard form to much smaller-scale systems, and seek their regularized solutions in combination with some popular choice rules for the regularization parameters. We then propose a second approach for solving large-scale ill-posed systems with general regularizations. This involves a new randomized generalized SVD algorithm that can essentially reduce the size of the original large-scale ill-posed systems. The reduced systems can provide approximate regularized solutions with about the same accuracy as the ones by the classical generalized SVD, and, more importantly, the new approach gains noticeably in robustness, stability and computational time, as it needs only to work on problems of much smaller size. Numerical results are given to demonstrate the efficiency of the algorithms.
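
    For the simplest case of a square invertible $L$, the general-to-standard-form transformation amounts to the substitution $y = Lx$; a minimal NumPy sketch is below. The paper treats the general rectangular case, which requires the $A$-weighted pseudoinverse of $L$ rather than an ordinary inverse.

        import numpy as np

        def general_to_standard_tikhonov(A, b, L, lam):
            """Solve min ||Ax-b||^2 + lam^2 ||Lx||^2 with invertible L by the
            substitution y = Lx, giving a standard-form problem in y."""
            A_bar = A @ np.linalg.inv(L)   # A L^{-1}; in practice factor L instead of inverting
            n = A_bar.shape[1]
            stacked = np.vstack([A_bar, lam * np.eye(n)])
            rhs = np.concatenate([b, np.zeros(n)])
            y = np.linalg.lstsq(stacked, rhs, rcond=None)[0]
            return np.linalg.solve(L, y)   # map back: x = L^{-1} y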

    Convergence of Regularization Parameters for Solutions Using the Filtered Truncated Singular Value Decomposition

    The truncated singular value decomposition may be used to find the solution of linear discrete ill-posed problems in conjunction with Tikhonov regularization, and requires the estimation of a regularization parameter that balances the size of the fit-to-data term against that of the regularization term. The unbiased predictive risk estimator is one suggested method for finding the regularization parameter when the noise in the measurements is normally distributed with known variance. In this paper we provide an algorithm, based on the unbiased predictive risk estimator, that automatically finds both the regularization parameter and the number of terms to use from the singular value decomposition. Underlying the algorithm is a new result proving that the regularization parameter converges with the number of terms taken from the singular value decomposition. For the analysis it is sufficient to assume that the discrete Picard condition is satisfied for exact data and that noise completely contaminates the measured data coefficients for a sufficiently large number of terms, dependent on both the noise level and the degree of ill-posedness of the system. A lower bound for the regularization parameter is provided, leading to a computationally efficient algorithm. Supporting results are compared with those obtained using the method of generalized cross validation. Simulations for two-dimensional examples verify the theoretical analysis and the effectiveness of the algorithm for increasing noise levels, and demonstrate that the relative reconstruction errors obtained using the truncated singular value decomposition are less than those obtained using the singular value decomposition.
    This is a pre-print of an article published in BIT Numerical Mathematics. The final authenticated version is available online at: https://doi.org/10.1007/s10543-019-00762-7
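
    For concreteness, a minimal NumPy sketch of the unbiased predictive risk estimator for Tikhonov regularization, written through the SVD, is given below; here s holds the singular values, beta the rotated data coefficients $U^Tb$, and sigma2 the known noise variance. This is the textbook estimator, not the paper's combined parameter-and-truncation algorithm.

        import numpy as np

        def upre(lam, s, beta, m, sigma2):
            """Unbiased predictive risk estimator for Tikhonov regularization,
            using the SVD filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
            f = s**2 / (s**2 + lam**2)
            residual = np.sum(((1.0 - f) * beta)**2)   # predictive residual norm squared
            trace_H = np.sum(f)                        # trace of the influence matrix
            return residual / m + (2.0 * sigma2 / m) * trace_H - sigma2

        # The regularization parameter is then chosen by minimizing upre over a grid of lam values.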

    Incremental Truncated LSTD

    Balancing computational efficiency against sample efficiency is an important goal in reinforcement learning. Temporal difference (TD) learning algorithms stochastically update the value function, with a linear time complexity in the number of features, whereas least-squares temporal difference (LSTD) algorithms are sample efficient but can be quadratic in the number of features. In this work, we develop an efficient incremental low-rank LSTD($\lambda$) algorithm that progresses towards the goal of better balancing computation and sample efficiency. The algorithm reduces the computation and storage complexity to the number of features times the chosen rank parameter, while summarizing past samples efficiently enough to nearly attain the sample complexity of LSTD. We derive a simulation bound on the solution given by truncated low-rank approximation, illustrating a bias-variance trade-off dependent on the choice of rank. We demonstrate that the algorithm effectively balances computational complexity and sample efficiency for policy evaluation in a benchmark task and a high-dimensional energy allocation domain.
    Comment: Accepted to IJCAI 201
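
    As a point of reference, a batch (non-incremental) LSTD(0) solve with rank truncation can be sketched in a few lines of NumPy; the truncation plays the regularizing role described above, but the paper's actual contribution, the incremental low-rank update, is not reproduced here.

        import numpy as np

        def lstd_truncated(features, next_features, rewards, gamma=0.99, rank=10):
            """Batch LSTD(0): accumulate A = sum phi (phi - gamma phi')^T and
            b = sum r phi, then solve A w = b with a rank-truncated SVD."""
            d = features.shape[1]
            A, b = np.zeros((d, d)), np.zeros(d)
            for phi, phi_next, r in zip(features, next_features, rewards):
                A += np.outer(phi, phi - gamma * phi_next)
                b += r * phi
            U, s, Vt = np.linalg.svd(A)
            k = min(rank, int(np.sum(s > 1e-10)))      # drop negligible singular values
            return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])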

    Regularized Reconstruction of a Surface from its Measured Gradient Field

    This paper presents several new algorithms for the regularized reconstruction of a surface from its measured gradient field. By taking a matrix-algebraic approach, we establish a general framework for the regularized reconstruction problem based on the Sylvester matrix equation. Specifically, Spectral Regularization via Generalized Fourier Series (e.g., Discrete Cosine Functions, Gram Polynomials, Haar Functions, etc.), Tikhonov Regularization, Constrained Regularization by imposing boundary conditions, and regularization via Weighted Least Squares can all be solved expediently in the context of the Sylvester equation framework. State-of-the-art solutions to this problem are based on sparse matrix methods, which are no better than $\mathcal{O}(n^6)$ algorithms for an $m \times n$ surface. In contrast, the newly proposed methods are based on the global least squares cost function and are all $\mathcal{O}(n^3)$ algorithms; in fact, the new algorithms have the same computational complexity as an SVD of the same size. The new algorithms are several orders of magnitude faster than the state of the art; we therefore present, for the first time, Monte-Carlo simulations demonstrating the statistical behaviour of the algorithms when subject to various forms of noise. We establish methods that yield the lower bound of their respective cost functions, and therefore represent the "Gold-Standard" benchmark solutions for the various forms of noise. The new methods are the first algorithms for regularized reconstruction on the order of megapixels, which is essential to methods such as Photometric Stereo.
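
    A minimal SciPy sketch of the Sylvester-equation formulation follows, assuming unit grid spacing and square central-difference matrices; a tiny Tikhonov term pins down the otherwise undetermined constant of integration. The spectral, constrained and weighted variants discussed above are not reproduced here.

        import numpy as np
        from scipy.linalg import solve_sylvester

        def diff_matrix(k):
            """Central-difference matrix, one-sided at the boundaries (k x k)."""
            D = (np.eye(k, k, 1) - np.eye(k, k, -1)) / 2.0
            D[0, :2] = [-1.0, 1.0]
            D[-1, -2:] = [-1.0, 1.0]
            return D

        def surface_from_gradient(Gx, Gy, lam=1e-8):
            """Global least squares surface Z from gradients Gy ~ Dm Z, Gx ~ Z Dn^T,
            via the normal equations (Dm^T Dm + lam I) Z + Z (Dn^T Dn) = Dm^T Gy + Gx Dn."""
            m, n = Gx.shape
            Dm, Dn = diff_matrix(m), diff_matrix(n)
            return solve_sylvester(Dm.T @ Dm + lam * np.eye(m),
                                   Dn.T @ Dn,
                                   Dm.T @ Gy + Gx @ Dn)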

    Quantum Regularized Least Squares Solver with Parameter Estimate

    In this paper we propose a quantum algorithm to determine the Tikhonov regularization parameter and solve ill-conditioned linear equations, such as those arising from the finite element discretization of linear or nonlinear inverse problems. For the regularized least squares problem with a fixed regularization parameter, we use the HHL algorithm and work on an extended matrix with a smaller condition number. For the determination of the regularization parameter, we combine the classical L-curve and GCV function, and design quantum algorithms that compute the norms of the regularized solution and the corresponding residual in parallel and locate the best regularization parameter by Grover's search. The quantum algorithm can achieve a quadratic speedup in the number of regularization parameters and an exponential speedup in the problem dimension.
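
    For orientation, the two quantities that the quantum routine estimates in parallel, the regularized solution norm and the residual norm, can be computed classically from a single SVD; the sketch below evaluates these L-curve points over a grid of Tikhonov parameters and is a classical reference, not the quantum algorithm.

        import numpy as np

        def lcurve_points(A, b, lambdas):
            """Return (residual norm, solution norm) pairs for Tikhonov parameters."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            pts = []
            for lam in lambdas:
                f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
                x_norm = np.linalg.norm(f * beta / s)      # ||x_lam||
                r_norm = np.linalg.norm((1.0 - f) * beta)  # residual within range(A)
                pts.append((r_norm, x_norm))
            return pts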

    Global Optimization Methods for Gravitational Lens Systems with Regularized Sources

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source-plane intensity distribution is used to find an initial approximation to the optimal lens parameters, and the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify the estimate of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
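
    The link between GCV and the number of source degrees of freedom can be made concrete: the trace of the influence matrix, i.e. the sum of the Tikhonov filter factors, is the effective degrees of freedom, and it enters the GCV denominator. A minimal NumPy sketch with the same symbols as the UPRE sketch above follows; it is the textbook estimator, not the lens-modelling pipeline.

        import numpy as np

        def gcv(lam, s, beta, m):
            """GCV score for a Tikhonov-regularized linear fit, plus the
            effective number of degrees of freedom (trace of the influence matrix)."""
            f = s**2 / (s**2 + lam**2)
            dof = np.sum(f)                            # effective degrees of freedom
            residual = np.sum(((1.0 - f) * beta)**2)
            return m * residual / (m - dof)**2, dof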

    Regularization Properties of the Krylov Iterative Solvers CGME and LSMR For Linear Discrete Ill-Posed Problems with an Application to Truncated Randomized SVDs

    For the large-scale linear discrete ill-posed problem $\min\|Ax - b\|$ or $Ax = b$ with $b$ contaminated by Gaussian white noise, there are four commonly used Krylov solvers: LSQR and its mathematically equivalent CGLS, i.e., the Conjugate Gradient (CG) method applied to $A^TAx = A^Tb$; CGME, i.e., the CG method applied to $\min\|AA^Ty - b\|$ or $AA^Ty = b$ with $x = A^Ty$; and LSMR, i.e., the minimal residual (MINRES) method applied to $A^TAx = A^Tb$. These methods have intrinsic regularizing effects, where the number $k$ of iterations plays the role of the regularization parameter. In this paper, we establish a number of regularization properties of CGME and LSMR, including the filtered SVD expansion of CGME iterates, and prove that the 2-norm filtering best regularized solutions by CGME and LSMR are less accurate than and at least as accurate as those by LSQR, respectively. We also prove that the semi-convergence of CGME and LSMR always occurs no later than and no sooner than that of LSQR, respectively. As a byproduct, using the analysis approach for CGME, we improve a fundamental result on the accuracy of the truncated rank-$k$ approximate SVD of $A$ generated by randomized algorithms, and reveal how the truncation step damages the accuracy. Numerical experiments justify our results on CGME and LSMR.
    Comment: 30 pages, 7 figures
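
    Semi-convergence is easy to observe numerically: the error of the LSQR iterates first decreases and then grows once the noise is picked up, so the iteration count $k$ acts as the regularization parameter. The SciPy sketch below, assuming a test problem with known exact solution, reruns LSQR from scratch for each $k$ for clarity rather than efficiency.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        def semiconvergence_curve(A, b, x_true, max_iters=50):
            """Relative error of the LSQR iterate after k iterations, k = 1..max_iters;
            the minimizing k marks the semi-convergence point."""
            errors = []
            for k in range(1, max_iters + 1):
                x_k = lsqr(A, b, iter_lim=k)[0]
                errors.append(np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
            return errors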