    Computing the Conditioning of the Components of a Linear Least Squares Solution

    In this paper, we address the accuracy of the results for the overdetermined full-rank linear least squares problem. We recall theoretical results obtained in Arioli, Baboulin and Gratton, SIMAX 29(2):413--433, 2007, on the conditioning of the least squares solution and of the components of the solution when the matrix perturbations are measured in the Frobenius or spectral norm. We then define computable estimates for these condition numbers and interpret them in terms of statistical quantities. In particular, we show that, in the classical linear statistical model, the ratio of the variance of one component of the solution to the variance of the right-hand side is exactly the condition number of this solution component when perturbations of the right-hand side are considered. We also provide code fragments using LAPACK routines to compute the variance-covariance matrix and the least squares conditioning, and we give the corresponding computational cost. Finally, we present a small historical numerical example used by Laplace in Theorie Analytique des Probabilites (1820) to compute the mass of Jupiter, together with experiments from the space industry involving real physical data.
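    The statistical interpretation above lends itself to a short illustration. The following is a minimal numpy sketch (not the paper's LAPACK implementation) that forms the variance-covariance matrix C = sigma^2 (A^T A)^{-1} from the R factor of a QR factorization and reads off a componentwise conditioning estimate as sqrt(C_ii / sigma^2); the function name and this particular normalization are our own.

```python
import numpy as np

def lls_component_conditioning(A, b):
    """Sketch: componentwise conditioning of a full-rank LLS solution."""
    m, n = A.shape
    Q, R = np.linalg.qr(A)                   # thin QR: A = Q R
    x = np.linalg.solve(R, Q.T @ b)          # least squares solution
    r = b - A @ x                            # residual
    sigma2 = (r @ r) / (m - n)               # unbiased variance estimate
    Rinv = np.linalg.solve(R, np.eye(n))     # R^{-1}
    C = sigma2 * (Rinv @ Rinv.T)             # variance-covariance matrix
    # Ratio var(x_i)/var(b), following the statistical interpretation
    # recalled in the abstract; equals the 2-norms of the rows of R^{-1},
    # i.e. of the rows of the pseudoinverse of A.
    kappa = np.sqrt(np.diag(C) / sigma2)
    return x, C, kappa
```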

    OPM, a collection of Optimization Problems in Matlab

    OPM is a small collection of CUTEst unconstrained and bound-constrained nonlinear optimization problems that can be used in Matlab for testing optimization algorithms directly (i.e., without installing additional software).

    Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature

    A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, which uses a model consisting of a Taylor expansion of arbitrary degree and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the $O(\epsilon_1^{-(p+1)/p})$ upper bound on evaluation complexity for finding first-order $\epsilon_1$-approximate minimizers using $p$ derivatives, and that this result does not hinge on the equivalence of norms in $\Re^n$. It is also shown that, if $p=2$, the bound of $O(\epsilon_2^{-3})$ evaluations for finding second-order $\epsilon_2$-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness results in an interesting modification of the subproblem termination rules, leading to an even more compact complexity analysis. In particular, it is shown when Newton's step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce the first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when $p=2$) and in AR2GN, and its iteration complexity is analyzed. Comment: A correction will be available soon.
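    To fix ideas, here is a hedged sketch of one second-order adaptive-regularization (AR2-type) iteration in the smooth Euclidean case, i.e., without the non-smooth norms the paper treats: the quadratic Taylor model is regularized by $(\sigma/3)\|s\|^3$, the step is accepted or rejected by comparing actual and predicted decrease, and $\sigma$ is adapted. The generic scipy subproblem solve stands in for the specialized algorithm proposed in the paper; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def ar2_step(f, grad, hess, x, sigma, eta=0.1, gamma=2.0):
    """Sketch of one AR2-type iteration with the Euclidean norm."""
    g, H = grad(x), hess(x)
    # Regularized model m(s) = g's + (1/2) s'Hs + (sigma/3) ||s||^3.
    model = lambda s: g @ s + 0.5 * s @ (H @ s) \
        + sigma / 3.0 * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros_like(x)).x   # crude subproblem solve
    rho = (f(x) - f(x + s)) / max(-model(s), 1e-16)
    if rho >= eta:                            # successful: accept, relax sigma
        return x + s, max(sigma / gamma, 1e-10)
    return x, sigma * gamma                   # unsuccessful: increase sigma
```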

    A Flexible Generalized Conjugate Residual Method with Inner Orthogonalization and Deflated Restarting

    This work is concerned with the development and study of a minimum residual norm subspace method, based on the generalized conjugate residual method with inner orthogonalization (GCRO), that allows flexible preconditioning and deflated restarting for the solution of non-symmetric or non-Hermitian linear systems. First we recall the main features of flexible generalized minimum residual with deflated restarting (FGMRES-DR), a recently proposed algorithm of the same family but based on the GMRES method. Next we introduce the new inner-outer subspace method named FGCRO-DR. A theoretical comparison of both algorithms is then made in the case of flexible preconditioning. It is proved that FGCRO-DR and FGMRES-DR are algebraically equivalent if a collinearity condition is satisfied. While being nearly as expensive as FGMRES-DR in terms of computational operations per cycle, FGCRO-DR offers the additional advantage of being suitable for the solution of sequences of slowly changing linear systems (where both the matrix and the right-hand side can change) through subspace recycling. Numerical experiments on the solution of multidimensional elliptic partial differential equations show the efficiency of FGCRO-DR when solving sequences of linear systems.
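    Since both FGCRO-DR and FGMRES-DR build on flexible Krylov methods, a compact sketch of the core flexible mechanism may help. Below is one cycle of plain FGMRES (no deflated restarting or recycling, unlike the methods above): because the preconditioner may change at every inner step, the preconditioned directions Z must be stored, and the correction is taken from Z rather than from the Arnoldi basis V. All names are illustrative.

```python
import numpy as np

def fgmres_cycle(A, b, x0, precond, m=30, tol=1e-10):
    """Sketch of one flexible GMRES cycle with a variable preconditioner."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1)); Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        Z[:, j] = precond(V[:, j], j)        # flexible: apply M_j^{-1}
        w = A @ Z[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:                # (happy) breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Z[:, :m] @ y                 # correction built from Z, not V
```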

    4DVAR by ensemble Kalman smoother

    We propose to use the ensemble Kalman smoother (EnKS) as the linear least squares solver in the Gauss-Newton method for the large nonlinear least squares problem arising in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members, and no tangent or adjoint operators are needed. Further, adding a regularization term replaces the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. Comment: 9 pages.
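    The EnKS itself is beyond a few lines, but the abstract's key observation, that Levenberg-Marquardt damping can be realized as an extra observation in a linear least squares solve, admits a short sketch. Here a dense Jacobian stands in for the ensemble approximation (the paper's point is precisely that the EnKS avoids forming it); the damped step solves min ||J s + r||^2 + lambda ||s||^2, written as additional rows. Names are illustrative.

```python
import numpy as np

def lm_step(residual, jac, x, lam):
    """Sketch: one Levenberg-Marquardt step via an augmented LLS problem."""
    r, J = residual(x), jac(x)
    n = len(x)
    # Append sqrt(lambda) * I rows: the regularization as "observations"
    # of s = 0, mirroring the extra observation added to the EnKS.
    J_aug = np.vstack([J, np.sqrt(lam) * np.eye(n)])
    r_aug = np.concatenate([r, np.zeros(n)])
    s, *_ = np.linalg.lstsq(J_aug, -r_aug, rcond=None)
    return x + s
```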