
    Spectral Condition Numbers of Orthogonal Projections and Full Rank Linear Least Squares Residuals

    A simple formula is proved to be a tight estimate for the condition number of the full rank linear least squares residual with respect to the matrix of least squares coefficients and scaled 2-norms. The tight estimate reveals that the condition number depends on three quantities, two of which can cause ill-conditioning. The numerical linear algebra literature presents several estimates of various instances of these condition numbers. All the prior values exceed the formula introduced here, sometimes by large factors.
    Comment: 15 pages, 1 figure, 2 tables
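    The abstract's formula is not reproduced here, but the quantity it estimates can be probed numerically. The sketch below (problem data made up, not taken from the paper) perturbs the coefficient matrix of a full rank least squares problem and records the worst observed amplification of relative errors in the residual, a crude empirical lower bound on the condition number discussed above.

```python
# A minimal numerical probe (not the paper's formula): empirically estimate how
# sensitive the full rank least squares residual r = b - A x is to small
# perturbations of the coefficient matrix A, using scaled 2-norms.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.linalg.lstsq(A, b, rcond=None)[0]
r = b - A @ x                                  # least squares residual

# Worst observed amplification of relative errors over random perturbations
# of A; this is only an empirical lower bound on the true condition number.
kappa_est = 0.0
for _ in range(100):
    E = rng.standard_normal((m, n))
    E *= 1e-8 / np.linalg.norm(E)              # small perturbation, ||E|| = 1e-8
    x_p = np.linalg.lstsq(A + E, b, rcond=None)[0]
    r_p = b - (A + E) @ x_p
    amp = (np.linalg.norm(r_p - r) / np.linalg.norm(r)) / \
          (np.linalg.norm(E) / np.linalg.norm(A))
    kappa_est = max(kappa_est, amp)

print("empirical condition number of the residual w.r.t. A:", kappa_est)
```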

    Div First-Order System LL* (FOSLL*) for Second-Order Elliptic Partial Differential Equations

    The first-order system LL* (FOSLL*) approach for general second-order elliptic partial differential equations was proposed and analyzed in [10], in order to retain the full efficiency of the L2 norm first-order system least-squares (FOSLS) approach while exhibiting the generality of the inverse-norm FOSLS approach. The FOSLL* approach in [10] was applied to the div-curl system with added slack variables, and hence it is quite complicated. In this paper, we apply the FOSLL* approach to the div system and establish its well-posedness. For the corresponding finite element approximation, we obtain a quasi-optimal a priori error bound under the same regularity assumption as the standard Galerkin method, but without the restriction to sufficiently small mesh size. Unlike the FOSLS approach, the FOSLL* approach does not come with a free a posteriori error estimator; we therefore propose an explicit residual error estimator and establish its reliability and efficiency bounds.
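    The continuous FOSLL* finite element method is well beyond a short snippet, but the algebraic idea behind LL*, representing the solution as u = L*w and working with the operator LL* rather than L*L, has a simple finite-dimensional analogue. The sketch below uses a made-up matrix L (not the div system of the paper) to contrast the LL* route with an ordinary least-squares solve.

```python
# A rough finite-dimensional analogue of the LL* idea (not the paper's FOSLL*
# finite element method): represent the solution as u = L^T w and solve
# L L^T w = f, instead of working with L^T L. The matrix L is made up.
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6                        # underdetermined: L has full row rank
L = rng.standard_normal((m, n))
f = rng.standard_normal(m)

# LL* route: solve L L^T w = f, then set u = L^T w.
w = np.linalg.solve(L @ L.T, f)
u_llstar = L.T @ w                 # minimum 2-norm solution of L u = f

# Least-squares route for comparison (here L u = f is solvable exactly,
# and lstsq also returns the minimum-norm solution).
u_lstsq = np.linalg.lstsq(L, f, rcond=None)[0]

print("residual of LL* solution:   ", np.linalg.norm(L @ u_llstar - f))
print("difference from lstsq answer:", np.linalg.norm(u_llstar - u_lstsq))
```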

    Scaling Sparse Constrained Nonlinear Problems for Iterative Solvers

    We look at scaling a nonlinear optimization problem for iterative solvers that use at least first derivatives. These derivatives are either computed analytically or by differencing. We ignore iterative methods that are based on function evaluations only and do not use any derivative information. We also exclude methods where the full problem structure is unknown, such as variants of delayed column generation. We review related work in section (1). Despite its practical importance, as evidenced by widely used implementations of nonlinear programming algorithms, scaling has not received enough attention from a theoretical point of view. What it means to scale a nonlinear problem is itself not very clear. In this paper we attempt to define a scaling framework. We start with a description of a nonlinear problem in section (2). Various authors prefer different forms, but all forms can be converted to the form we show. We then describe our scaling framework in section (3) and show the equivalence between the original problem and the scaled problem. The correctness results of section (3.3) play an important role in the dynamic scaling scheme suggested. In section (4), we develop a prototypical algorithm that can be used to represent a variety of iterative solution methods. Using this, we examine the impact of scaling in section (5). In the last section (6), we look at what the goal should be for an ideal scaling scheme and make some implementation suggestions for nonlinear solvers.
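    As a concrete illustration of the kind of transformation such a framework formalizes (the model problem and the diagonal scaling matrices below are assumptions for illustration, not taken from the paper): substituting x = Dx y and scaling the constraints by Dc rescales the gradient to Dx @ grad_f and the constraint Jacobian to Dc @ J @ Dx, so a derivative-based solver sees better-balanced first derivatives while working on an equivalent problem.

```python
# A minimal sketch (assumed example, not the paper's framework) of diagonally
# scaling a constrained nonlinear problem: variables x = Dx @ y, constraints
# scaled by Dc. The solver then sees gradient Dx @ grad_f and Jacobian Dc @ J @ Dx.
import numpy as np

def f(x):                          # badly scaled objective
    return x[0] ** 2 + 1e6 * x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 2e6 * x[1]])

def c(x):                          # single equality constraint c(x) = 0
    return np.array([x[0] + 1e3 * x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1e3]])

# Diagonal scalings chosen here by hand to balance derivative magnitudes.
Dx = np.diag([1.0, 1e-3])
Dc = np.diag([1.0])

def grad_f_scaled(y):              # chain rule: d/dy f(Dx y) = Dx grad_f(Dx y)
    return Dx @ grad_f(Dx @ y)

def jac_c_scaled(y):               # d/dy Dc c(Dx y) = Dc J(Dx y) Dx
    return Dc @ jac_c(Dx @ y) @ Dx

x0 = np.array([1.0, 1e-3])         # a representative point; x[1] is naturally small
y0 = np.linalg.solve(Dx, x0)       # the same point in scaled coordinates

print("unscaled gradient:", grad_f(x0))           # entries differ by ~1e3
print("scaled   gradient:", grad_f_scaled(y0))    # entries of comparable size
print("unscaled Jacobian:", jac_c(x0))
print("scaled   Jacobian:", jac_c_scaled(y0))
```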