
    An efficient LDU algorithm for the minimal least squares solution of linear systems

    The minimal least squares solution is a topic of interest due to the broad range of applications of this problem. Although it can be obtained from other algorithms, such as the Singular Value Decomposition (SVD) or the Complete Orthogonal Decomposition (COD), the use of LDU factorizations has its advantages, namely the lower computational cost and the low fill-in that can be obtained with this method. If the right and left null-subspaces (also known as the null and image subspaces, respectively) are to be obtained, these factorizations lead to fundamental subspaces, which are sparse by definition. Here an algorithm that takes advantage of both the Peters-Wilkinson method and Sautter's method is presented; this combination gives good performance in all cases. The method also optimizes memory use by storing the right and left null-subspaces in the factored matrix.

    The authors wish to thank the Spanish Ministry of Economy and Competitiveness for its support through grant DPI2016-80372-R, which also includes funding through the European FEDER program, and the Education Department of the Basque Government for its support through grant IT947-16.
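    For a rough picture of the LU route to least squares that the abstract builds on, here is a minimal Python sketch of the Peters-Wilkinson idea for the full-column-rank case: factor PA = LU, solve a well-conditioned least squares problem with the unit lower trapezoidal factor L, then back-substitute with U. The function name and the full-rank restriction are ours; the paper's algorithm additionally handles rank deficiency, recovers the minimal-norm solution, and stores the null-subspaces in the factored matrix.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def peters_wilkinson_lsq(A, b):
    """Least squares via LU (Peters-Wilkinson), sketched for the
    full-column-rank case only; an illustration, not the paper's method."""
    # A = P L U with L (m x n) unit lower trapezoidal, U (n x n) upper triangular
    P, L, U = lu(A)
    # min ||A x - b|| = min ||L U x - P^T b||; set y = U x and solve the
    # well-conditioned normal equations L^T L y = L^T (P^T b).
    y = np.linalg.solve(L.T @ L, L.T @ (P.T @ b))
    # Back-substitute U x = y.
    return solve_triangular(U, y)

# Example: overdetermined full-rank system
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x = peters_wilkinson_lsq(A, b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```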

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results depend strongly on the model of arithmetic: most of them use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions for deciding whether a high-accuracy algorithm exists in the TM, and describe progress toward a decision procedure that takes any problem and provides either a high-accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x+y+z, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and the decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.

    Comment: 49 pages, 6 figures, 1 table
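    As a concrete instance of the kind of extra accurate operation such a library might supply, here is a short Python sketch of an accurate x+y+z built from Knuth's TwoSum error-free transformation. The function names are ours, and this is only an illustration of the operation's effect, not the survey's construction.

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    s + e == a + b exactly in IEEE binary floating point."""
    s = a + b
    bv = s - a                       # "virtual" b as actually added
    e = (a - (s - bv)) + (b - bv)    # exact rounding error of a + b
    return s, e

def accurate_sum3(x, y, z):
    """Accurately evaluate x + y + z by accumulating the rounding
    errors of the two additions and adding them back at the end."""
    s, e1 = two_sum(x, y)
    s, e2 = two_sum(s, z)
    return s + (e1 + e2)

# The naive sum loses the 1.0 entirely; the compensated sum keeps it.
print((1e16 + 1.0) - 1e16)              # 0.0
print(accurate_sum3(1e16, 1.0, -1e16))  # 1.0
```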

    Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

    In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [22], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the trace norm constraint. In this paper, we explore a variant of Nesterov's smooth method [20] and interior point methods for computing the penalized least squares estimate. The performance of these methods is then compared on a set of randomly generated instances. We show that the variant of Nesterov's smooth method [20] substantially outperforms the interior point method implemented in SDPT3 version 4.0 (beta) [19]. Moreover, the former method is much more memory efficient.

    Comment: 27 pages
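    For intuition about the problem class, the penalized estimate min_B ½‖XB − Y‖_F² + λ‖B‖_* can be attacked with a plain proximal gradient method, since the proximal operator of the trace (nuclear) norm is singular value soft-thresholding. The sketch below is a generic, unaccelerated first-order scheme with names of our choosing; it is not the authors' variant of Nesterov's smooth method [20], nor the SDPT3 interior point solver [19].

```python
import numpy as np

def svt(B, tau):
    """Singular value soft-thresholding: the proximal operator of
    tau * ||.||_* (the trace/nuclear norm)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_lsq(X, Y, lam, iters=500):
    """Proximal gradient descent for min_B 0.5*||X B - Y||_F^2 + lam*||B||_*,
    with constant step size 1/L, L the gradient's Lipschitz constant."""
    L = np.linalg.norm(X, 2) ** 2          # L = sigma_max(X)^2
    B = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X.T @ (X @ B - Y)              # gradient of the smooth term
        B = svt(B - G / L, lam / L)        # gradient step, then prox
    return B

# Small synthetic instance: low-rank coefficient matrix plus noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
B_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
Y = X @ B_true + 0.1 * rng.standard_normal((100, 15))
B_hat = trace_norm_lsq(X, Y, lam=5.0)
print(np.linalg.matrix_rank(B_hat, tol=1e-6))  # typically small: the penalty favors low rank
```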