72 research outputs found

    A recursive three-stage least squares method for large-scale systems of simultaneous equations

    A new numerical method is proposed that uses the QR decomposition (and its variants) to derive recursively the three-stage least squares (3SLS) estimator of large-scale simultaneous equations models (SEMs). The 3SLS estimator is obtained sequentially, once the underlying model has been modified by adding or deleting rows of data. A new theoretical pseudo SEM is developed which has a non-positive definite dispersion matrix and is proved to yield the 3SLS estimator that would be derived if the modified SEM were estimated afresh. In addition, the computation of the iterative 3SLS estimator of the updated-observations SEM is considered. The new recursive method makes efficient use of previous computations, exploits sparsity in the pseudo SEM and uses orthogonal and hyperbolic matrix factorizations as its main computational tools. This allows the estimation of large-scale SEMs which would previously have been considered computationally infeasible. Numerical trials have confirmed the effectiveness of the new estimation procedures. The new method is illustrated through a macroeconomic application.
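
    For context on the estimator being updated, the sketch below is a plain, dense textbook 3SLS computation for a system of equations sharing one instrument matrix. It is not the recursive QR-based procedure of the paper, and the names (three_sls, ys, Zs, X) are illustrative.

```python
import numpy as np

def three_sls(ys, Zs, X):
    """Dense textbook 3SLS for y_i = Z_i @ d_i + u_i, i = 1..G, with a
    common instrument matrix X (T x k). A reference computation only,
    not the recursive QR-based estimator described in the abstract."""
    G, T = len(ys), X.shape[0]
    P = X @ np.linalg.solve(X.T @ X, X.T)          # projector onto span(X)
    # Stages 1-2: per-equation 2SLS estimates and their residuals
    resid = np.empty((T, G))
    for i, (y, Z) in enumerate(zip(ys, Zs)):
        d = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
        resid[:, i] = y - Z @ d
    S_inv = np.linalg.inv(resid.T @ resid / T)     # inverse error covariance
    # Stage 3: GLS on the stacked system, assembled block by block
    p = [Z.shape[1] for Z in Zs]
    off = np.cumsum([0] + p)
    A, b = np.zeros((off[-1], off[-1])), np.zeros(off[-1])
    for i in range(G):
        for j in range(G):
            A[off[i]:off[i+1], off[j]:off[j+1]] = S_inv[i, j] * (Zs[i].T @ P @ Zs[j])
            b[off[i]:off[i+1]] += S_inv[i, j] * (Zs[i].T @ P @ ys[j])
    return np.split(np.linalg.solve(A, b), off[1:-1])
```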

    Estimating large-scale general linear and seemingly unrelated regressions models after deleting observations

    A new numerical method to solve the downdating problem (and variants thereof), namely removing the effect of some observations from the generalized least squares (GLS) estimator of the general linear model (GLM) after it has been estimated, is extensively investigated. It is verified that the solution of the downdated least squares problem can be obtained from the estimation of an equivalent GLM in which the original model is updated with the imaginary deleted observations. This updated GLM has a non-positive definite dispersion matrix comprising complex covariance values, and it is proved herein to yield the same normal equations as the downdated model. Additionally, the problem of deleting observations from the seemingly unrelated regressions model is addressed, demonstrating the direct applicability of the method to other multivariate linear models. The algorithms which implement the novel downdating method make efficient use of the computations performed when estimating the original model, so that the computational cost is significantly reduced. This shows the great potential of the downdating method in computationally intensive problems. The downdating algorithms have been applied to real and synthetic data to illustrate their efficiency.
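
    The basic downdating identity can be checked in a few lines at the level of the normal equations, which is a simplification of the paper's factorization-based approach with imaginary observations and of the GLS setting; the ordinary least squares sketch below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

drop = np.arange(90, 100)                 # observations to remove
Xd, yd = X[drop], y[drop]

# Downdated normal equations: subtract the deleted block's contribution
XtX = X.T @ X - Xd.T @ Xd
Xty = X.T @ y - Xd.T @ yd
beta_down = np.linalg.solve(XtX, Xty)

# Reference: refit on the retained observations from scratch
keep = np.setdiff1d(np.arange(100), drop)
beta_ref, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)

assert np.allclose(beta_down, beta_ref)
```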

    Algorithms for Computing the QR Decomposition of a Set of Matrices with Common Columns

    The QR decomposition of a set of matrices which have common columns is investigated. The triangular factors of the QR decompositions are represented as nodes of a weighted directed graph. An edge between two nodes exists if and only if the columns of one of the matrices are a subset of the columns of the other. The weight of an edge denotes the computational complexity of deriving the triangular factor of the destination node from that of the source node. The problem is equivalent to constructing the graph and finding the minimum cost for visiting all the nodes. An algorithm which computes the QR decompositions by deriving the minimum spanning tree of the graph is proposed. Theoretical measures of complexity are derived, and numerical results from the implementation of this and alternative heuristic algorithms are given.
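
    The saving exploited by the graph edges can be illustrated directly: when one matrix's columns are a subset of another's, its triangular factor can be re-derived from the parent's (small) triangular factor rather than from the raw data. A minimal numpy check, with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 8))
idx = [0, 2, 3, 6]                  # columns shared with the smaller matrix
A = B[:, idx]

# Triangular factor of the "large" matrix
_, R_B = np.linalg.qr(B)

# Derive A's triangular factor by re-triangularising a few columns of R_B
# (cheap: the input here is 8 x 4 instead of 50 x 4)
_, R_A_from_B = np.linalg.qr(R_B[:, idx])

# Reference: factorise A from scratch
_, R_A = np.linalg.qr(A)

# The two factors agree up to the signs of their rows
assert np.allclose(np.abs(R_A_from_B), np.abs(R_A))
```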

    Greedy Givens algorithms for computing the rank-k updating of the QR decomposition

    A Greedy Givens algorithm for computing the rank-1 updating of the QR decomposition is proposed. An exclusive-read, exclusive-write parallel random access machine computational model is assumed. The complexity of the algorithms is calculated in two different ways. In the unlimited-parallelism case, a single time unit is required to apply a compound disjoint Givens rotation of any size. In the limited-parallelism case, all the disjoint Givens rotations can be applied simultaneously, but one time unit is required to apply a rotation to a two-element vector. The proposed Greedy algorithm requires approximately 5/8 of the number of steps performed by the conventional sequential Givens rank-1 algorithm under unlimited parallelism. A parallel implementation of the sequential Givens algorithm outperforms the Greedy one under limited parallelism. An adaptation of the Greedy algorithm to compute the rank-k updating of the QR decomposition has been developed. This algorithm outperforms a recently reported parallel method for small k, but its efficiency decreases as k increases.
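
    For reference, the conventional sequential Givens rank-1 update (the baseline against which the Greedy algorithm is compared) can be sketched as follows for a square matrix; the parallel scheduling of compound disjoint rotations is not shown, and the helper names are illustrative.

```python
import numpy as np

def givens(a, b):
    # c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def qr_rank1_update(Q, R, u, v):
    """Given A = Q @ R with A square, return Q1, R1 such that
    Q1 @ R1 = A + u v^T, using two sequential sweeps of Givens rotations."""
    Q, R = Q.copy(), R.astype(float).copy()
    n = R.shape[0]
    w = Q.T @ u
    # Sweep 1: rotate w to ||w|| e_1 from the bottom up; R becomes Hessenberg.
    for k in range(n - 1, 0, -1):
        c, s = givens(w[k - 1], w[k])
        G = np.array([[c, s], [-s, c]])
        w[k - 1:k + 1] = G @ w[k - 1:k + 1]
        R[k - 1:k + 1, :] = G @ R[k - 1:k + 1, :]
        Q[:, k - 1:k + 1] = Q[:, k - 1:k + 1] @ G.T
    R[0, :] += w[0] * v                    # the rank-1 term now touches one row
    # Sweep 2: chase the subdiagonal to restore the triangular form.
    for k in range(n - 1):
        c, s = givens(R[k, k], R[k + 1, k])
        G = np.array([[c, s], [-s, c]])
        R[k:k + 2, :] = G @ R[k:k + 2, :]
        Q[:, k:k + 2] = Q[:, k:k + 2] @ G.T
    return Q, R

A = np.random.default_rng(3).standard_normal((6, 6))
u, v = np.random.default_rng(4).standard_normal((2, 6))
Q, R = np.linalg.qr(A)
Q1, R1 = qr_rank1_update(Q, R, u, v)
assert np.allclose(Q1 @ R1, A + np.outer(u, v))
```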

    Ordinary linear model estimation on a massively parallel SIMD computer

    Efficient algorithms for estimating the coefficient parameters of the ordinary linear model on a massively parallel SIMD computer are presented. The numerical stability of the algorithms is ensured by using orthogonal transformations, in the form of Householder reflections and Givens plane rotations, to compute the complete orthogonal decomposition of the coefficient matrix. Algorithms for reconstructing the orthogonal matrices involved in the decompositions are also designed, implemented and analyzed. The implementation of all algorithms on the targeted SIMD array processor is considered in detail. Timing models for predicting the execution time of the implementations are derived using regression modelling methods. The timing models also provide insight into how the algorithms interact with the parallel computer. The predetermined factors used in the regression fits are derived from the number of memory layers involved in the factorization process of the matrices. Experimental results show the high accuracy and predictive power of the timing models.
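
    Stripped of the parallel machinery and the rank-deficient handling, the serial backbone of such an estimator is an orthogonal-factorization least squares solve; a minimal sketch (numpy's qr uses Householder reflections), with illustrative data:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
y = X @ rng.standard_normal(6) + 0.05 * rng.standard_normal(200)

# QR of the coefficient matrix, then back-substitution for the estimate
Q, R = np.linalg.qr(X)
beta = solve_triangular(R, Q.T @ y)

assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
```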

    CSDA Special Issues


    Parallel strategies for solving SURE models with variance inequalities and positivity of correlations constraints

    The problem of computing estimates of the parameters of SURE models with variance inequalities and positivity of correlations constraints is considered. Efficient algorithms that exploit the block bi-diagonal structure of the data matrix are presented. The computational complexity of the main matrix factorizations is analyzed. A compact method to solve the model with proper subset regressors is proposed.
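
    For orientation, an unconstrained, dense feasible-GLS computation for a SUR system is sketched below; the variance-inequality and positivity constraints and the block bi-diagonal factorizations of the paper are not reproduced, and the names (sur_fgls, ys, Xs) are illustrative.

```python
import numpy as np

def sur_fgls(ys, Xs):
    """Unconstrained feasible GLS for a SUR system y_i = X_i @ b_i + e_i,
    i = 1..G, with T common observations and Cov(e) = S kron I_T."""
    G, T = len(ys), ys[0].shape[0]
    # Step 1: equation-by-equation OLS residuals give an estimate of S.
    resid = np.column_stack([y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
                             for y, X in zip(ys, Xs)])
    S_inv = np.linalg.inv(resid.T @ resid / T)
    # Step 2: assemble and solve the GLS normal equations block by block.
    p = [X.shape[1] for X in Xs]
    off = np.cumsum([0] + p)
    A, b = np.zeros((off[-1], off[-1])), np.zeros(off[-1])
    for i in range(G):
        for j in range(G):
            A[off[i]:off[i+1], off[j]:off[j+1]] = S_inv[i, j] * (Xs[i].T @ Xs[j])
            b[off[i]:off[i+1]] += S_inv[i, j] * (Xs[i].T @ ys[j])
    return np.split(np.linalg.solve(A, b), off[1:-1])
```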

    Inconsistencies in SURE models: computational aspects

    The solution of the SURE model with a singular variance-covariance matrix results in redundancies and possibly inconsistencies among the observations of the model. A numerical procedure that generates a consistent model from an inconsistent one is proposed and investigated. The SVD is used to compute the various factorizations arising in the solution of the SURE model when it is treated as a generalized linear least squares problem.
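
    One way to see where the redundancies and inconsistencies come from is to rotate the model with the SVD of the singular covariance matrix: directions in its null space carry no noise, so they become exact restrictions that may or may not be mutually consistent. The sketch below is a generic illustration of that idea, not the paper's SURE-specific procedure, and all names are illustrative.

```python
import numpy as np

def split_singular_gls(X, y, Omega, tol=1e-10):
    """For y = X @ beta + e with Cov(e) = Omega singular, use the SVD of
    Omega to separate a noisy part from exact restrictions, and report
    whether the restrictions are mutually consistent."""
    U, s, _ = np.linalg.svd(Omega)        # Omega symmetric PSD: U diag(s) U.T
    noisy = s > tol * s.max()
    U1, U2 = U[:, noisy], U[:, ~noisy]    # assumes Omega really is singular
    # Null-space directions of Omega are noise-free: C @ beta = d exactly.
    C, d = U2.T @ X, U2.T @ y
    beta_c = np.linalg.lstsq(C, d, rcond=None)[0]
    consistent = np.allclose(C @ beta_c, d)
    # Noisy part: weighted least squares data with variances s[noisy].
    return (U1.T @ X, U1.T @ y, s[noisy]), (C, d), consistent
```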

    Handbook of parallel computing and statistics

    Technological improvements continue to push back the frontier of processor speed in modern computers. Unfortunately, the computational intensity demanded by modern research problems grows even faster. Parallel computing has emerged as the most successful bridge across this computational gap, and many popular solutions have emerged based on its concepts, such as grid computing and massively parallel supercomputers. The Handbook of Parallel Computing and Statistics systematically applies the principles of parallel computing to solving increasingly complex problems in statistics research. This unique reference weaves together the principles and theoretical models of parallel computing with the design, analysis, and application of algorithms for solving statistical problems. After a brief introduction to parallel computing, the book explores the architecture, programming, and computational aspects of parallel processing. Focus then turns to optimization methods, followed by statistical applications. These applications include algorithms for predictive modeling, adaptive design, real-time estimation of higher-order moments and cumulants, data mining, econometrics, and Bayesian computation. Expert contributors summarize recent results and explore new directions in these areas. Its intricate combination of theory and practical applications makes the Handbook of Parallel Computing and Statistics an ideal companion for helping to solve the abundance of computation-intensive statistical problems arising in a variety of fields.