
    Computing 3SLS Solutions of Simultaneous Equation Models with a Possible Singular Variance-Covariance Matrix

    Algorithms for computing the three-stage least squares (3SLS) estimator usually require the disturbance covariance matrix to be non-singular. However, the solution of a reformulated simultaneous equation model (SEM) renders this condition redundant. Using the QR decomposition as the basic tool, the 3SLS estimator, its dispersion matrix and methods for estimating the singular disturbance covariance matrix are derived. Expressions revealing linear combinations between the observations which become redundant are also presented. Algorithms for computing the 3SLS estimator after the SEM has been modified by deleting or adding new observations or variables are found not to be very efficient, owing to the need to remove the endogeneity of the new data or to re-estimate the disturbance covariance matrix. Three methods are described for solving SEMs subject to separable linear equality constraints. The first method treats the constraints as additional precise observations, while the other two methods reparameterize the constraints in order to solve reduced unconstrained SEMs. Methods for computing the main matrix factorizations illustrate the basic principles to be adopted for solving SEMs on serial or parallel computers.
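    For orientation, a textbook 3SLS computation in which the inverse of the estimated disturbance covariance is replaced by its Moore-Penrose pseudo-inverse (so a singular estimate does not stop the computation) can be sketched as below. This is not the QR-based reformulation of the paper; the function name three_sls and its toy interface (y_list, Z_list, X) are illustrative assumptions only.

```python
import numpy as np

def three_sls(y_list, Z_list, X):
    """Textbook 3SLS for a system of G equations y_i = Z_i @ delta_i + u_i.

    y_list : list of G response vectors, each of length n
    Z_list : list of G regressor matrices (may contain endogenous columns)
    X      : n x k matrix of instruments (all exogenous variables)

    A pseudo-inverse of the estimated disturbance covariance is used, as a
    simple stand-in for the singular-covariance treatment in the paper.
    """
    n = X.shape[0]
    G = len(y_list)
    # Projector onto the instrument space, via a thin QR of X.
    Qx, _ = np.linalg.qr(X)
    P = Qx @ Qx.T

    # Stages 1-2: equation-by-equation 2SLS and residual covariance.
    resid = np.empty((n, G))
    for i, (y, Z) in enumerate(zip(y_list, Z_list)):
        delta_i = np.linalg.lstsq(P @ Z, y, rcond=None)[0]
        resid[:, i] = y - Z @ delta_i
    Sigma = resid.T @ resid / n            # G x G, possibly singular
    Sigma_pinv = np.linalg.pinv(Sigma)

    # Stage 3: stacked GLS with weight Sigma^+ (Kronecker) P.
    Zbar = [P @ Z for Z in Z_list]
    p = [Z.shape[1] for Z in Z_list]
    offs = np.cumsum([0] + p)
    A = np.zeros((sum(p), sum(p)))
    b = np.zeros(sum(p))
    for i in range(G):
        for j in range(G):
            A[offs[i]:offs[i+1], offs[j]:offs[j+1]] = (
                Sigma_pinv[i, j] * (Zbar[i].T @ Z_list[j]))
            b[offs[i]:offs[i+1]] += Sigma_pinv[i, j] * (Zbar[i].T @ y_list[j])
    delta = np.linalg.lstsq(A, b, rcond=None)[0]
    return delta, Sigma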

    Algorithms for Computing the QR Decomposition of a Set of Matrices with Common Columns

    The QR decomposition of a set of matrices which have common columns is investigated. The triangular factors of the QR decompositions are represented as nodes of a weighted directed graph. An edge between two nodes exists if and only if the columns of one of the matrices are a subset of the columns of the other. The weight of an edge denotes the computational complexity of deriving the triangular factor of the destination node from that of the source node. The problem is equivalent to constructing the graph and finding the minimum cost for visiting all the nodes. An algorithm which computes the QR decompositions by deriving the minimum spanning tree of the graph is proposed. Theoretical measures of complexity are derived and numerical results from the implementation of this and alternative heuristic algorithms are given.
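    The cost saving behind the edge weights can be illustrated with a small NumPy sketch: when the columns of B form a subset of the columns of A, the triangular factor of B can be obtained by re-triangularizing the corresponding columns of A's factor, instead of factorizing the tall matrix B from scratch. The function name qr_from_superset and the column-label interface are illustrative, not taken from the paper.

```python
import numpy as np

def qr_from_superset(R_A, cols_A, cols_B):
    """Derive the triangular factor of B from the factor of A when the
    columns of B are a subset of the columns of A.

    R_A    : k x k upper-triangular factor of A (A is n x k, n >> k)
    cols_A : column labels of A, in the order used when computing R_A
    cols_B : labels of the wanted subset, in the order they appear in B

    Only a k x m re-triangularization is needed, rather than a full
    n x m factorization of B -- the saving the graph edge weights in the
    abstract are meant to capture.  Rows may differ in sign from a
    direct factorization of B.
    """
    idx = [list(cols_A).index(c) for c in cols_B]
    S = R_A[:, idx]                        # k x m, staircase shaped
    return np.linalg.qr(S, mode='r')       # cheap re-triangularization

# Tiny usage example on random data with hypothetical column labels.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
cols_A = ['x1', 'x2', 'x3', 'x4', 'x5']
R_A = np.linalg.qr(A, mode='r')
R_B = qr_from_superset(R_A, cols_A, ['x2', 'x4', 'x5'])
```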

    Efficient strategies for deriving the subset VAR models

    Algorithms for computing the subset Vector Autoregressive (VAR) models are proposed. These algorithms can be used to choose a subset of the most statistically significant variables of a VAR model. In such cases, the selection criteria are based on the residual sum of squares or the estimated residual covariance matrix. The VAR model with zero coefficient restrictions is formulated as a Seemingly Unrelated Regressions (SUR) model. Furthermore, the SUR model is transformed into one of smaller size, where the exogenous matrices comprise columns of a triangular matrix. Efficient algorithms which exploit the common columns of the exogenous matrices, the sparse structure of the variance-covariance matrix of the disturbances and the special properties of the SUR models are investigated. The main computational tool of the selection strategies is the generalized QR decomposition and its modification.
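    As a rough illustration of the selection task itself (not of the SUR/QR algorithms in the paper), a brute-force subset search for a VAR(1), carried out equation by equation and scored by a BIC-type criterion on the residual sum of squares, might look like the sketch below. All names are hypothetical and the model is assumed to be mean-centred.

```python
import numpy as np
from itertools import combinations

def subset_var1(Y):
    """Brute-force subset selection for a mean-centred VAR(1),
    one equation at a time, scored by a BIC-type RSS criterion.
    (Plain OLS illustration only; not the paper's SUR/QR strategy.)
    """
    Y1, Y0 = Y[1:], Y[:-1]           # responses and lag-1 regressors
    T, K = Y1.shape
    chosen = []
    for k in range(K):
        y = Y1[:, k]
        best_bic, best_subset = np.inf, ()
        for m in range(K + 1):
            for subset in combinations(range(K), m):
                if subset:
                    Zs = Y0[:, list(subset)]
                    beta, *_ = np.linalg.lstsq(Zs, y, rcond=None)
                    rss = np.sum((y - Zs @ beta) ** 2)
                else:
                    rss = np.sum(y ** 2)
                bic = T * np.log(rss / T) + len(subset) * np.log(T)
                if bic < best_bic:
                    best_bic, best_subset = bic, subset
        chosen.append(best_subset)   # selected lag indices per equation
    return chosen
```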

    Greedy Givens algorithms for computing the rank-k updating of the QR decomposition

    A Greedy Givens algorithm for computing the rank-1 updating of the QR decomposition is proposed. An exclusive-read, exclusive-write parallel random access machine computational model is assumed. The complexity of the algorithms is calculated in two different ways. In the unlimited-parallelism case a single time unit is required to apply a compound disjoint Givens rotation of any size. In the limited-parallelism case all the disjoint Givens rotations can be applied simultaneously, but one time unit is required to apply a rotation to a two-element vector. The proposed Greedy algorithm requires approximately 5/8 of the number of steps performed by the conventional sequential Givens rank-1 algorithm under unlimited parallelism. A parallel implementation of the sequential Givens algorithm outperforms the Greedy one under limited parallelism. An adaptation of the Greedy algorithm to compute the rank-k updating of the QR decomposition has been developed. This algorithm outperforms a recently reported parallel method for small k, but its efficiency decreases as k increases.
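    For reference, the conventional sequential Givens rank-1 update that the Greedy variant is measured against can be sketched as follows, for a square matrix with a full QR factorization: one bottom-up sweep of rotations turns R into Hessenberg form while reducing Q^T u to a multiple of e1, and one top-down sweep restores the triangular form. The parallel scheduling of compound disjoint rotations studied in the paper is not reproduced, and the function names are mine.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_rank1_update(Q, R, u, v):
    """Given a full QR factorization Q @ R of a square A, return Q1, R1
    with Q1 @ R1 = A + np.outer(u, v), using two Givens sweeps."""
    n = Q.shape[0]
    Q, R, w = Q.copy(), R.copy(), Q.T @ u
    # Sweep 1 (bottom up): reduce w to a multiple of e1; R becomes Hessenberg.
    for i in range(n - 1, 0, -1):
        c, s = givens(w[i - 1], w[i])
        G = np.array([[c, s], [-s, c]])
        w[i - 1:i + 1] = G @ w[i - 1:i + 1]
        R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
        Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
    R[0, :] += w[0] * v          # rank-1 term now affects only the first row
    # Sweep 2 (top down): annihilate the subdiagonal to restore triangularity.
    for i in range(n - 1):
        c, s = givens(R[i, i], R[i + 1, i])
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, :] = G @ R[i:i + 2, :]
        Q[:, i:i + 2] = Q[:, i:i + 2] @ G.T
    return Q, R
```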

    Generalized Low Rank Models

    Full text link
    Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.
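    As a minimal concrete instance, a quadratic-loss generalized low rank model with missing entries reduces to alternating ridge-regularized least squares over the observed entries. The sketch below assumes that special case and an illustrative interface (glrm_quadratic, a Boolean observation mask); the paper's more general losses and regularizers would replace the per-row and per-column solves.

```python
import numpy as np

def glrm_quadratic(A, mask, rank, n_iters=50, reg=1e-3, seed=0):
    """Alternating least squares for a quadratic-loss low rank model:
    A[mask] ~ (X @ Y)[mask], with a ridge penalty on both factors."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.standard_normal((m, rank))
    Y = rng.standard_normal((rank, n))
    for _ in range(n_iters):
        # Update each row of X given Y, using that row's observed entries only.
        for i in range(m):
            obs = mask[i]
            if obs.any():
                Yo = Y[:, obs]
                X[i] = np.linalg.solve(Yo @ Yo.T + reg * np.eye(rank),
                                       Yo @ A[i, obs])
        # Update each column of Y given X.
        for j in range(n):
            obs = mask[:, j]
            if obs.any():
                Xo = X[obs]
                Y[:, j] = np.linalg.solve(Xo.T @ Xo + reg * np.eye(rank),
                                          Xo.T @ A[obs, j])
    return X, Y
```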

    Bayesian inference for structured additive regression models for large-scale problems with applications to medical imaging

    In applied statistics, regression models with high-dimensional coefficients can arise that cannot be estimated on ordinary computer systems. Among other settings, this applies to the analysis of digital images when spatio-temporal dependencies are taken into account, as is common in bio-medical research. In this thesis a procedure is formulated which makes it possible to fit regression models with high-dimensional coefficients and non-normal response values using only moderate computational equipment. To this end, the limitations of current inference strategies for structured additive regression models when applied to high-dimensional problems are demonstrated, and possible ways around them are discussed. Based on this, an algorithm is formulated whose strengths and weaknesses are analysed in simulation studies. Furthermore, the procedure is applied to three different areas of bio-medical imaging, showing that it is a promising candidate for answering high-dimensional problems.

    Numerical Linear Algebra applications in Archaeology: the seriation and the photometric stereo problems

    The aim of this thesis is to explore the application of Numerical Linear Algebra to Archaeology. An ordering problem called the seriation problem, used for dating findings and/or artifact deposits, is analysed in terms of graph theory. In particular, a Matlab implementation of an algorithm for spectral seriation, based on the use of the Fiedler vector of the Laplacian matrix associated with the problem, is presented. We consider bipartite graphs for describing the seriation problem, since the interrelationship between the units (i.e. archaeological sites) to be reordered can be described in terms of these graphs. In our archaeological metaphor of seriation, the two disjoint node sets into which the vertices of a bipartite graph can be divided represent the excavation sites and the artifacts found inside them. Since it is a difficult task to determine the closest bipartite network to a given one, we describe how a starting network can be approximated by a bipartite one by solving a sequence of fairly simple optimization problems. Another numerical problem related to Archaeology is the 3D reconstruction of the shape of an object from a set of digital pictures. In particular, the Photometric Stereo (PS) photographic technique is considered.
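    The spectral seriation step itself is compact: form the graph Laplacian of a symmetric similarity matrix between units, take the eigenvector belonging to the second-smallest eigenvalue (the Fiedler vector), and sort the units by its entries. A NumPy sketch is given below (not the thesis's Matlab code); the similarity matrix S is an assumed input, for example incidence @ incidence.T for a sites-by-artifacts 0/1 matrix.

```python
import numpy as np

def spectral_seriation(S):
    """Order n units from a symmetric similarity matrix S by sorting the
    Fiedler vector of the graph Laplacian L = D - S."""
    S = np.asarray(S, dtype=float)
    L = np.diag(S.sum(axis=1)) - S            # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                   # eigenvector of 2nd smallest
    return np.argsort(fiedler)                # permutation of the units
```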