
    On the Early History of the Singular Value Decomposition

    This paper surveys the contributions of five mathematicians, Eugenio Beltrami (1835–1899), Camille Jordan (1838–1921), James Joseph Sylvester (1814–1897), Erhard Schmidt (1876–1959), and Hermann Weyl (1885–1955), who were responsible for establishing the existence of the singular value decomposition and developing its theory. (Also cross-referenced as UMIACS-TR-92-31.)

    Shrinkage Function And Its Applications In Matrix Approximation

    The shrinkage function is widely used in matrix low-rank approximation, compressive sensing, and statistical estimation. In this article, an elementary derivation of the shrinkage function is given. In addition, applications of the shrinkage function are demonstrated in solving several well-known problems, together with a new result in matrix approximation.
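A minimal sketch of the scalar shrinkage (soft-thresholding) function and its best-known matrix application, singular value thresholding; the threshold value and test matrix below are illustrative choices, not taken from the article:

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (shrinkage): sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding: apply shrinkage to the singular
    values of A.  This is the closed-form minimizer of
    0.5 * ||X - A||_F^2 + tau * ||X||_* (nuclear-norm regularization)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt
```

Applied to a matrix, shrinkage both damps large singular values and zeroes out small ones, which is why it appears throughout low-rank approximation and compressive sensing.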

    Joint Covariance Estimation with Mutual Linear Structure

    We consider the problem of joint estimation of structured covariance matrices. Assuming the structure is unknown, estimation is achieved using heterogeneous training sets. Namely, given groups of measurements coming from centered populations with different covariances, our aim is to determine the mutual structure of these covariance matrices and estimate them. Supposing that the covariances span a low-dimensional affine subspace in the space of symmetric matrices, we develop a new efficient algorithm discovering the structure and using it to improve the estimation. Our technique is based on the application of principal component analysis in the matrix space. We also derive an upper performance bound of the proposed algorithm in the Gaussian scenario and compare it with the Cramér–Rao lower bound. Numerical simulations are presented to illustrate the performance benefits of the proposed method.
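The matrix-space PCA idea can be sketched as follows: vectorize the sample covariances, find the leading principal directions of the centered stack, and project each covariance onto the resulting affine subspace. The function name and the plain projection step are illustrative; the paper's algorithm additionally addresses estimation noise and derives performance bounds.

```python
import numpy as np

def joint_structure_estimate(covs, r):
    """Project K sample covariance matrices onto the r-dimensional affine
    subspace (in the space of symmetric matrices) spanned by their leading
    principal components -- a sketch of PCA in matrix space."""
    covs = np.asarray(covs)              # shape (K, p, p)
    K, p, _ = covs.shape
    V = covs.reshape(K, p * p)           # vectorize each matrix
    mean = V.mean(axis=0)
    _, _, Wt = np.linalg.svd(V - mean, full_matrices=False)
    basis = Wt[:r]                       # top-r directions in matrix space
    proj = mean + (V - mean) @ basis.T @ basis
    est = proj.reshape(K, p, p)
    return 0.5 * (est + est.transpose(0, 2, 1))   # re-symmetrize
```

When the true covariances really do lie in a low-dimensional affine subspace, this projection recovers them exactly from noiseless inputs; with noisy sample covariances it acts as a structure-aware denoiser.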

    Assessing Protein Conformational Sampling Methods Based on Bivariate Lag-Distributions of Backbone Angles

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? And what is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations.
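The core pairing of a nonparametric lag-distribution estimate with an SVD can be sketched as below: estimate the bivariate distribution of an angle sequence at a given lag with a 2-D histogram, then examine the singular values of that matrix. The bin count and degree range are illustrative choices, not the authors' settings.

```python
import numpy as np

def lag_svd(angles, lag, bins=36):
    """Sketch of a lag-distribution SVD: estimate the joint distribution
    of (angle_t, angle_{t+lag}) with a 2-D histogram over [-180, 180)
    degrees, then return the singular values of the histogram matrix.
    A spectrum dominated by the first singular value suggests the two
    angles are close to independent at this lag."""
    x, y = angles[:-lag], angles[lag:]
    H, _, _ = np.histogram2d(x, y, bins=bins,
                             range=[[-180, 180], [-180, 180]])
    H /= H.sum()                         # normalize to a probability mass
    return np.linalg.svd(H, compute_uv=False)
```

Tracking how quickly this spectrum flattens as the lag grows is one way to gauge the order of local sequence–structure dependency a sampling method should model.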

    Integrating Data Transformation in Principal Components Analysis

    Principal component analysis (PCA) is a popular dimension-reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such a transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples. Supplementary materials for this article are available online.
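As a simplified stand-in for the paper's approach, the sketch below picks a Box-Cox power transform per column by maximum likelihood and then applies ordinary PCA via the SVD. The two-step structure is an assumption for illustration only; the paper integrates the transformation choice and PCA into a single profile likelihood.

```python
import numpy as np

def boxcox_mle(x, lambdas=np.linspace(-2, 2, 81)):
    """Pick the Box-Cox parameter maximizing the Gaussian profile
    log-likelihood of the transformed column (x must be positive)."""
    logx = np.log(x)
    best, best_ll = None, -np.inf
    for lam in lambdas:
        y = logx if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam
        ll = -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * logx.sum()
        if ll > best_ll:
            best, best_ll = lam, ll
    return best

def transform_then_pca(X, n_components):
    """Two-step stand-in: Box-Cox each column by MLE, then PCA by SVD.
    (The paper's method estimates the transformation and the principal
    components jointly instead.)"""
    cols = []
    for j in range(X.shape[1]):
        lam = boxcox_mle(X[:, j])
        xj = X[:, j]
        cols.append(np.log(xj) if abs(lam) < 1e-12
                    else (xj ** lam - 1.0) / lam)
    Xt = np.column_stack(cols)
    Xc = Xt - Xt.mean(axis=0)            # center the transformed data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return scores, Vt[:n_components]
```

On log-normal columns this recovers a transform close to the log, after which the usual PCA assumptions are far more plausible than on the raw skewed data.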

    Modified Linear Projection for Large Spatial Data Sets

    Recent developments in engineering techniques for spatial data collection, such as geographic information systems, have resulted in an increasing need for methods to analyze large spatial data sets. These sorts of data sets can be found in various fields of the natural and social sciences. However, model fitting and spatial prediction using these large spatial data sets are impractically time-consuming, because of the necessary matrix inversions. Various methods have been developed to deal with this problem, including a reduced rank approach and a sparse matrix approximation. In this paper, we propose a modification to an existing reduced rank approach to capture both the large- and small-scale spatial variations effectively. We have used simulated examples and an empirical data analysis to demonstrate that our proposed approach consistently performs well when compared with other methods. In particular, the performance of our new method does not depend on the dependence properties of the spatial covariance functions. (Comment: 29 pages, 5 figures, 4 tables.)
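The inversion bottleneck that reduced rank approaches target can be illustrated with the Woodbury identity, which replaces an n x n solve by an r x r one when the covariance has the low-rank-plus-nugget form B K B^T + sigma^2 I. The basis matrix B and noise variance here are hypothetical placeholders, not the paper's model:

```python
import numpy as np

def woodbury_solve(B, K, noise_var, y):
    """Solve (B K B^T + noise_var * I) x = y via the Woodbury identity,
    inverting only an r x r matrix.  B (n x r) maps r basis functions
    or knots to the n observation sites -- the structure that makes
    reduced-rank spatial prediction tractable for large n."""
    small = np.linalg.inv(K) + (B.T @ B) / noise_var       # r x r system
    correction = B @ np.linalg.solve(small, B.T @ y) / noise_var ** 2
    return y / noise_var - correction
```

Direct inversion costs O(n^3); with this identity the dominant costs are O(n r^2) products and one O(r^3) solve, which is what lets kriging-type predictors scale to large spatial data sets.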