
    Singular random matrix decompositions: distributions.

    Assuming that Y has a singular matrix variate elliptically contoured distribution with respect to the Hausdorff measure, the distributions of several matrices associated with the QR, modified QR, SV, and polar decompositions of the matrix Y are determined, for central and non-central, non-singular and singular cases, as well as their relationship to the singular and non-singular generalized Wishart and pseudo-Wishart distributions. We present a particular example for the Karhunen-Loève decomposition. Some of these results are also applied to two particular subfamilies of elliptical distributions: the singular matrix variate normal distribution and the singular matrix variate symmetric Pearson type VII distribution.
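
    As an illustrative sketch (not from the paper): the decompositions whose factor distributions are studied above can be computed numerically for a rank-deficient ("singular") Gaussian matrix Y. All names below are hypothetical example choices.

```python
import numpy as np
from scipy.linalg import qr, svd, polar

# Toy example: a rank-deficient random matrix built from Gaussian factors,
# echoing the singular case treated in the abstract above.
rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank 2 < 3

Q, R = qr(Y, mode='economic')           # QR:    Y = Q R
U, s, Vt = svd(Y, full_matrices=False)  # SVD:   Y = U diag(s) V^T
W, P = polar(Y)                         # polar: Y = W P, with P symmetric PSD

assert np.allclose(Q @ R, Y)
assert np.allclose((U * s) @ Vt, Y)
assert np.allclose(W @ P, Y)
assert s[2] < 1e-10   # rank deficiency shows up as a (numerically) zero singular value
```

    The paper characterizes the probability distributions of factors such as Q, R, s, and P when Y is elliptically contoured; the snippet only shows the deterministic decompositions themselves.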

    Multivariate Analysis of Mixed Data: The R Package PCAmixdata

    Mixed data arise when observations are described by a mixture of numerical and categorical variables. The R package PCAmixdata extends standard multivariate analysis methods to incorporate this type of data. The key methods included in the package are principal component analysis for mixed data (PCAmix), varimax-like orthogonal rotation for PCAmix, and multiple factor analysis for mixed multi-table data. This paper gives a concise presentation of the three algorithms, with details that help the user interpret the graphical and numerical outputs of the corresponding R functions. The three main methods are illustrated on a real dataset composed of four data tables characterizing living conditions in different municipalities of the Gironde region of southwest France.
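
    PCAmixdata is an R package; as a rough, hypothetical illustration (not the package's code), the core idea behind PCA for mixed data can be sketched in Python: standardize the numerical columns, encode the categorical ones as centered indicators, and run a PCA via SVD on the combined matrix. PCAmix additionally weights the indicator columns by category frequencies, which this toy version omits.

```python
import numpy as np

# Hypothetical mixed dataset: two numerical variables and one categorical one.
num = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5], [4.0, 3.0]])
cat = np.array(['a', 'b', 'a', 'b'])

Z_num = (num - num.mean(axis=0)) / num.std(axis=0)               # standardized numerics
levels = sorted(set(cat))
Z_cat = np.array([[float(c == l) for l in levels] for c in cat]) # one-hot indicators
Z_cat -= Z_cat.mean(axis=0)                                      # centered

Z = np.hstack([Z_num, Z_cat])                                    # combined data matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                                                   # principal component scores
```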

    Efficient Orthogonal Tensor Decomposition, with an Application to Latent Variable Model Learning

    Decomposing tensors into orthogonal factors is a well-known task in statistics, machine learning, and signal processing. We study orthogonal outer product decompositions, in which the factors of the summands are required to be orthogonal across summands, by relating this orthogonal decomposition to the singular value decompositions of the tensor's flattenings. We show that having such an orthogonal decomposition is a non-trivial assumption on a tensor, and that when it exists it is unique (up to natural symmetries) and can be obtained efficiently and reliably by a sequence of singular value decompositions. We also demonstrate how the factoring algorithm can be applied to parameter identification in latent variable and mixture models.
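
    The link between an orthogonal decomposition and the SVDs of the flattenings can be sketched numerically (an illustrative toy, not the paper's algorithm): for a symmetric 3-tensor with orthonormal factors, the left singular vectors of the mode-1 flattening recover the factors up to sign, and the singular values recover the weights.

```python
import numpy as np

# Build an orthogonally decomposable symmetric 3-tensor
# T = sum_i lam_i * a_i (x) a_i (x) a_i with orthonormal factors a_i.
rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((4, 3)))  # columns a_i: orthonormal factors
lam = np.array([3.0, 2.0, 1.0])                   # distinct positive weights

T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)
T1 = T.reshape(4, 16)                             # mode-1 flattening
U, s, Vt = np.linalg.svd(T1)

assert np.allclose(s[:3], lam)                    # singular values = weights
# left singular vectors match the factors up to sign:
assert np.allclose(np.abs(U[:, :3].T @ A), np.eye(3), atol=1e-8)
```

    This works because the flattened rank-one pieces a_i (a_i ⊗ a_i)^T have orthonormal left and right vectors, so the flattening's SVD is exactly the orthogonal decomposition.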

    Nonparametric Estimation of Multi-View Latent Variable Models

    Spectral methods have greatly advanced the estimation of latent variable models, generating a sequence of novel and efficient algorithms with strong theoretical guarantees. However, current spectral algorithms are largely restricted to mixtures of discrete or Gaussian distributions. In this paper, we propose a kernel method for learning multi-view latent variable models that allows each mixture component to be nonparametric. The key idea is to embed the joint distribution of a multi-view latent variable model into a reproducing kernel Hilbert space and then recover the latent parameters using a robust tensor power method. We establish that the sample complexity of the proposed method is quadratic in the number of latent components and a low-order polynomial in the other relevant parameters. Thus, our nonparametric tensor approach to learning latent variable models enjoys good sample and computational efficiency. Moreover, the nonparametric tensor power method compares favorably to the EM algorithm and other existing spectral algorithms in our experiments.
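
    The tensor power iteration at the heart of such methods can be sketched in a few lines (a minimal illustration, not the paper's robust kernel variant, which adds random restarts and deflation): repeatedly map x to T(I, x, x) and normalize. On an orthogonally decomposable tensor this converges to one of the factors, and contracting T against it three times recovers the corresponding weight.

```python
import numpy as np

# Toy orthogonally decomposable tensor with known factors and weights.
rng = np.random.default_rng(2)
A, _ = np.linalg.qr(rng.standard_normal((4, 3)))  # orthonormal factor columns
lam = np.array([3.0, 2.0, 1.0])
T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)

# Plain tensor power iteration from a random start.
x = rng.standard_normal(4)
x /= np.linalg.norm(x)
for _ in range(50):
    x = np.einsum('abc,b,c->a', T, x, x)          # x <- T(I, x, x)
    x /= np.linalg.norm(x)

weight = np.einsum('abc,a,b,c->', T, x, x, x)     # recovered weight lam_i
```

    Which factor the iteration converges to depends on the random start; the robust variant in the paper restarts many times and deflates T to recover all components.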