6,387 research outputs found

    Incremental Principal Component Analysis: Exact Implementation and Continuity Corrections

    Full text link
    This paper describes some applications of an incremental implementation of principal component analysis (PCA). The algorithm updates the matrix of transformation coefficients on-line for each new sample, without the need to keep all the samples in memory. The algorithm is formally equivalent to the usual batch version, in the sense that, given a sample set, the transformation coefficients at the end of the process are the same. The implications of applying PCA in real time are discussed with the help of data-analysis examples. In particular, we focus on the problem of the continuity of the principal components (PCs) during an on-line analysis. Comment: accepted at http://www.icinco.org
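    As a rough illustration of the idea in this abstract, the sketch below maintains a running mean and scatter matrix per sample and re-diagonalizes on demand, so the recovered components match what batch PCA would give on the samples seen so far. It is a minimal sketch under that assumption, with hypothetical names, not the authors' implementation.

```python
import numpy as np

class ExactIncrementalPCA:
    """Toy exact incremental PCA: update a running mean and scatter
    matrix per sample, then re-diagonalize.  Equivalent to batch PCA on
    the samples seen so far (illustrative sketch, not the paper's code)."""

    def __init__(self, dim, n_components):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))  # sum of centered outer products
        self.k = n_components

    def partial_fit(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Welford-style rank-1 update; keeps the scatter matrix exact
        self.scatter += np.outer(delta, x - self.mean)

    def components(self):
        cov = self.scatter / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)          # eigendecomposition of the covariance
        order = np.argsort(vals)[::-1][:self.k]   # keep the k largest eigenvalues
        return vecs[:, order], vals[order]

# Usage: stream the samples one at a time, never storing them all.
rng = np.random.default_rng(0)
ipca = ExactIncrementalPCA(dim=5, n_components=2)
for sample in rng.normal(size=(200, 5)):
    ipca.partial_fit(sample)
pcs, variances = ipca.components()
```

    Between consecutive updates the recovered eigenvectors can flip sign or change order, which is the kind of continuity problem the abstract highlights for on-line analysis.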

    Manifold Based Deep Learning: Advances and Machine Learning Applications

    Get PDF
    Manifolds are topological spaces that are locally Euclidean and find applications in dimensionality reduction, subspace learning, visual domain adaptation, clustering, and more. In this dissertation, we propose a framework for linear dimensionality reduction called proxy matrix optimization (PMO) that uses the Grassmann manifold for optimizing over orthogonal matrix manifolds. PMO is an iterative and flexible method that finds the lower-dimensional projections for various linear dimensionality reduction methods by changing the objective function. PMO is suitable for Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Maximum Autocorrelation Factors (MAF), and Locality Preserving Projections (LPP). We extend PMO to incorporate robust Lp-norm versions of PCA and LDA, which use fractional p-norms, making them more robust to noisy data and outliers. The PMO method is designed to be realized as a layer in a neural network for maximum benefit. To that end, incremental versions of PCA, LDA, and LPP are included in the PMO framework for problems where the data are not all available at once. Next, we explore the topic of domain shift in visual domain adaptation by combining concepts from spherical manifolds and deep learning. We investigate domain shift, which quantifies how well a model trained on a source domain adapts to a similar target domain, with a metric called Spherical Optimal Transport (SpOT). We adopt the spherical manifold along with an orthogonal projection loss to obtain the features from the source and target domains. We then use optimal transport with the cosine distance between the features as a way to measure the gap between the domains. Our experiments with domain adaptation datasets show that SpOT does better than existing measures for quantifying domain shift and demonstrates a better correlation with the gain of transfer across domains.
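    To give a flavor of optimizing a projection over an orthogonal matrix manifold, the sketch below runs gradient ascent on the PCA objective trace(PᵀCP) with a QR retraction to keep P orthonormal. It is only an illustration of that general style of manifold-constrained projection learning; the function name and parameters are hypothetical and this is not the dissertation's PMO code.

```python
import numpy as np

def pca_on_orthogonal_manifold(X, k, lr=0.05, iters=500, seed=0):
    """Gradient ascent of trace(P^T C P) over orthonormal P (d x k),
    with a QR retraction back onto the manifold after every step.
    Illustrative sketch of manifold-constrained projection learning."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(X) - 1)             # sample covariance
    P, _ = np.linalg.qr(rng.normal(size=(X.shape[1], k)))
    for _ in range(iters):
        G = 2.0 * C @ P                       # Euclidean gradient of the objective
        G -= P @ (P.T @ G)                    # project onto the tangent space (Grassmann-style)
        P, _ = np.linalg.qr(P + lr * G)       # retraction: re-orthonormalize
    return P                                  # approximately spans the top-k PCA subspace

# Usage
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))
P = pca_on_orthogonal_manifold(X, k=3)
```

    Swapping the quadratic objective for a different criterion (e.g., an LDA-style one) changes which linear method the projection recovers, which mirrors the abstract's description of obtaining different dimensionality reduction methods by changing the objective function within one optimization scheme.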

    An overview of the proper generalized decomposition with applications in computational rheology

    Get PDF
    We review the foundations and applications of the proper generalized decomposition (PGD), a powerful model reduction technique that computes, a priori and by means of successive enrichment, a separated representation of the unknown field. The computational complexity of the PGD scales linearly with the dimension of the space in which the model is defined, in marked contrast with the exponential scaling of standard grid-based methods. First introduced in the context of computational rheology by Ammar et al. [3] and [4], the PGD has since been further developed and applied in a variety of applications ranging from the solution of the Schrödinger equation of quantum mechanics to the analysis of laminate composites. In this paper, we illustrate the use of the PGD in four problem categories related to computational rheology: (i) the direct solution of the Fokker-Planck equation for complex fluids in configuration spaces of high dimension, (ii) the development of very efficient non-incremental algorithms for transient problems, (iii) the fully three-dimensional solution of problems defined in degenerate plate- or shell-like domains often encountered in polymer processing or composites manufacturing, and finally (iv) the solution of multidimensional parametric models obtained by introducing various sources of problem variability as additional coordinates.
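    To give a flavor of the separated representation and successive enrichment described above, here is a minimal PGD-style sketch for a 2D Poisson problem with a separable source, discretized by finite differences and enriched one mode at a time through an alternating-direction fixed point. It is an illustration under simplifying assumptions, with hypothetical names, and not one of the algorithms reviewed in the paper.

```python
import numpy as np

def pgd_poisson_2d(n=60, n_modes=8, inner_iters=50):
    """PGD-style sketch: solve -Laplace(u) = 1 on the unit square (zero
    boundary values) with u(x, y) ~ sum_k X_k(x) * Y_k(y), adding one
    separated mode at a time by an alternating-direction fixed point."""
    h = 1.0 / (n + 1)
    # 1D finite-difference stiffness matrix (negative second difference)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    fx, fy = np.ones(n), np.ones(n)           # separable source f(x, y) = fx(x) * fy(y) = 1
    Xs, Ys = [], []
    for _ in range(n_modes):                  # successive enrichment
        R, S = np.ones(n), np.ones(n)
        for _ in range(inner_iters):          # alternating fixed point for the new mode
            # Fix S and solve a 1D problem for R
            rhs = fx * (fy @ S) - sum((Y @ S) * (A @ X) + (Y @ A @ S) * X
                                      for X, Y in zip(Xs, Ys))
            R = np.linalg.solve((S @ S) * A + (S @ A @ S) * np.eye(n), rhs)
            # Fix R and solve a 1D problem for S
            rhs = fy * (fx @ R) - sum((X @ R) * (A @ Y) + (X @ A @ R) * Y
                                      for X, Y in zip(Xs, Ys))
            S = np.linalg.solve((R @ R) * A + (R @ A @ R) * np.eye(n), rhs)
        Xs.append(R)
        Ys.append(S)
    # Assemble the full 2D field only for inspection; the PGD itself never forms it.
    return sum(np.outer(X, Y) for X, Y in zip(Xs, Ys))

u = pgd_poisson_2d()
```

    Each enrichment step only requires solving one-dimensional problems, which is the mechanism behind the linear, rather than exponential, scaling with the dimension of the space mentioned in the abstract.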