    Randomized Local Model Order Reduction

    In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. These spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation that provably converges at a nearly optimal rate, and can be generated at close to optimal computational complexity. In many localized model order reduction approaches, such as the generalized finite element method, static condensation procedures, and the multiscale finite element method, local approximation spaces can be constructed by approximating the range of a suitably defined transfer operator that acts on the space of local solutions of the PDE. Optimal local approximation spaces that in general yield an exponentially convergent approximation are given by the left singular vectors of this transfer operator [I. Babuška and R. Lipton 2011; K. Smetana and A. T. Patera 2016]. However, the direct calculation of these singular vectors is computationally very expensive. In this paper, we propose an adaptive randomized algorithm, based on methods from randomized linear algebra [N. Halko et al. 2011], which constructs a local reduced space approximating the range of the transfer operator and thus the optimal local approximation spaces. The adaptive algorithm relies on a probabilistic a posteriori error estimator, which we prove to be both efficient and reliable with high probability. Several numerical experiments confirm the theoretical findings.

    Comment: 31 pages, 14 figures, 1 table, 1 algorithm
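
    The abstract does not spell out the adaptive algorithm; as a rough illustration of the randomized range-finding idea it builds on [N. Halko et al. 2011], here is a minimal NumPy sketch, assuming the transfer operator is available as a matrix T, and with the tolerance tol and probe count p as illustrative parameters, not the paper's:

```python
import numpy as np

def adaptive_randomized_range(T, tol, p=10, seed=None):
    """Minimal sketch of an adaptive randomized range finder in the
    spirit of Halko et al. (2011): grow an orthonormal basis Q until
    ||T - Q Q^T T|| <= tol holds with high probability.

    T is a dense matrix standing in for the transfer operator; tol
    and p (Gaussian probes per step) are illustrative parameters.
    """
    rng = np.random.default_rng(seed)
    m, n = T.shape
    Q = np.zeros((m, 0))
    # Probabilistic a posteriori estimate: the largest residual norm
    # over p Gaussian probes bounds the true error up to this factor.
    c = 10.0 * np.sqrt(2.0 / np.pi)
    while Q.shape[1] < min(m, n):
        Y = T @ rng.standard_normal((n, p))   # random samples of range(T)
        Y -= Q @ (Q.T @ Y)                    # residual w.r.t. current basis
        if c * np.linalg.norm(Y, axis=0).max() <= tol:
            break
        Qnew, _ = np.linalg.qr(Y)             # orthonormalize new directions
        Q = np.hstack([Q, Qnew])
    return Q
```

    In the localized setting described in the abstract, each product T @ omega would correspond to one local solve of the PDE with a random boundary condition, so the probe count directly controls the number of local solves.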

    Randomized Dimension Reduction on Massive Data

    Scalability of statistical estimators is of increasing importance in modern applications, and dimension reduction is often used to extract relevant information from data. A variety of popular dimension reduction approaches can be framed as symmetric generalized eigendecomposition problems. In this paper we outline how taking into account the low-rank structure assumption implicit in these dimension reduction approaches provides both computational and statistical advantages. We adapt recent randomized low-rank approximation algorithms to provide efficient solutions to three dimension reduction methods: Principal Component Analysis (PCA), Sliced Inverse Regression (SIR), and Localized Sliced Inverse Regression (LSIR). A key observation in this paper is that randomization serves a dual role, improving both computational and statistical performance. This point is highlighted in our experiments on real and simulated data.

    Comment: 31 pages, 6 figures. Key words: dimension reduction, generalized eigendecomposition, low-rank, supervised, inverse regression, random projections, randomized algorithms, Krylov subspace method
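
    To hint at how such randomized low-rank approximations plug into PCA (the simplest of the three methods), here is a minimal sketch assuming a data matrix X of n samples by d features; it is a generic randomized SVD in the Halko et al. style, not the specific adapted algorithms of the paper:

```python
import numpy as np

def randomized_pca(X, k, p=5, seed=None):
    """Approximate the top-k principal components of X (n samples x
    d features) via a randomized low-rank SVD, in the spirit of
    Halko et al. (2011); p is the oversampling parameter.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                   # center the data
    Omega = rng.standard_normal((Xc.shape[1], k + p))
    Q, _ = np.linalg.qr(Xc @ Omega)           # basis for range(Xc)
    B = Q.T @ Xc                              # small (k+p) x d matrix
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    # Top-k loading vectors and their explained variances.
    return Vt[:k], (s[:k] ** 2) / (Xc.shape[0] - 1)

# Illustrative usage: components, variances = randomized_pca(X, k=10)
```

    The cost is dominated by two passes over X with k + p probe vectors, rather than a full eigendecomposition of the d x d covariance matrix.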

    Randomized Dimensionality Reduction for k-means Clustering

    We study the topic of dimensionality reduction for k-means clustering. Dimensionality reduction encompasses the union of two approaches: feature selection and feature extraction. A feature selection based algorithm for k-means clustering selects a small subset of the input features and then applies k-means clustering on the selected features. A feature extraction based algorithm for k-means clustering constructs a small set of new artificial features and then applies k-means clustering on the constructed features. Despite the significance of k-means clustering, as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for k-means clustering are not known. On the other hand, two provably accurate feature extraction methods for k-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress towards a better understanding of dimensionality reduction for k-means clustering. Namely, we present the first provably accurate feature selection method for k-means clustering and, in addition, two feature extraction methods. The first feature extraction method is based on random projections and improves upon the existing results in terms of time complexity and the number of features that need to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal k-means objective value.

    Comment: IEEE Transactions on Information Theory, to appear
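
    To make the random-projection route concrete, here is a minimal sketch of feature extraction by Gaussian random projection followed by k-means; the target dimension r and the use of scikit-learn's KMeans are illustrative choices, and the sketch does not reproduce the paper's precise projection dimension or approximation guarantees:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_after_random_projection(X, k, r, seed=None):
    """Feature extraction for k-means via random projection: map the
    n x d data matrix X down to r artificial features, then cluster.
    A sketch of the general technique; r is an illustrative choice,
    not the paper's bound on the number of features.
    """
    rng = np.random.default_rng(seed)
    # A scaled Gaussian projection approximately preserves pairwise
    # distances, and hence the k-means objective, for large enough r.
    R = rng.standard_normal((X.shape[1], r)) / np.sqrt(r)
    return KMeans(n_clusters=k, n_init=10).fit_predict(X @ R)
```

    The clustering step then runs on an n x r matrix instead of the original n x d data, which is where the time-complexity savings come from.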