10,677 research outputs found

    Rank-1 Tensor Approximation Methods and Application to Deflation

    Full text link
    Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since rank-1 approximations are rather easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves on any other rank-1 approximation method. Third, we describe a probabilistic framework that allows us to study the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates the theoretical results and demonstrates the efficiency of DCPD algorithms compared to competing ones.
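
    As a rough illustration of the deflation idea (not the authors' algebraic rank-1 method), the sketch below peels off successive rank-1 terms of a three-way NumPy tensor using a basic alternating power iteration; the function names and iteration counts are placeholders chosen for this example.

```python
import numpy as np

def best_rank1(T, n_iter=100, seed=0):
    """Alternating power iterations for a rank-1 term lam * a (x) b (x) c
    of a 3-way tensor T (a simple sketch, not the paper's algebraic method)."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in (I, J, K))
    a, b, c = a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

def deflation_cp(T, rank, n_iter=100):
    """Greedy deflation: fit a rank-1 term, subtract it, and repeat."""
    factors, residual = [], T.copy()
    for _ in range(rank):
        lam, a, b, c = best_rank1(residual, n_iter)
        factors.append((lam, a, b, c))
        residual = residual - lam * np.einsum('i,j,k->ijk', a, b, c)
    return factors, residual  # residual holds what the rank-1 terms missed
```

    In general, subtracting the best rank-1 term does not recover an exact CP decomposition; characterizing when and how fast such deflation converges is precisely what the paper's probabilistic framework addresses.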

    Block Circulant and Toeplitz Structures in the Linearized Hartree–Fock Equation on Finite Lattices: Tensor Approach

    Get PDF
    This paper introduces and analyses a new grid-based tensor approach to the approximate solution of the elliptic eigenvalue problem for 3D lattice-structured systems. We consider the linearized Hartree-Fock equation over a spatial $L_1\times L_2\times L_3$ lattice for both periodic and non-periodic problem settings, discretized in a localized Gaussian-type orbitals basis. In the periodic case, the Galerkin system matrix obeys a three-level block-circulant structure that allows FFT-based diagonalization, while for finite extended systems in a box (Dirichlet boundary conditions) we arrive at a perturbed block-Toeplitz representation providing fast matrix-vector multiplication and low storage size. The proposed grid-based tensor techniques offer twofold benefits: (a) the entries of the Fock matrix are computed by 1D operations using low-rank tensors represented on a 3D grid, and (b) in the periodic case the low-rank tensor structure in the diagonal blocks of the Fock matrix in Fourier space reduces the conventional 3D FFT to a product of 1D FFTs. Lattice-type systems in a box with Dirichlet boundary conditions are treated numerically by our previous tensor solver for single molecules, which makes calculations on rather large $L_1\times L_2\times L_3$ lattices possible due to the reduced numerical cost for 3D problems. Numerical simulations for both box-type and periodic $L\times 1\times 1$ lattice chains in a 3D rectangular "tube" with $L$ up to several hundred confirm the theoretical complexity bounds for the block-structured eigenvalue solvers in the limit of large $L$. Comment: 30 pages, 12 figures. arXiv admin note: substantial text overlap with arXiv:1408.383
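
    The one-level analogue of the two structural facts used above can be checked in a few lines: a circulant matrix is diagonalized by the discrete Fourier transform, so its matrix-vector product reduces to elementwise work in Fourier space, and a 3D FFT factors into 1D FFTs along each axis. A minimal NumPy sanity check (my illustration, not the paper's block-structured solver):

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x, where C is the circulant matrix whose first column is c.
    C = F^{-1} diag(fft(c)) F, so the product needs only 1D FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(1)
c, x = rng.standard_normal(8), rng.standard_normal(8)
C = np.array([np.roll(c, k) for k in range(8)]).T  # explicit circulant matrix
assert np.allclose(C @ x, circulant_matvec(c, x))

# A 3D FFT is the composition of 1D FFTs along each axis, which is what lets
# separable (low-rank) tensor data be transformed by 1D operations only.
T = rng.standard_normal((4, 5, 6))
F1 = np.fft.fft(np.fft.fft(np.fft.fft(T, axis=0), axis=1), axis=2)
assert np.allclose(F1, np.fft.fftn(T))
```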

    Learning and Classification of Robust Low-Dimensional Spaces: Sparse and Low-Rank Representations

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2017. Songhwai Oh. Learning a subspace structure based on sparse or low-rank representation has gained much attention and has been widely used over the past decade in the machine learning, signal processing, computer vision, and robotics literature to model a wide range of natural phenomena. Sparse representation is a powerful tool for high-dimensional data such as images, where the goal is to represent or compress the cumbersome data using a few representative samples. Low-rank representation is a generalization of sparse representation to 2D space. Behind the successful outcomes, many efforts have been made to learn sparse or low-rank representations efficiently. However, they are still inefficient for complex data structures and lack robustness in the presence of various types of noise, including outliers and missing data, because many existing algorithms relax the ideal optimization problem to a tractable one without considering computational and memory complexities. Thus, it is important to use a representation algorithm that is efficiently solvable and robust against unwanted corruptions. In this dissertation, our main goal is to develop algorithms with both robustness and efficiency under noisy environments. As for sparse representation, most of the optimization problems are relaxed to convex ones based on surrogate measures, such as the l1-norm, to resolve the computational intractability and high noise sensitivity of the original sparse representation problem based on the l0-norm. However, if the system of interest, apart from the sparsity measure, is inherently nonconvex, then using a convex sparsity measure may not be the best choice for the problem. From this perspective, we propose desirable criteria for a good nonconvex sparsity measure and suggest a corresponding family of measures. The proposed family admits a simple measure that enables efficient computation and embraces the benefits of both the l0- and l1-norms, and most importantly, its gradient vanishes slowly, unlike that of the l0-norm, which is suitable from an optimization perspective. For low-rank representation, we first present an efficient l1-norm based low-rank matrix approximation algorithm using the proposed alternating rectified gradient methods to solve an l1-norm minimization problem, since conventional algorithms are very slow at solving the l1-norm based alternating minimization problem. The proposed methods try to find an optimal direction under a constraint that limits the search domain, avoiding the difficulty that arises from the ambiguity in representing the two optimization variables. This is extended to an algorithm with an explicit smoothness regularizer and an orthogonality constraint for better efficiency, solved under the augmented Lagrangian framework. To give a more stable solution with flexible rank estimation in the presence of heavy corruptions, we present a new solution based on the elastic-net regularization of singular values, which allows a faster algorithm than existing rank minimization methods without any heavy operations and is more stable than state-of-the-art low-rank approximation algorithms due to its strong convexity. As a result, the proposed method leads to a holistic approach that enables both rank minimization and bilinear factorization.
Moreover, as an extension of the previous methods, which operate on an unstructured matrix, we apply recent advances in rank minimization to a structured matrix for robust kernel subspace estimation under noisy scenarios. Last but not least, we extend the low-rank approximation problem, which assumes a single subspace, to a problem whose data lie in a union of multiple subspaces, which is closely related to subspace clustering. While many recent studies are based on sparse or low-rank representation, the grouping effect among similar samples has not often been considered together with sparse or low-rank representation. Thus, we propose robust group subspace clustering algorithms based on sparse and low-rank representation with explicit subspace grouping. To resolve the fundamental issue of the computational complexity of existing subspace clustering algorithms, we suggest a fully scalable low-rank subspace clustering approach, which achieves linear complexity in the number of samples. Extensive experimental results on various applications, including computer vision and robotics, using benchmark and real-world data sets verify that our suggested solutions to the existing issues of sparse and low-rank representations are considerably robust, effective, and practically applicable.

Contents:
1 Introduction
  1.1 Main Challenges
  1.2 Organization of the Dissertation
2 Related Work
  2.1 Sparse Representation
  2.2 Low-Rank Representation
    2.2.1 Low-rank matrix approximation
    2.2.2 Robust principal component analysis
  2.3 Subspace Clustering
    2.3.1 Sparse subspace clustering
    2.3.2 Low-rank subspace clustering
    2.3.3 Scalable subspace clustering
  2.4 Gaussian Process Regression
3 Efficient Nonconvex Sparse Representation
  3.1 Analysis of the l0-norm approximation
    3.1.1 Notations
    3.1.2 Desirable criteria for a nonconvex measure
    3.1.3 A representative family of measures: SVG
  3.2 The Proposed Nonconvex Sparsity Measure
    3.2.1 Choosing a simple one among the SVG family
    3.2.2 Relationships with other sparsity measures
    3.2.3 More analysis on SVG
    3.2.4 Learning sparse representations via SVG
  3.3 Experimental Results
    3.3.1 Evaluation for nonconvex sparsity measures
    3.3.2 Low-rank approximation of matrices
    3.3.3 Sparse coding
    3.3.4 Subspace clustering
    3.3.5 Parameter Analysis
  3.4 Summary
4 Robust Fixed Low-Rank Representations
  4.1 The Alternating Rectified Gradient Method for l1 Minimization
    4.1.1 l1-ARGA as an approximation method
    4.1.2 l1-ARGD as a dual method
    4.1.3 Experimental results
  4.2 Smooth Regularized Fixed-Rank Representation
    4.2.1 Robust orthogonal matrix factorization (ROMF)
    4.2.2 Rank estimation for ROMF (ROMF-RE)
    4.2.3 Experimental results
  4.3 Structured Low-Rank Representation
    4.3.1 Kernel subspace learning
    4.3.2 Structured kernel subspace learning in GPR
    4.3.3 Experimental results
  4.4 Summary
5 Robust Lower-Rank Subspace Representations
  5.1 Elastic-Net Subspace Representation
  5.2 Robust Elastic-Net Subspace Learning
    5.2.1 Problem formulation
    5.2.2 Algorithm: FactEN
  5.3 Joint Subspace Estimation and Clustering
    5.3.1 Problem formulation
    5.3.2 Algorithm: ClustEN
  5.4 Experiments
    5.4.1 Subspace learning problems
    5.4.2 Subspace clustering problems
  5.5 Summary
6 Robust Group Subspace Representations
  6.1 Group Subspace Representation
  6.2 Group Sparse Representation (GSR)
    6.2.1 GSR with noisy data
    6.2.2 GSR with corrupted data
  6.3 Group Low-Rank Representation (GLR)
    6.3.1 GLR with noisy or corrupted data
  6.4 Experimental Results
  6.5 Summary
7 Scalable Low-Rank Subspace Clustering
  7.1 Incremental Affinity Representation
  7.2 End-to-End Scalable Subspace Clustering
    7.2.1 Robust incremental summary representation
    7.2.2 Efficient affinity construction
    7.2.3 An end-to-end scalable learning pipeline
    7.2.4 Nonlinear extension for SLR
  7.3 Experimental Results
    7.3.1 Synthetic data
    7.3.2 Motion segmentation
    7.3.3 Face clustering
    7.3.4 Handwritten digits clustering
    7.3.5 Action clustering
  7.4 Summary
8 Conclusion and Future Work
Appendices
  A Derivations of the LRA Problems
  B Proof of Lemma 1
  C Proof of Proposition 1
  D Proof of Theorem 1
  E Proof of Theorem 2
  F Proof of Theorems in Chapter 6
    F.1 Proof of Theorem 3
    F.2 Proof of Theorem 4
    F.3 Proof of Theorem 5
  G Proof of Theorems in Chapter 7
    G.1 Proof of Theorem 6
    G.2 Proof of Theorem 7
Bibliography
Abstract (in Korean)
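
The elastic-net regularization of singular values mentioned in the abstract above combines a nuclear-norm (l1 on singular values) term with a squared Frobenius (l2) term; its proximal operator soft-thresholds the singular values and then rescales them. The following is a minimal sketch of that shrinkage step only, under my own notation, not the dissertation's FactEN or ClustEN algorithms:

```python
import numpy as np

def elastic_net_singular_value_shrinkage(X, lam1, lam2):
    """Proximal operator of lam1*||Y||_* + (lam2/2)*||Y||_F^2 at X:
    soft-threshold the singular values, then shrink by 1/(1 + lam2)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam1, 0.0) / (1.0 + lam2)
    return U @ (s_shrunk[:, None] * Vt)  # rebuild the low-rank estimate
```

In a robust low-rank solver, an operator like this would typically be applied repeatedly inside a proximal-gradient or augmented-Lagrangian loop; the strong convexity contributed by the quadratic term is what stabilizes the solution under heavy corruptions.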

    Algorithms and literate programs for weighted low-rank approximation with missing data

    No full text
    Identification of linear models from data with missing values is posed as a weighted low-rank approximation problem in which the weights associated with the missing values are set to zero. Alternating projections and variable projections methods for solving the resulting problem are outlined and implemented in a literate programming style, using Matlab/Octave's scripting language. The methods are evaluated on synthetic data and real data from the MovieLens data sets.
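
    A rough NumPy analogue of the alternating-projections idea (not the paper's literate Matlab/Octave programs; names, initialization, and iteration count are placeholders): each sweep solves small least-squares problems restricted to the observed entries, which is equivalent to giving the missing entries zero weight.

```python
import numpy as np

def wlra_missing(D, mask, rank, n_iter=200, seed=0):
    """Rank-r approximation P @ L of D fitted on observed entries only
    (mask == 1); missing entries effectively receive zero weight."""
    m, n = D.shape
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((m, rank))
    L = rng.standard_normal((rank, n))
    for _ in range(n_iter):
        # Update each column of L from the observed rows of that column.
        for j in range(n):
            rows = mask[:, j] > 0
            if rows.any():
                L[:, j], *_ = np.linalg.lstsq(P[rows], D[rows, j], rcond=None)
        # Update each row of P from the observed columns of that row.
        for i in range(m):
            cols = mask[i, :] > 0
            if cols.any():
                P[i, :], *_ = np.linalg.lstsq(L[:, cols].T, D[i, cols], rcond=None)
    return P, L
```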

    On statistics, computation and scalability

    Full text link
    How should statistical procedures be designed so as to be scalable computationally to the massive datasets that are increasingly the norm? When coupled with the requirement that an answer to an inferential question be delivered within a certain time budget, this question has significant repercussions for the field of statistics. With the goal of identifying "time-data tradeoffs," we investigate some of the statistical consequences of computational perspectives on scalability, in particular divide-and-conquer methodology and hierarchies of convex relaxations. Published at http://dx.doi.org/10.3150/12-BEJSP17 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
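
    One concrete instance of the divide-and-conquer methodology discussed here is to split a large dataset into blocks, solve the estimation problem on each block, and average the block estimates, trading some statistical efficiency for computation time. The toy sketch below (my illustration, not the paper's formal procedure) does this for ordinary least squares:

```python
import numpy as np

def divide_and_conquer_ols(X, y, n_blocks):
    """Fit OLS on each block of rows and average the coefficient vectors."""
    betas = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        betas.append(beta)
    return np.mean(betas, axis=0)  # aggregated estimate
```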

    A DEIM Induced CUR Factorization

    Full text link
    We derive a CUR matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix $A$, such a factorization provides a low-rank approximate decomposition of the form $A \approx CUR$, where $C$ and $R$ are subsets of the columns and rows of $A$, and $U$ is constructed to make $CUR$ a good approximation. Given a low-rank singular value decomposition $A \approx VSW^T$, the DEIM procedure uses $V$ and $W$ to select the columns and rows of $A$ that form $C$ and $R$. Through an error analysis applicable to a general class of CUR factorizations, we show that the accuracy tracks the optimal approximation error within a factor that depends on the conditioning of submatrices of $V$ and $W$. For large-scale problems, $V$ and $W$ can be approximated using an incremental QR algorithm that makes one pass through $A$. Numerical examples illustrate the favorable performance of the DEIM-CUR method compared to CUR approximations based on leverage scores.
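
    A compact sketch of the construction described above: DEIM greedily selects interpolation indices from the leading singular vectors, those indices pick the rows and columns forming $R$ and $C$, and $U$ is chosen so that $CUR$ approximates $A$. This NumPy illustration assumes the common choice $U = C^{+} A R^{+}$ and is not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def deim(V):
    """Greedy DEIM index selection from the columns of V (n x k)."""
    n, k = V.shape
    idx = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        # Interpolate column j at the chosen indices, then pick the
        # index where the interpolation residual is largest.
        c = np.linalg.solve(V[idx, :j], V[idx, j])
        r = V[:, j] - V[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_cur(A, k):
    """CUR factorization with rows/columns chosen by DEIM applied to the
    leading left/right singular vectors of A."""
    U_svd, s, Vt = np.linalg.svd(A, full_matrices=False)
    V, W = U_svd[:, :k], Vt[:k, :].T        # leading left / right singular vectors
    rows, cols = deim(V), deim(W)           # DEIM on V picks rows, on W picks columns
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # make C @ U @ R close to A
    return C, U, R, rows, cols
```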