
    Approximated RPCA for fast and efficient recovery of corrupted and linearly correlated images and video frames

    This paper presents an approximated Robust Principal Component Analysis (ARPCA) framework for the recovery of a set of linearly correlated images. Our algorithm seeks an optimal solution that decomposes a batch of realistic, unaligned and corrupted images into the sum of a low-rank matrix and a sparse corruption matrix, while simultaneously aligning the images according to the optimal image transformations. This extremely challenging optimization problem is reduced to solving a number of convex programs that minimize the sum of the Frobenius norm and the l1-norm of these matrices, with guaranteed faster convergence than state-of-the-art algorithms. The efficacy of the proposed method is verified with extensive experiments on real and synthetic data.
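    As a rough illustration of the stated objective (not the authors' alignment-aware algorithm), the sketch below alternates the proximal maps of the two regularizers on a fixed data matrix; the function name and parameters are hypothetical.

```python
import numpy as np

def frob_l1_decompose(D, alpha=1.0, lam=0.1, iters=100):
    """Split D into L + S by alternating proximal steps on
    0.5*||D - L - S||_F^2 + alpha*||L||_F + lam*||S||_1."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        # Prox of alpha*||.||_F is block soft-thresholding (shrink toward 0).
        R = D - S
        nrm = np.linalg.norm(R)
        L = max(0.0, 1.0 - alpha / nrm) * R if nrm > 0 else R
        # Prox of lam*||.||_1 is entrywise soft-thresholding.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```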

    Robust 2D Joint Sparse Principal Component Analysis with F-Norm Minimization for Sparse Modelling: 2D-RJSPCA

    © 2018 IEEE. Principal component analysis (PCA) is a widely used method for dimensionality reduction, and many variants have been proposed to improve its robustness. These methods suffer from the fact that PCA forms linear combinations of all input features, which makes complex nonlinear data difficult to interpret; PCA is also sensitive to outliers and cannot extract features consistently, i.e., collectively, so it may still require measuring all input features. 2DPCA based on the 1-norm has recently been used for robust dimensionality reduction in the image domain but remains sensitive to noise. In this paper, we introduce a robust formulation of 2DPCA that centers the data using the optimized mean for two-dimensional joint sparsity, effectively combining the robustness of 2DPCA with the sparsity-inducing lasso regularization. The optimal mean further improves the robustness of joint sparse PCA. The distance in the spatial dimension is measured with the F-norm, while the summation over different data points uses the 1-norm. 2DR-JSPCA imposes joint sparsity constraints on its objective function, and an additional penalty term helps deal with outliers efficiently. Both theoretical and empirical results on six publicly available benchmark datasets show that optimal-mean 2DR-JSPCA provides better performance for dimensionality reduction than non-sparse (2DPCA and 2DPCA-L1) and sparse (SPCA, JSPCA) methods.
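    For context, the baseline these variants build on is plain 2DPCA, which eigendecomposes an image scatter matrix instead of vectorizing images; a minimal sketch (illustrative names, no sparsity, optimal mean, or robust norms) follows.

```python
import numpy as np

def two_dpca(images, k):
    """Plain 2DPCA: project m-by-n images onto the top-k eigenvectors
    of the n-by-n image scatter matrix (L2 distances, no sparsity)."""
    X = np.asarray(images, dtype=float)          # shape (N, m, n)
    M = X.mean(axis=0)                           # ordinary (non-optimized) mean
    G = np.zeros((X.shape[2], X.shape[2]))
    for Xi in X:
        D = Xi - M
        G += D.T @ D                             # accumulate image scatter
    G /= len(X)
    vals, vecs = np.linalg.eigh(G)               # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return [Xi @ W for Xi in X], W               # projected features, basis
```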

    Relaxed 2-D Principal Component Analysis by L_p Norm for Face Recognition

    A relaxed two-dimensional principal component analysis (R2DPCA) approach is proposed for face recognition. Different from 2DPCA, 2DPCA-L1 and G2DPCA, the R2DPCA utilizes the label information (if known) of the training samples to calculate a relaxation vector and assigns a weight to each subset of the training data. A new relaxed scatter matrix is defined, and the computed projection axes are able to increase the accuracy of face recognition. The optimal L_p-norms are selected in a reasonable range. Numerical experiments on practical face databases indicate that the R2DPCA has high generalization ability and can achieve a higher recognition rate than state-of-the-art methods. (Comment: 19 pages, 11 figures)
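    A generic L_p-norm projection iteration in the spirit of G2DPCA (not R2DPCA's label-weighted relaxed scatter, which is specific to the paper) might look like the sketch below; the function name and the fixed-point update are assumptions.

```python
import numpy as np

def lp_2dpca_direction(images, p=1.5, iters=50, eps=1e-12):
    """One projection axis that locally maximizes sum_i ||X_i w||_p^p
    over ||w||_2 = 1 via a generalized power iteration (assumes p >= 1);
    for p = 2 this reduces to power iteration on the image scatter."""
    X = np.asarray(images, dtype=float)           # shape (N, m, n)
    X = X - X.mean(axis=0)                        # center the image stack
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[2])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        g = np.zeros_like(w)
        for Xi in X:
            y = Xi @ w
            g += Xi.T @ (np.sign(y) * np.abs(y) ** (p - 1))  # (sub)gradient
        w = g / max(np.linalg.norm(g), eps)       # project back to the sphere
    return w
```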

    Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint

    The sparsity-constrained rank-one matrix approximation problem is a difficult mathematical optimization problem that arises in a wide array of applications in engineering, machine learning and statistics, and the design of algorithms for it has attracted intensive research activity. We introduce an algorithmic framework, called ConGradU, that unifies a variety of seemingly different algorithms derived from disparate approaches, and allows for deriving new schemes. Building on the old and well-known conditional gradient algorithm, ConGradU is a simplified version with unit step size that yields a generic algorithm in which each iteration either is given by an analytic formula or requires very low computational complexity. Mathematical properties are systematically developed and numerical experiments are given. (Comment: Minor changes. Final version. To appear in SIAM Review.)
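    A minimal instance of the unit-step idea, assuming the problem max x^T A x subject to ||x||_2 = 1 and ||x||_0 <= k: each iteration keeps the k largest-magnitude entries of the gradient A x and renormalizes, which coincides with the truncated power method, one of the schemes such frameworks recover. Names are illustrative.

```python
import numpy as np

def congradu_sparse_rank_one(A, k, iters=100):
    """Unit-step conditional-gradient-style iteration for
    max x^T A x  s.t.  ||x||_2 = 1, ||x||_0 <= k  (A symmetric PSD)."""
    x = np.zeros(A.shape[0])
    x[np.argmax(np.diag(A))] = 1.0               # sparse warm start
    for _ in range(iters):
        g = A @ x                                # gradient of the quadratic
        g[np.argsort(np.abs(g))[:-k]] = 0.0      # keep k largest entries
        nrm = np.linalg.norm(g)
        if nrm == 0:
            break
        x = g / nrm                              # renormalize onto the sphere
    return x
```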

    Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes

    Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain as much of the variance in the data as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with the L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al.; JMLR 11:517--553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster) for each of the 8 problems. Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance and iii) dealing with big-data problems (our cluster code is able to solve a 357 GB problem in about a minute). (Comment: 29 pages, 9 tables, 7 figures; the paper is accompanied by a release of the open-source code '24am'.)
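    A sketch of one of the eight single-unit formulations (L2 variance, L1 penalty) solved by alternating maximization, in the style of the GPower_l1 update the paper proves equivalent; the function below is an illustrative reimplementation, not the released '24am' code.

```python
import numpy as np

def am_sparse_pca_l1(A, gamma, iters=200):
    """Single-unit sparse PCA (L2 variance, L1 penalty) by alternating
    maximization. A is n-by-p with rows as samples; gamma >= 0 trades
    explained variance for sparsity of the loading vector."""
    z = A[:, np.argmax(np.linalg.norm(A, axis=0))].astype(float)
    z /= np.linalg.norm(z)                        # start from best column
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        c = A.T @ z                               # correlations a_j^T z
        x = np.sign(c) * np.maximum(np.abs(c) - gamma, 0.0)  # soft-threshold
        if not x.any():
            return x                              # gamma zeroed all loadings
        z = A @ x
        z /= np.linalg.norm(z)
    return x / np.linalg.norm(x)                  # sparse loading vector
```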

    Covariance Eigenvector Sparsity for Compression and Denoising

    Sparsity in the eigenvectors of signal covariance matrices is exploited in this paper for compression and denoising. Dimensionality reduction (DR) and quantization modules, present in many practical compression schemes such as transform codecs, are designed to capitalize on this form of sparsity and achieve improved reconstruction performance compared to existing sparsity-agnostic codecs. Using training data that may be noisy, a novel sparsity-aware linear DR scheme is developed to fully exploit sparsity in the covariance eigenvectors and form noise-resilient estimates of the principal covariance eigenbasis. Sparsity is effected via norm-one regularization, and the associated minimization problems are solved using computationally efficient coordinate descent iterations. The resulting eigenspace estimator is shown capable of identifying a subset of the unknown support of the eigenspace basis vectors even when the observation noise covariance matrix is unknown, as long as the noise power is sufficiently low. It is proved that the sparsity-aware estimator is asymptotically normal, and that the probability of correctly identifying the signal subspace basis support approaches one as the number of training data grows large. Simulations using synthetic data and images corroborate that the proposed algorithms achieve improved reconstruction quality relative to alternatives. (Comment: IEEE Transactions on Signal Processing, 2012, to appear.)
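    As a toy stand-in for the paper's coordinate-descent solver, the sketch below applies norm-one (soft-threshold) shrinkage inside a power iteration on the sample covariance to obtain a sparse eigenvector estimate; all names and the iteration itself are assumptions.

```python
import numpy as np

def sparse_eigvec(C, lam, iters=100):
    """Toy sparsity-aware estimate of a leading covariance eigenvector:
    power iteration on the sample covariance C with an L1 shrinkage
    step so that small loadings are zeroed out."""
    v = np.linalg.eigh(C)[1][:, -1]          # dense starting eigenvector
    for _ in range(iters):
        u = C @ v                            # power-iteration step
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold
        nrm = np.linalg.norm(u)
        if nrm == 0:
            break                            # lam too aggressive
        v = u / nrm
    return v
```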

    Robust Rotation Synchronization via Low-rank and Sparse Matrix Decomposition

    This paper deals with the rotation synchronization problem, which arises in global registration of 3D point-sets and in structure from motion. The problem is formulated in an unprecedented way as a "low-rank and sparse" matrix decomposition that handles both outliers and missing data. A minimization strategy, dubbed R-GoDec, is also proposed and evaluated experimentally against state-of-the-art algorithms on simulated and real data. The results show that R-GoDec is the fastest among the robust algorithms. (Comment: The material contained in this paper is part of a manuscript submitted to CVI)
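    A simplified GoDec-style alternation conveys the "low-rank and sparse" decomposition idea (R-GoDec itself additionally handles missing data and operates on a matrix of relative rotations); the sketch below alternates a rank-r truncated SVD with entrywise soft-thresholding and is illustrative only.

```python
import numpy as np

def godec_soft(D, r, lam, iters=50):
    """GoDec-flavoured split D ~ L + S: L is the best rank-r fit of D - S,
    S collects sparse outliers via entrywise soft-thresholding."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]                    # rank-r projection
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse residual
    return L, S
```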