
    Robust Tensor Analysis with Non-Greedy L1-Norm Maximization

    The L1-norm based tensor analysis (TPCA-L1) was recently proposed for dimensionality reduction and feature extraction. However, a greedy strategy was used to solve the L1-norm maximization problem, which makes the method prone to getting stuck in local solutions. In this paper, we propose a robust TPCA with non-greedy L1-norm maximization (TPCA-L1 non-greedy), in which all projection directions are optimized simultaneously. Experiments on several face databases demonstrate the effectiveness of the proposed method.
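    To make the non-greedy idea concrete, here is a minimal NumPy sketch for the vector (PCA-L1) case, where all k projection directions are updated simultaneously: a sign-fixing step turns the L1 objective into a linear one, and an SVD-based polar step re-projects onto the orthonormality constraint. The function name and iteration details are our own illustration under these assumptions, not the paper's tensor algorithm, which applies an analogous update mode-wise.

```python
import numpy as np

def pca_l1_nongreedy(X, k, n_iter=100, seed=0):
    """Sketch of non-greedy L1-norm maximization for the vector case:
    find orthonormal W (d, k) maximizing sum_i ||W^T x_i||_1, updating
    all k directions at once. X is a (d, n) matrix of centered samples.
    Hypothetical illustration, not the paper's TPCA-L1 algorithm."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for _ in range(n_iter):
        S = np.sign(W.T @ X)       # fix the signs of all projections, shape (k, n)
        S[S == 0] = 1.0            # break ties away from zero
        M = X @ S.T                # (d, k); with signs fixed, objective is tr(W^T M)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W_new = U @ Vt             # polar factor maximizes tr(W^T M) s.t. W^T W = I
        if np.allclose(W_new, W, atol=1e-10):
            break
        W = W_new
    return W
```

    Each iteration can only increase the L1 objective, which is why the simultaneous update avoids the direction-by-direction greedy trap described in the abstract.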

    Graph Scaling Cut with L1-Norm for Classification of Hyperspectral Images

    In this paper, we propose an L1-norm graph-based dimensionality reduction method for hyperspectral images, called L1-Scaling Cut (L1-SC). The underlying idea of the method is to generate the optimal projection matrix while retaining the original distribution of the data. Although the L2-norm is generally preferred for computation, it is sensitive to noise and outliers, whereas the L1-norm is robust to them. We therefore obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion under the L1-norm, and we describe an iterative algorithm to solve the resulting optimization problem. Experimental results on HSI classification confirm the effectiveness of the proposed L1-SC method on both noisy and noiseless data. (Comment: European Signal Processing Conference 201)
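    As a loose illustration of maximizing an L1 dispersion ratio, the sketch below finds a single projection direction with a Dinkelbach-style iteration that alternately fixes the current ratio and the signs of the projected pairwise differences. The function l1_dispersion_direction and every detail of the update are our own assumptions for illustration, not the published L1-SC algorithm.

```python
import numpy as np
from itertools import combinations

def l1_dispersion_direction(X, y, n_iter=50, seed=0):
    """Toy sketch: one direction w maximizing the ratio of between-class
    to within-class L1 dispersion over pairwise differences, via a
    Dinkelbach-style sign-fixing iteration. All names and update details
    are assumptions, not the published L1-SC algorithm.
    X: (n, d) samples, y: (n,) class labels."""
    rng = np.random.default_rng(seed)
    within, between = [], []
    for i, j in combinations(range(len(X)), 2):
        (within if y[i] == y[j] else between).append(X[i] - X[j])
    B, Wd = np.asarray(between), np.asarray(within)   # pairwise difference vectors
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        lam = np.abs(B @ w).sum() / np.abs(Wd @ w).sum()  # current dispersion ratio
        # linearize: fix signs, then maximize w^T g subject to ||w|| = 1
        g = np.sign(B @ w) @ B - lam * (np.sign(Wd @ w) @ Wd)
        w_new = g / np.linalg.norm(g)
        if np.allclose(w_new, w, atol=1e-8):
            break
        w = w_new
    return w
```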

    Iteratively Reweighted Least Squares Algorithms for L1-Norm Principal Component Analysis

    Principal component analysis (PCA) is often used to reduce the dimension of data by selecting a few orthonormal vectors that explain most of the variance structure of the data. L1-PCA uses the L1 norm to measure error, whereas conventional PCA uses the L2 norm. For the L1-PCA problem of minimizing the fitting error of the reconstructed data, we propose an exact reweighted algorithm and an approximate algorithm, both based on iteratively reweighted least squares. We provide convergence analyses and compare performance against benchmark algorithms from the literature. Computational experiments show that the proposed algorithms consistently perform best.
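    A minimal sketch of the reweighting idea for the simplest (rank-1) case follows: each iteration replaces the L1 fitting error with a weighted least-squares problem whose weights are the reciprocals of the current residual magnitudes, and the weighted updates for u and v then have elementwise closed forms. The function and its details are illustrative assumptions, not the paper's exact or approximate algorithms.

```python
import numpy as np

def l1_rank1_irls(X, n_iter=100, eps=1e-6, seed=0):
    """Illustrative IRLS sketch for the rank-1 L1 fit
    min_{u, v} sum_ij |X_ij - u_i v_j|: reweight by 1/|residual|,
    then take the closed-form weighted least-squares updates.
    A hypothetical toy, not the paper's exact/approximate algorithms."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    u, v = rng.standard_normal(m), rng.standard_normal(n)
    for _ in range(n_iter):
        R = X - np.outer(u, v)                 # current residuals
        Wt = 1.0 / np.maximum(np.abs(R), eps)  # IRLS weights: w * r^2 recovers |r|
        u = (Wt * X) @ v / (Wt @ v**2)         # weighted LS update for u
        v = (Wt * X).T @ u / (Wt.T @ u**2)     # weighted LS update for v
    return u, v
```

    The eps floor on the residual magnitudes is the standard IRLS guard against division by zero as residuals shrink toward the L1 solution.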

    Robust L1-norm Singular-Value Decomposition and Estimation

    Singular-Value Decomposition (SVD) is a ubiquitous data analysis method in engineering, science, and statistics. Singular-value estimation, in particular, is of critical importance in an array of engineering applications, such as channel estimation in communication systems, EMG signal analysis, and image compression, to name just a few. Conventional SVD of a data matrix coincides with standard Principal-Component Analysis (PCA). The L2-norm (sum of squared values) formulation of PCA emphasizes peripheral data points and thus makes PCA sensitive to outliers; naturally, SVD inherits this outlier sensitivity. In this work, we present a novel robust method for SVD based on an L1-norm (sum of absolute values) formulation, namely L1-norm compact Singular-Value Decomposition (L1-cSVD). We then propose a closed-form algorithm to solve this problem and find the robust singular values with cost $\mathcal{O}(N^3K^2)$. The proposed method demonstrates sturdy resistance to outliers, especially in singular-value estimation, and can facilitate more reliable data analysis and processing in a wide range of engineering applications.
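    One generic way to see how a robust subspace can yield robust singular values is to project the data onto an outlier-resistant orthonormal basis (for example, one computed by the L1-PCA sketch earlier in this listing) and read singular values off the projected matrix. This project-then-SVD heuristic is our own assumption for illustration and is not the closed-form L1-cSVD algorithm the abstract describes.

```python
import numpy as np

def robust_singular_values(X, Q):
    """Hedged sketch: given data X (d, n) and an outlier-resistant
    orthonormal basis Q (d, k), for example from an L1-PCA routine,
    estimate the top-k singular values from the projected matrix Q^T X.
    A generic project-then-SVD heuristic, not the paper's closed-form
    L1-cSVD algorithm."""
    return np.linalg.svd(Q.T @ X, compute_uv=False)
```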