
    Robust Recognition using L1-Principal Component Analysis

    The wide availability of visual data via social media and the internet, coupled with the demands of the security community, has led to increased interest in visual recognition. Recent research has focused on improving the accuracy of recognition techniques in environments where variability is well controlled. However, applications such as identity verification often operate in unconstrained environments, so there is a need for more robust recognition techniques that can operate on data with considerable noise. Many statistical recognition techniques rely on principal component analysis (PCA), but PCA is sensitive to the outliers caused by occlusions and noise often encountered in unconstrained settings. In this thesis we address this problem by using L1-PCA to minimize the effect of outliers in the data. L1-PCA is applied to several statistical recognition techniques, including eigenfaces and Grassmannian learning. Several popular face databases are used to show that L1-Grassmann manifolds not only outperform traditional L2-Grassmann manifolds for face and facial expression recognition, but are also more robust to noise and occlusions. Additionally, a high-performance GPU implementation of L1-PCA is developed using CUDA that is several times faster than CPU implementations.
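
    As a rough illustration of the idea, the sketch below computes L1 principal components with a greedy sign-fixing iteration (in the spirit of Kwak's L1-norm maximization) and uses them as a robust, eigenface-style basis for nearest-neighbour matching. It is not the thesis's optimal or GPU solver; the toy data, function names, and the deflation step are illustrative assumptions.

        import numpy as np

        def l1_pca_component(X, n_iter=100, seed=0):
            # One L1 principal direction by greedy sign fixing:
            # locally maximizes sum_i |w^T x_i| over unit vectors w (Kwak-style iteration).
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(n_iter):
                signs = np.sign(w @ X)          # polarity of each sample's projection
                signs[signs == 0] = 1.0
                w_new = X @ signs               # sign-weighted sum of samples
                w_new /= np.linalg.norm(w_new)
                if np.allclose(w_new, w):
                    break
                w = w_new
            return w

        def l1_pca(X, n_components):
            # Several components by greedy deflation (not jointly optimal).
            Xr = X.copy()
            comps = []
            for _ in range(n_components):
                w = l1_pca_component(Xr)
                comps.append(w)
                Xr = Xr - np.outer(w, w @ Xr)   # strip the captured direction
            return np.stack(comps)              # shape (n_components, D)

        # Toy eigenface-style use: project a gallery and a noisy probe, match by nearest neighbour.
        rng = np.random.default_rng(1)
        gallery = rng.standard_normal((64, 40))        # 40 face vectors of dimension 64 (stand-in data)
        mean = gallery.mean(axis=1, keepdims=True)
        W = l1_pca(gallery - mean, n_components=5)     # robust basis in place of L2 eigenfaces
        gallery_feats = W @ (gallery - mean)
        probe = gallery[:, [3]] + 0.5 * rng.standard_normal((64, 1))
        probe_feat = W @ (probe - mean)
        print("nearest gallery index:", np.argmin(np.linalg.norm(gallery_feats - probe_feat, axis=0)))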

    Some options for L1-subspace signal processing

    Summary: We describe ways to define and calculate L1-norm signal subspaces that are less sensitive to outlying data than L2-calculated subspaces. We focus on the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D and conclude that the general problem is formally NP-hard in asymptotically large N, D. We prove, however, that the case of engineering interest of fixed dimension D and asymptotically large sample support N is not NP-hard, and we present an optimal algorithm of complexity O(N^D). We generalize to multiple L1-max-projection components and present an explicit optimal L1 subspace calculation algorithm in the form of matrix nuclear-norm evaluations. We conclude with illustrations of L1-subspace signal processing in the fields of data dimensionality reduction and direction-of-arrival estimation. Presented at: International Symposium on Wireless Communication Systems.
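
    For intuition, here is a brute-force sketch of the exact L1 max-projection component via the antipodal binary-vector equivalence underlying this line of work: maximize ||X b||_2 over b in {+-1}^N and set r = X b / ||X b||_2. Exhaustive enumeration is exponential in N and only stands in for the authors' fixed-dimension O(N^D) algorithm; the example data and names are illustrative. In the toy data, the L1 direction stays dominated by the clean axis, while the plain L2 (SVD) direction snaps to the single outlier.

        import itertools
        import numpy as np

        def l1_principal_component_exact(X):
            # Exact L1 max-projection component of a D x N matrix X, using
            #   max_{||r||=1} ||X^T r||_1 = max_{b in {+-1}^N} ||X b||_2,  with  r = X b* / ||X b*||_2.
            # Exhaustive search over b costs O(2^N), so this is usable only for small N.
            N = X.shape[1]
            best_val, best_b = -np.inf, None
            for tail in itertools.product((1.0, -1.0), repeat=N - 1):
                b = np.array((1.0,) + tail)          # fix b[0] = +1: r and -r are equivalent
                val = np.linalg.norm(X @ b)
                if val > best_val:
                    best_val, best_b = val, b
            r = X @ best_b
            return r / np.linalg.norm(r)

        # Nine clean samples along e1 plus one gross outlier along e2 (D = 3, N = 10).
        clean = np.outer(np.array([1.0, 0.0, 0.0]), np.linspace(-2.0, 2.0, 9))
        X = np.hstack([clean, np.array([[0.0], [6.0], [0.0]])])
        print("L1 direction:", np.round(l1_principal_component_exact(X), 3))
        print("L2 direction:", np.round(np.linalg.svd(X)[0][:, 0], 3))   # ordinary PCA direction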