
    Integrating joint feature selection into subspace learning: A formulation of 2DPCA for outliers robust feature selection

    © 2019 Elsevier Ltd. Since principal component analysis (PCA) and its variants are sensitive to outliers, which limits their performance and applicability in the real world, several variants have been proposed to improve robustness. However, most existing methods remain sensitive to outliers and are unable to select useful features. To overcome PCA's sensitivity to outliers, this paper introduces two-dimensional outlier-robust principal component analysis (ORPCA), which imposes joint constraints on the objective function. ORPCA relaxes the orthogonality constraints and penalizes the regression coefficients; it thereby selects important features while avoiding features that already appear in other principal components. The squared Frobenius norm is well known to be sensitive to outliers, so we devise an alternative derivation of the objective function. Experimental results on four publicly available benchmark datasets show the effectiveness of joint feature selection and demonstrate better performance compared with state-of-the-art dimensionality-reduction methods
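    The abstract builds on classical 2DPCA, which operates directly on image matrices rather than flattened vectors. Below is a minimal NumPy sketch of that 2DPCA core only; the outlier-robust joint constraints that define ORPCA itself are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def twodpca(images, k):
    """Classical 2DPCA: project each image onto the top-k eigenvectors
    of the image scatter matrix (ORPCA adds robust joint constraints
    on top of this baseline)."""
    X = np.asarray(images, dtype=float)      # shape (n, h, w)
    mean = X.mean(axis=0)                    # element-wise mean image
    centered = X - mean
    # Image scatter matrix: (1/n) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('nha,nhb->ab', centered, centered) / len(X)
    _, vecs = np.linalg.eigh(G)              # eigenvalues in ascending order
    W = vecs[:, -k:]                         # top-k projection directions
    return centered @ W, W, mean             # features of shape (n, h, k)
```

    Reconstruction is `features @ W.T + mean`, which makes it easy to check how much structure the k directions retain.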

    Robust 2D Joint Sparse Principal Component Analysis with F-Norm Minimization for Sparse Modelling: 2D-RJSPCA

    © 2018 IEEE. Principal component analysis (PCA) is a widely used method for dimensionality reduction, and many variants have been proposed to improve its robustness. However, these methods suffer from the fact that PCA is a linear combination of all input features, which makes it difficult to interpret complex nonlinear data; moreover, they are sensitive to outliers or cannot extract features jointly (i.e., collectively), so PCA may still require measuring every input feature. 2DPCA based on the L1-norm has recently been used for robust dimensionality reduction in the image domain but is still sensitive to noise. In this paper, we introduce a robust formulation of 2DPCA that centers the data using an optimized mean for two-dimensional joint sparse learning, effectively combining the robustness of 2DPCA with sparsity-inducing lasso regularization. The optimal mean further improves the robustness of joint sparse PCA. The distance in the spatial dimension is measured by the F-norm, while the sum over data points uses the L1-norm. 2DR-JSPCA imposes joint sparsity constraints on its objective function, and an additional penalty term helps it deal with outliers efficiently. Both theoretical and empirical results on six publicly available benchmark datasets show that optimal-mean 2DR-JSPCA outperforms non-sparse (2DPCA and 2DPCA-L1) and sparse (SPCA, JSPCA) methods for dimensionality reduction
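    Joint sparsity of this kind is typically obtained with an ℓ2,1 penalty, whose row-wise structure zeroes out entire features at once. The sketch below shows the standard iteratively-reweighted treatment of such a penalty on a deliberately simplified objective, max tr(W^T S W) - lam*||W||2,1; it is a hypothetical reduction for illustration, not the paper's actual 2DR-JSPCA objective.

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm: sum of the l2 norms of the rows of W. Penalizing it
    drives whole rows to zero, i.e. joint feature selection."""
    return np.linalg.norm(W, axis=1).sum()

def reweighted_step(S, W, lam, eps=1e-8):
    """One iteratively-reweighted update for max tr(W^T S W) - lam*||W||2,1
    with orthonormal W. The l2,1 term is replaced by the usual diagonal
    surrogate D with D_ii = 1 / (2*||w_i|| + eps), after which the relaxed
    problem is an ordinary symmetric eigenproblem."""
    d = 1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps)
    _, vecs = np.linalg.eigh(S - lam * np.diag(d))
    return vecs[:, -W.shape[1]:]             # keep the top-k eigenvectors
```

    Iterating `reweighted_step` until the subspace stabilizes is the usual convergence recipe for ℓ2,1-penalized objectives.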

    Generalized Two-Dimensional Quaternion Principal Component Analysis with Weighting for Color Image Recognition

    A generalized two-dimensional quaternion principal component analysis (G2DQPCA) approach with weighting is presented for color image analysis. As a general framework for 2DQPCA, G2DQPCA can flexibly adapt to different constraints or requirements by imposing L_p norms on both the constraint function and the objective function. The gradient operator of quaternion vector functions is redefined via the structure-preserving gradient operator of real vector functions. Under the minorization-maximization (MM) framework, an iterative algorithm is developed to obtain the optimal closed-form solution of G2DQPCA. The projection vectors generated by the deflation scheme are required to be mutually orthogonal. A weighting matrix is defined to magnify the effect of the main features. With the weighted projection bases, face recognition accuracy stays unchanged or varies only within a tight range as the number of features increases. Numerical results on real face databases validate that the newly proposed method outperforms state-of-the-art algorithms
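    The "structure-preserving" device mentioned here is usually the real block representation of quaternion matrices, which turns quaternion linear algebra into ordinary real linear algebra. A minimal sketch follows; the sign convention is one of several in use, and the RGB-to-pure-quaternion encoding is the common convention in quaternion image analysis rather than something specified by this abstract.

```python
import numpy as np

def quaternion_to_real(Q0, Q1, Q2, Q3):
    """Real block representation of a quaternion matrix
    Q = Q0 + Q1*i + Q2*j + Q3*k. Sums and products of these 4x-larger
    real blocks mirror the corresponding quaternion operations."""
    return np.block([
        [Q0, -Q1, -Q2, -Q3],
        [Q1,  Q0, -Q3,  Q2],
        [Q2,  Q3,  Q0, -Q1],
        [Q3, -Q2,  Q1,  Q0],
    ])

# A color image is commonly modeled as a pure quaternion matrix R*i + G*j + B*k.
rgb = np.random.rand(8, 8, 3)
zero = np.zeros(rgb.shape[:2])
M = quaternion_to_real(zero, rgb[..., 0], rgb[..., 1], rgb[..., 2])
```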

    Evaluation of face recognition algorithms under noise

    One of the major applications of computer vision and image processing is face recognition, where a computerized algorithm automatically identifies a person’s face from a large image dataset or even from a live video. This thesis addresses facial recognition, a topic that has been widely studied due to its importance in many applications in both civilian and military domains. The application of face recognition systems has expanded from security purposes to social networking sites, managing fraud, and improving user experience. Numerous algorithms have been designed to perform face recognition with good accuracy. This problem is challenging due to the dynamic nature of the human face and the different poses that it can take. Regardless of the algorithm, facial recognition accuracy can be heavily affected by the presence of noise. This thesis presents a comparison of traditional and deep learning face recognition algorithms under the presence of noise. For this purpose, Gaussian and salt-and-pepper noises are applied to face images drawn from the ORL dataset. The image recognition is performed using each of the following eight algorithms: principal component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant analysis (LDA), independent component analysis (ICA), discrete cosine transform (DCT), support vector machine (SVM), convolutional neural network (CNN), and AlexNet. The ORL dataset was used in the experiments to calculate the evaluation accuracy for each of the investigated algorithms. Each algorithm is evaluated with two experiments; in the first experiment only one image per person is used for training, whereas in the second experiment, five images per person are used for training. The investigated traditional algorithms are implemented with MATLAB and the deep learning approaches are implemented with Python. The results show that the best performance was obtained using the DCT algorithm with 92% dominant eigenvalues and 95.25% accuracy, whereas for deep learning, the best performance was obtained using a CNN with an accuracy of 97.95%, which makes it the best choice under noisy conditions
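    The two noise models named in the abstract are easy to reproduce. Below is a minimal NumPy sketch, assuming grayscale images scaled to [0, 1]; the parameter values are illustrative, since the thesis's exact noise levels are not given here.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, rng=None):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.05, rng=None):
    """Set a fraction `amount` of pixels to 0 (pepper) or 1 (salt)."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0           # pepper
    noisy[mask > 1 - amount / 2] = 1.0       # salt
    return noisy
```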

    Approximation of Images via Generalized Higher Order Singular Value Decomposition over Finite-dimensional Commutative Semisimple Algebra

    Low-rank approximation of images via singular value decomposition is well received in the era of big data. However, the singular value decomposition (SVD) applies only to order-two data, i.e., matrices. To tackle higher order data such as multispectral images and videos with the SVD, one must flatten the higher order input into a matrix or break it into a series of order-two slices. The higher order singular value decomposition (HOSVD) extends the SVD and can approximate higher order data using sums of a few rank-one components. We consider the problem of generalizing HOSVD over a finite-dimensional commutative algebra. This algebra, referred to as a t-algebra, generalizes the field of complex numbers. The elements of the algebra, called t-scalars, are fixed-size arrays of complex numbers. One can generalize matrices and tensors over t-scalars and then extend many canonical matrix and tensor algorithms, including HOSVD, to obtain higher-performance versions. The generalization of HOSVD is called THOSVD. Its performance in approximating multi-way data can be further improved by an alternating algorithm. THOSVD also unifies a wide range of principal component analysis algorithms. To exploit the potential of generalized algorithms using t-scalars for approximating images, we use a pixel neighborhood strategy to convert each pixel into a "deeper-order" t-scalar. Experiments on publicly available images show that the generalized algorithm over t-scalars, namely THOSVD, compares favorably with its canonical counterparts
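    For reference, the canonical (truncated) HOSVD that THOSVD generalizes can be written in a few lines of NumPy: take the left singular vectors of each mode-n unfolding as factor matrices, then contract them into the core tensor. This sketch covers only the classical algorithm, not the t-scalar generalization.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the unfoldings' left
    singular vectors, core tensor from mode products with their
    transposes. Returns (core, factors)."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for mode, u in enumerate(U):
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, U
```

    Re-applying the mode products with the untransposed factors rebuilds the rank-(r1, ..., rN) approximation of T.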

    Discriminant Analysis via Joint Euler Transform and ℓ2,1-Norm

    Linear discriminant analysis (LDA) has been widely used for face recognition. However, when identifying faces in the wild, outliers that deviate significantly from the rest of the data can arbitrarily skew the desired solution. This usually deteriorates LDA’s performance dramatically, preventing its mass deployment in real-world applications. To handle this problem, we propose an effective distance-metric-learning-based LDA method, namely Euler LDA-L21 (e-LDA-L21). e-LDA-L21 is carried out in two stages: each image is mapped into a complex space by the Euler transform in the first stage, and the ℓ2,1-norm is adopted as the distance metric in the second stage. This not only reveals nonlinear features but also exploits the geometric structure of the data. To solve e-LDA-L21 efficiently, we propose an iterative algorithm that yields a closed-form solution at each iteration, with convergence guaranteed. Finally, we extend e-LDA-L21 to Euler 2DLDA-L21 (e-2DLDA-L21), which further exploits the spatial information embedded in image pixels. Experimental results on several face databases demonstrate its superiority over state-of-the-art algorithms
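    The Euler transform referred to here is, in the Euler PCA/LDA literature, the pixel-wise map z = exp(i·α·π·x)/√2 for intensities x in [0, 1], after which the linear method runs in the complex domain. A minimal sketch, with α treated as a tunable assumption:

```python
import numpy as np

def euler_transform(X, alpha=1.9):
    """Map pixel intensities in [0, 1] onto the complex unit circle,
    z = exp(1j * alpha * pi * x) / sqrt(2). Outlying pixels can only move
    a bounded distance on the circle, which blunts their influence; the
    value alpha = 1.9 is one reported in the Euler-PCA literature."""
    return np.exp(1j * alpha * np.pi * np.asarray(X, dtype=float)) / np.sqrt(2)
```

    LDA (or 2DLDA) is then applied to the complex features, and a result can be mapped back to intensities via the phase, x = angle(z·√2)/(α·π).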