    Multi-Criteria in Discriminant Analysis to Find the Dominant Features

    A crucial problem in biometrics is the enormous dimensionality of the data, which directly affects computational cost. Feature extraction therefore plays a significant role in biometric computation. In this research, a novel feature extraction approach is proposed for facial image recognition. Four criteria of Discriminant Analysis are modeled to find the dominant features. Each criterion is an objective function, which is derived to obtain the optimum values; these optima can be found by solving the generalized eigenvalue problem associated with the largest eigenvalues. The modeling results were used to recognize facial images by projecting the original data onto the multi-criteria subspace. The training sets were first processed with the Eigenface projection to avoid singularity problems. Similarity was measured with four different methods: Euclidean, Manhattan, Chebyshev, and Canberra distances. Feature extraction and analysis using the multi-criteria approach showed better results than other appearance-based methods, i.e. Eigenface (PCA), Fisherface (Linear Discriminant Analysis, LDA), Laplacianfaces (Locality Preserving Projection, LPP), and Orthogonal Laplacianfaces (Orthogonal Locality Preserving Projection, O-LPP).
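
    The abstract itself contains no code. A minimal sketch of the pipeline it describes, assuming between-class and within-class scatter matrices S_b and S_w as inputs to one of the discriminant criteria (all function and variable names here are illustrative, not the authors'), could look like this in Python with NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import euclidean, cityblock, chebyshev, canberra

def pca_projection(X, n_components):
    """Eigenface (PCA) projection used to avoid the singularity problem.
    A plain covariance eigendecomposition; a real Eigenface implementation
    would use the small-sample trick for high-resolution images."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

def dominant_features(S_b, S_w, n_features):
    """Solve the generalized eigenvalue problem S_b w = lambda S_w w and
    keep the eigenvectors associated with the largest eigenvalues."""
    vals, vecs = eigh(S_b, S_w)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_features]]

# The four similarity measures named in the abstract.
DISTANCES = {
    "euclidean": euclidean,
    "manhattan": cityblock,
    "chebyshev": chebyshev,
    "canberra": canberra,
}

def nearest_match(query, gallery, metric="euclidean"):
    """Return the index of the gallery vector closest to the query."""
    d = DISTANCES[metric]
    return min(range(len(gallery)), key=lambda i: d(query, gallery[i]))
```

    Here `scipy.linalg.eigh(S_b, S_w)` solves the generalized eigenvalue problem directly; recognition then amounts to projecting a probe image and picking the nearest gallery vector under one of the four distances.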

    A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis

    Due to the increasing demand for high-dimensional data analysis in applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction has become a viable process for extracting essential information from data, so that high-dimensional data can be represented in a much more condensed, lower-dimensional form that both improves classification accuracy and reduces computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. A stand-alone method uses a single criterion, from either a supervised or an unsupervised perspective; a hybrid method integrates both. Compared with stand-alone dimensionality reduction methods, the hybrid approach is promising because it simultaneously exploits the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation. However, several issues challenge the efficiency of the hybrid approach, including (1) the difficulty of finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of performance on noisy data, and (3) nonlinear data representation capability. This dissertation presents a new hybrid dimensionality reduction method that seeks a projection by optimizing both structural risk (the supervised criterion) from the Support Vector Machine (SVM) and data independence (the unsupervised criterion) from Independent Component Analysis (ICA). The projection from the SVM contributes directly to classification performance from the supervised perspective, whereas the projection constructed by ICA to maximize independence among features improves classification accuracy indirectly, through better intrinsic data representation, from the unsupervised perspective. For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from SVM and ICA, while a redundancy removal process eliminates part of the projection vectors from the SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid method is then extended to an uncorrelatedness-based algorithm with nonlinear data representation capability: SVM and ICA are integrated into a single framework through an uncorrelated subspace based on a kernel implementation. Experimental results show that the proposed approaches achieve higher classification performance, with better robustness at relatively lower dimensions, than conventional methods on high-dimensional datasets.
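
    As a rough illustration of the linear hybrid idea only, not the dissertation's actual algorithm, the sketch below combines hyperplane normals from a linear SVM with ICA unmixing directions and orthogonalizes the latter against the former. Scikit-learn's LinearSVC and FastICA are assumed as stand-ins for the SVM and ICA steps, and the redundancy-removal step and kernel extension are omitted; all names are illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.decomposition import FastICA

def hybrid_projection(X, y, n_ica_components):
    """Sketch of a linear SVM+ICA hybrid projection.

    SVM hyperplane normals supply the supervised directions; ICA unmixing
    vectors supply the unsupervised ones. The ICA directions are made
    orthogonal to the SVM subspace, loosely mirroring the orthogonality
    constraint described in the abstract."""
    # Supervised directions: one hyperplane normal per class (one-vs-rest).
    svm = LinearSVC().fit(X, y)
    W_svm = svm.coef_                       # shape: (n_classes, n_features)

    # Unsupervised directions: rows of the ICA unmixing matrix.
    ica = FastICA(n_components=n_ica_components, random_state=0).fit(X)
    W_ica = ica.components_                 # shape: (n_ica, n_features)

    # Orthogonalize ICA directions against the SVM subspace.
    Q, _ = np.linalg.qr(W_svm.T)            # orthonormal basis of SVM subspace
    W_ica_orth = W_ica - (W_ica @ Q) @ Q.T  # strip SVM-subspace components

    # Stack both sets of directions to form the hybrid projection matrix.
    W = np.vstack([W_svm, W_ica_orth])
    return X @ W.T, W
```

    The QR-based orthogonalization keeps the supervised and unsupervised directions from duplicating each other, which is one simple way to interrelate the two projections; the dissertation's uncorrelated-subspace kernel formulation goes further than this linear sketch.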