69 research outputs found

    Human Hand Recognition Using IPCA-ICA Algorithm

    Clustering by non-negative matrix factorization with independent principal component initialization

    Nonnegative matrix factorization (NMF) is a dimensionality reduction and clustering method that has been applied to many areas, such as bioinformatics and face image classification. Building on traditional NMF, researchers have recently put forward several new algorithms that improve its performance through better initialization. In this paper, we explore the clustering performance of the NMF algorithm, with emphasis on the initialization problem. We propose an initialization method for NMF based on independent principal component analysis (IPCA). Experiments were carried out on four real datasets, and the results show that IPCA-based initialization of NMF yields better clustering of the datasets than both random and PCA-based initializations.
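
    As a rough sketch of this idea, one can run ICA inside a PCA subspace and use the rectified result to seed NMF. The construction below is a plausible reading of an IPCA-based initialization using scikit-learn, not necessarily the paper's exact procedure; the epsilon floor and toy data are illustrative assumptions.

```python
# Hedged sketch: IPCA-style initialization for NMF (illustrative, not
# the paper's exact construction). X is a nonnegative samples x features
# matrix; k is the factorization rank.
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF

def ipca_init(X, k, random_state=0, eps=1e-6):
    """Seed W, H from independent principal components.

    PCA reduces X to k components, FastICA rotates that subspace toward
    statistical independence, and absolute values (plus a small floor)
    enforce the nonnegativity NMF requires.
    """
    pca = PCA(n_components=k, random_state=random_state)
    scores = pca.fit_transform(X)                       # (n_samples, k)
    ica = FastICA(n_components=k, random_state=random_state)
    s = ica.fit_transform(scores)                       # independent sources
    W0 = np.abs(s) + eps                                # nonnegative weights
    H0 = np.abs(ica.mixing_.T @ pca.components_) + eps  # nonnegative basis
    return W0, H0

X = np.abs(np.random.RandomState(0).randn(100, 50))     # toy nonnegative data
W0, H0 = ipca_init(X, k=5)
model = NMF(n_components=5, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)                  # W.argmax(1) clusters rows
```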

    2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis LDA

    We present a new approach to face recognition. The method extracts 2D face image features using a subset of non-correlated, orthogonal Gabor filters instead of the whole Gabor filter bank, then compresses the resulting feature vector using linear discriminant analysis (LDA). The face image is first enhanced with a multi-stage image processing technique to normalize it and compensate for illumination variation. Experimental results show that, compared with the complete Gabor filter bank, the proposed system achieves both effective dimensionality reduction and good recognition performance. The system has been tested on the CASIA, ORL, and cropped YaleB 2D face image databases, achieving an average recognition rate of 98.9%.
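
    The shape of such a pipeline can be sketched as follows: convolve each face with a reduced Gabor bank, pool the magnitude responses, and compress with LDA. The filter parameters, the size of the subset, and the toy data below are illustrative assumptions; the paper's actual filter selection rule is not reproduced.

```python
# Hedged sketch: a reduced Gabor bank followed by LDA compression.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_kernel(size, sigma, theta, lambd):
    """Real part of a Gabor filter at orientation theta, wavelength lambd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

# A reduced bank: 4 orientations x 2 wavelengths instead of a full 40-filter bank.
bank = [gabor_kernel(21, sigma=4.0, theta=t, lambd=l)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for l in (6.0, 12.0)]

def gabor_features(img):
    """Concatenate downsampled magnitude responses over the selected filters."""
    resp = [np.abs(fftconvolve(img, k, mode="same"))[::8, ::8] for k in bank]
    return np.concatenate([r.ravel() for r in resp])

rng = np.random.RandomState(0)
X = np.stack([gabor_features(rng.rand(64, 64)) for _ in range(40)])  # toy faces
y = np.repeat(np.arange(8), 5)                     # 8 subjects x 5 images
lda = LinearDiscriminantAnalysis(n_components=7)   # at most n_classes - 1
Z = lda.fit_transform(X, y)                        # compact discriminative codes
```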

    Image encoding by independent principal components

    The encoding of images by semantic entities is still an unresolved task. This paper proposes encoding images by only a few important components, or image primitives. Classically, this can be done by principal component analysis (PCA). Recently, independent component analysis (ICA) has attracted strong interest in the signal processing and neural network communities. Using its components as pattern primitives, we aim for source patterns with the highest occurrence probability or highest information content. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the independent principal components (IPC), in contrast to the principal independent components (PIC), implement the classical demand of Shannon's rate-distortion theory.
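
    A minimal sketch of the combination described here, under illustrative patch size and component counts: fix the high-variance PCA subspace first, then rotate it with ICA so the retained components are also statistically independent, and measure distortion after decoding.

```python
# Hedged sketch: independent principal components (PCA subspace + ICA rotation).
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
patches = rng.rand(2000, 64)                  # stand-in for 8x8 image patches

pca = PCA(n_components=16).fit(patches)       # PCA: keep high-variance subspace
scores = pca.transform(patches)
ica = FastICA(random_state=0).fit(scores)     # ICA rotation inside that subspace

codes = ica.transform(scores)                 # the independent principal components
recon = pca.inverse_transform(ica.inverse_transform(codes))
mse = float(np.mean((patches - recon) ** 2))  # distortion at this code length
```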

    Recognition capacity of biometric-based systems

    The performance of biometrics-based recognition systems depends on various factors: database quality, image preprocessing, encoding techniques, etc. Given a biometric database and a selected encoding method, the capability of a recognition system is limited by the relationship between the number of classes the system can encode and the length of the encoded data describing a template at a specific level of distortion. In this work, we evaluate the constrained recognition capacity of biometric systems under two global encoding techniques: principal component analysis and independent component analysis. The developed methodology is applied to predict the capacity of the different recognition channels formed during acquisition of several iris and face databases. The proposed approach relies on data modeling and draws on classical detection and information theories. The major contribution is a guideline for evaluating the capabilities of large-scale biometric recognition systems in practice. Recognition capacity can also serve as a global quality measure of biometric databases.
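
    One classical back-of-the-envelope in this spirit, though not necessarily the methodology used here, treats each encoded template dimension as a Gaussian channel whose signal is the between-class spread and whose noise is the within-class spread, and sums the per-dimension Shannon capacities:

```python
# Hedged sketch: a Gaussian-channel capacity estimate for encoded templates.
import numpy as np

def capacity_bits(features, labels):
    """Estimate encodable bits from between-/within-class variances.

    features: (n_samples, d) PCA- or ICA-encoded templates.
    labels:   (n_samples,) subject identifiers.
    """
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    within = np.mean([features[labels == c].var(axis=0) for c in classes], axis=0)
    between = means.var(axis=0)                      # spread of class means
    snr = between / np.maximum(within, 1e-12)
    return 0.5 * np.sum(np.log2(1.0 + snr))          # C = 1/2 log2(1 + SNR) per dim

# 2 ** capacity_bits(...) then upper-bounds the number of distinguishable classes.
```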

    Nonnegative matrix analysis for data clustering and compression

    Nonnegative matrix factorization (NMF) has become an increasingly popular data processing tool in recent years, widely used by various communities including computer vision, text mining, and bioinformatics. It approximates each sample in a data collection by a linear combination of a set of nonnegative basis vectors weighted by nonnegative weights. This often enables meaningful interpretation of the data, motivates useful insights, and facilitates tasks such as data compression, clustering, and classification. These properties give NMF various active roles in data analysis, e.g., as a dimensionality reduction tool [11, 75], clustering tool [94, 82, 13, 39], feature engine [40], and source separation tool [38].

    Several methods based on NMF are proposed in this thesis. A modification of k-means clustering is chosen as one of the initialisation methods for NMF; experimental results demonstrate the merit of this method through improved compression performance. Independent principal component analysis (IPCA), which combines the advantages of both principal component analysis (PCA) and independent component analysis (ICA), is chosen as the main initialisation method for NMF, with improved clustering accuracy. We also propose a new evolutionary optimization strategy for NMF driven by three update schemes in the solution space, namely the NMF rule (original movement), the firefly rule (beta movement), and the survival-of-the-fittest rule (best movement). This update strategy addresses both the clustering and compression problems through different system objective functions built on clustering and compression quality measures. A hybrid initialisation approach incorporates state-of-the-art NMF initialisation methods as seed knowledge to increase the rate of convergence; there is no limitation on the number or type of initialisation methods used in the proposed optimisation approach. Numerous computer experiments on benchmark datasets verify the theoretical results and compare the techniques in terms of clustering/compression accuracy, demonstrating the merit of these methods through improved clustering/compression performance.

    In an application to an EEG dataset, we employed several standard algorithms to cluster preprocessed EEG data and explored ensemble clustering to obtain tighter clusters. Based on the results obtained, we can make the following statements: firstly, normalization is necessary for this EEG brain dataset to obtain reasonable clustering; secondly, k-means, k-medoids, and HC-Ward provide relatively better clustering results; thirdly, ensemble clustering enables us to tune the tightness of the clusters so that the research can be focused.
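
    As a sketch of one ingredient, the snippet below seeds NMF with k-means structure (centroids as the basis, hard memberships as the weights). The thesis's specific modification of k-means and its firefly-style evolutionary update rules are not reproduced here; the epsilon floor and toy data are illustrative assumptions.

```python
# Hedged sketch: k-means-seeded NMF (one of several initialisations discussed).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def kmeans_init(X, k, random_state=0, eps=1e-6):
    """Seed NMF from a k-means partition of the rows of X."""
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
    H0 = np.maximum(km.cluster_centers_, eps)        # basis = cluster centroids
    W0 = np.full((X.shape[0], k), eps)
    W0[np.arange(X.shape[0]), km.labels_] = 1.0      # hard memberships as weights
    return W0, H0

X = np.abs(np.random.RandomState(1).randn(200, 30))  # toy nonnegative data
W0, H0 = kmeans_init(X, k=4)
model = NMF(n_components=4, init="custom", max_iter=400)
W = model.fit_transform(X, W=W0, H=H0)
labels = W.argmax(axis=1)                            # cluster labels from NMF
```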