4 research outputs found

    Supervised Kernel Locally Principle Component Analysis for Face Recognition

    In this paper, a novel feature-extraction algorithm, named supervised kernel locally principle component analysis (SKLPCA), is proposed. SKLPCA is a nonlinear, supervised subspace learning method that maps the data into a potentially much higher-dimensional feature space via the kernel trick and preserves the geometric structure of the data according to prior class-label information. SKLPCA can discover the nonlinear structure of face images and enhance local within-class relations. Experimental results on the ORL, Yale, CAS-PEAL and CMU PIE databases demonstrate that SKLPCA outperforms EigenFaces, LPCA and KPCA.
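The kernel mapping that SKLPCA builds on can be illustrated with plain (unsupervised) kernel PCA; the supervised, locality-preserving weighting the paper adds is not reproduced here, so this is only a minimal sketch of the underlying kernel trick, with the RBF kernel, `gamma`, and the synthetic data all chosen for illustration:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Plain kernel PCA with an RBF kernel (illustrative baseline).

    The kernel trick implicitly maps X into a high-dimensional feature
    space; a supervised variant such as SKLPCA additionally weights the
    kernel using class-label information (not shown here).
    """
    # Pairwise squared Euclidean distances, then the RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)

    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Top eigenvectors of the centered kernel give the nonlinear components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas  # projected training samples

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (20, 2)
```

Replacing the uniform centring with a class-label-aware weighting of `K` is, roughly, where a supervised locality-preserving method departs from this baseline.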

    In-situ crystal morphology identification using imaging analysis with application to the L-glutamic acid crystallization

    A synthetic image analysis strategy is proposed for in-situ crystal size measurement and shape identification for monitoring crystallization processes, based on a real-time imaging system. The proposed method consists of image processing, feature analysis, particle sieving, crystal size measurement, and crystal shape identification. Fundamental image features of crystals are selected for efficient classification. In particular, a novel shape feature, referred to as the inner distance descriptor, is introduced to quantitatively describe different crystal shapes; it is relatively independent of the crystal size and its geometric orientation in the captured image. Moreover, a pixel equivalent calibration method based on subpixel edge detection and circle fitting is proposed to measure crystal sizes from the captured images. In addition, a kernel function based method is given to deal with nonlinear correlations between multiple features of crystals, improving computational efficiency for real-time shape identification. A case study and experimental results from the cooling crystallization of L-glutamic acid demonstrate that the proposed image analysis method can be effectively used for in-situ crystal size measurement and shape identification with good accuracy.
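The circle-fitting half of the pixel-equivalent calibration step can be sketched with an algebraic (Kasa) least-squares fit; the calibration-target size and the synthetic edge points below are hypothetical stand-ins for the paper's actual subpixel edge data:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge points.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recovers
    the centre and radius. Sketch of the calibration idea: fit a circle
    of known physical size to detected edge pixels, then derive the
    millimetres-per-pixel scale from the fitted radius.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Synthetic "edge points" on a circle of radius 40 px centred at (100, 80)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 100 + 40 * np.cos(t)
y = 80 + 40 * np.sin(t)
cx, cy, r = fit_circle(x, y)

known_diameter_mm = 2.0                    # hypothetical target size
pixel_equiv = known_diameter_mm / (2 * r)  # mm per pixel
```

In practice the edge points would come from a subpixel edge detector rather than an exact parametric circle, and the fit averages out that localisation noise.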

    Discriminant analysis based feature extraction for pattern recognition

    Fisher's linear discriminant analysis (FLDA) has been widely used in pattern recognition applications. However, this method cannot be applied to pattern recognition problems if the within-class scatter matrix is singular, a condition that occurs when the number of samples is small relative to their dimension. This problem is commonly known as the small sample size (SSS) problem, and many of the FLDA variants proposed in the past to deal with it either suffer from an excessive computational load, owing to the high dimensionality of the patterns, or lose some useful discriminant information. This study is concerned with developing efficient techniques for discriminant analysis of patterns while at the same time overcoming the small sample size problem. With this objective in mind, the work of this research is divided into two parts. In part 1, a technique for linear discriminant analysis (LDA) is developed by solving the generalized singular value decomposition (GSVD) problem through eigen-decomposition. The resulting algorithm, referred to as the modified GSVD-LDA (MGSVD-LDA) algorithm, is thus devoid of the singularity problem that afflicts the scatter matrices of traditional LDA methods. A theorem enunciating certain properties of the discriminant subspace derived by the proposed GSVD-based algorithms is established. It is shown that if the samples of a dataset are linearly independent, then the samples belonging to different classes are linearly separable in the derived discriminant subspace; thus, the proposed MGSVD-LDA algorithm effectively captures the class structure of datasets with linearly independent samples. Inspired by this theorem, which essentially establishes the class separability of linearly independent samples in a specific discriminant subspace, in part 2 a new systematic framework for the pattern recognition of linearly independent samples is developed.
Within this framework, a discriminant model is shown to exist in which the samples of the individual classes of the dataset lie on parallel hyperplanes and project to single distinct points of a discriminant subspace of the underlying input space. Based on this model, a number of algorithms that are devoid of the SSS problem are developed to obtain this discriminant subspace for datasets with linearly independent samples. For the discriminant analysis of datasets whose samples are not linearly independent, some of the linear algorithms developed in this thesis are also kernelized. Extensive experiments are conducted throughout this investigation in order to demonstrate the validity and effectiveness of the ideas developed in this study. It is shown through simulation results that the linear and nonlinear algorithms for discriminant analysis developed in this thesis provide superior performance in terms of recognition accuracy and computational complexity.
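The SSS setting that motivates the thesis can be seen in a baseline Fisher LDA sketch. The pseudo-inverse fallback below is a common workaround for a singular within-class scatter, not the thesis's MGSVD-LDA algorithm, and the data are synthetic:

```python
import numpy as np

def fisher_lda(X, y, n_components=1):
    """Baseline Fisher LDA via scatter matrices (illustrative only).

    Uses the Moore-Penrose pseudo-inverse of the within-class scatter
    Sw, a common fallback when Sw is singular (the small sample size
    case); a GSVD-based method instead avoids the singularity outright.
    """
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Generalised eigenproblem Sb w = lambda Sw w, via pinv(Sw) @ Sb
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1][:n_components]
    return vecs[:, order].real  # discriminant directions

rng = np.random.default_rng(1)
# SSS case: 6 samples in 10 dimensions, so Sw is singular
X = np.vstack([rng.normal(0, 1, (3, 10)), rng.normal(3, 1, (3, 10))])
y = np.array([0, 0, 0, 1, 1, 1])
W = fisher_lda(X, y)
```

Inverting `Sw` directly would fail here (6 samples cannot span 10 dimensions); the pseudo-inverse restricts the problem to the range of `Sw`, which is exactly the kind of information loss the GSVD-based formulation is designed to avoid.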