Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition
This paper presents a novel quadratic projection based feature extraction
framework, where a set of quadratic matrices is learned to distinguish each
class from all other classes. We formulate quadratic matrix learning (QML) as a
standard semidefinite programming (SDP) problem. However, conventional
interior-point SDP solvers do not scale well to the problem of QML for
high-dimensional data. To address the scalability of QML, we develop an efficient
algorithm, termed DualQML, based on the Lagrange duality theory, to extract
nonlinear features. To evaluate the feasibility and effectiveness of the
proposed framework, we conduct extensive experiments on biometric recognition.
Experimental results on three representative biometric recognition tasks,
including face, palmprint, and ear recognition, demonstrate the superiority of
the DualQML-based feature extraction algorithm compared to current
state-of-the-art algorithms.
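The core idea of scoring a sample with one learned quadratic matrix per class can be illustrated in a few lines. This is a hypothetical sketch of the quadratic-form scoring step only, not the authors' DualQML solver; the matrices below are hand-picked illustrative values, not learned via SDP.

```python
import numpy as np

def quadratic_scores(x, matrices):
    """Score a sample x against one symmetric quadratic matrix per class."""
    return np.array([x @ M @ x for M in matrices])

def predict(x, matrices):
    """Assign x to the class whose quadratic form x^T M_c x is largest."""
    return int(np.argmax(quadratic_scores(x, matrices)))

# Illustrative matrices (assumed, not from the paper):
M0 = np.array([[2.0, 0.0], [0.0, 0.5]])   # responds strongly to the first axis
M1 = np.array([[0.5, 0.0], [0.0, 2.0]])   # responds strongly to the second axis

x = np.array([1.0, 0.1])  # lies along the first axis, so class 0 should win
```

In the paper's one-vs-rest formulation, each M_c would be learned to separate class c from all others; here the matrices merely illustrate how the quadratic form acts as a nonlinear feature.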
Dimension Reduction by Mutual Information Discriminant Analysis
In the past few decades, researchers have proposed many discriminant analysis
(DA) algorithms for the study of high-dimensional data in a variety of
problems. Most DA algorithms for feature extraction are based on
transformations that simultaneously maximize the between-class scatter and
minimize the within-class scatter. This paper presents a novel DA
algorithm for feature extraction using mutual information (MI). However, it is
not always easy to obtain an accurate estimation for high-dimensional MI. In
this paper, we propose an efficient method for feature extraction that is based
on one-dimensional MI estimations. We will refer to this algorithm as mutual
information discriminant analysis (MIDA). The performance of this proposed
method was evaluated using UCI databases. The results indicate that MIDA
provides robust performance over different data sets with different
characteristics and that MIDA always performs better than, or at least
comparable to, the best performing algorithms.
Comment: 13 pages, 3 tables, International Journal of Artificial Intelligence &
Application
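The building block MIDA relies on, a one-dimensional mutual information estimate between a single feature and the class label, can be sketched with a simple histogram estimator. This is an assumed illustration of the general technique; the bin count, data, and estimator details are illustrative choices, not taken from the paper.

```python
import numpy as np

def mutual_information_1d(feature, labels, bins=8):
    """Histogram-based estimate of I(feature; label) in nats for a 1-D feature."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    f_bins = np.digitize(feature, edges[1:-1])  # bin index for each sample
    mi = 0.0
    for b in np.unique(f_bins):
        p_b = np.mean(f_bins == b)
        for c in np.unique(labels):
            p_c = np.mean(labels == c)
            p_bc = np.mean((f_bins == b) & (labels == c))
            if p_bc > 0:
                mi += p_bc * np.log(p_bc / (p_b * p_c))
    return mi

# A feature that separates the classes carries more information than noise.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
informative = labels + 0.1 * rng.standard_normal(200)
noise = rng.standard_normal(200)
```

Because each estimate is one-dimensional, this avoids the difficulty of high-dimensional MI estimation that the abstract highlights; a DA method can then combine such per-feature estimates when searching for a discriminative projection.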
Sparse multinomial kernel discriminant analysis (sMKDA)
Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been extended variously to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, usually ameliorated by regularization. Here, a method for sparse, multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least-squares, and uses forward selection via orthogonal least-squares to approximate a basis, generalizing a similar approach for binomial problems. Classification can be performed directly via minimum Mahalanobis distance in the canonical variates. sMKDA achieves state-of-the-art performance in terms of accuracy and sparseness on 11 benchmark datasets.
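The forward-selection mechanism used to build a sparse basis can be sketched as a greedy least-squares column picker. This is a simplified, matching-pursuit-style variant for a single target vector, assumed for illustration; the paper's procedure orthogonalizes candidate columns and handles a multinomial (multi-column) target.

```python
import numpy as np

def forward_select(K, y, n_basis):
    """Greedily pick columns of the kernel matrix K that most reduce the
    least-squares residual against target y."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(n_basis):
        scores = []
        for j in range(K.shape[1]):
            if j in selected:
                scores.append(-np.inf)  # never re-pick a column
                continue
            k = K[:, j]
            # squared correlation of the column with the current residual
            scores.append((k @ residual) ** 2 / (k @ k))
        j_best = int(np.argmax(scores))
        selected.append(j_best)
        # subtract the chosen column's least-squares contribution
        k = K[:, j_best]
        residual -= k * (k @ residual) / (k @ k)
    return selected

# Toy check: if one column of K equals the target exactly, it is picked first.
rng = np.random.default_rng(1)
y = np.array([1.0, 2.0, 3.0, 4.0])
K = np.column_stack([rng.standard_normal(4), rng.standard_normal(4), y])
```

Stopping after a small `n_basis` is what controls model complexity and yields the sparseness the abstract reports; the full method would then classify in the resulting canonical variates via minimum Mahalanobis distance.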