Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis
Fisher's linear discriminant analysis (FLDA) is an important dimension
reduction method in statistical pattern recognition. It has been shown that
FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian
assumption. However, this classical result has two major limitations: 1) it
holds only for a fixed dimensionality, and thus does not apply when the
dimensionality and the training sample size are proportionally large; 2) it
does not provide a quantitative description of how the generalization ability
of FLDA is affected by the dimensionality and the sample size. In this paper,
we present an asymptotic generalization analysis of FLDA based on random
matrix theory, in a setting where both the dimensionality and the sample size
increase with their ratio converging to a constant. The obtained lower bound
on the generalization discrimination power overcomes both limitations of the
classical result, i.e., it is applicable when the dimensionality and the
sample size are proportionally large, and it provides a quantitative
description of the generalization ability of FLDA in terms of the
dimensionality-to-sample-size ratio and the population discrimination power.
Moreover, the discrimination power bound also leads to an upper bound on the
generalization error of binary classification with FLDA.
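As context for the abstract above, here is a minimal two-class FLDA sketch under the homoscedastic Gaussian assumption it refers to; the class means, sizes, and dimensions are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two homoscedastic Gaussian classes (the setting under which FLDA is
# asymptotically Bayes optimal). Sizes and means are illustrative only.
n, d = 200, 5
mu0, mu1 = np.zeros(d), np.full(d, 1.0)
X0 = rng.normal(size=(n, d)) + mu0
X1 = rng.normal(size=(n, d)) + mu1

# Fisher direction: w = S_w^{-1} (m1 - m0), using the pooled
# within-class scatter (covariance up to a constant factor).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# Classify by thresholding the 1-D projection at the midpoint of the
# projected class means.
thresh = 0.5 * (m0 + m1) @ w
err0 = (X0 @ w > thresh).mean()   # fraction of class-0 points misclassified
err1 = (X1 @ w <= thresh).mean()  # fraction of class-1 points misclassified
err = 0.5 * (err0 + err1)
print(round(err, 3))
```

The asymptotic analysis in the paper concerns exactly how this error behaves when `d` and `n` grow at a fixed ratio rather than `d` staying fixed.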
Discriminative Hessian Eigenmaps for face recognition
Dimension reduction algorithms have attracted much attention in face recognition because they can select a subset of effective and efficient discriminative features from face images. Most dimension reduction algorithms cannot model both the intra-class geometry and the inter-class discrimination well simultaneously. In this paper, we introduce Discriminative Hessian Eigenmaps (DHE), a novel dimension reduction algorithm that addresses this problem. DHE encodes the geometric and discriminative information in a local patch by improved Hessian Eigenmaps and margin maximization, respectively. Empirical studies on public face databases demonstrate that DHE is superior to popular dimension reduction algorithms, e.g., FLDA, LPP, MFA, and DLA. ©2010 IEEE. The 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, TX, 14-19 March 2010. In IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 2010, p. 5586-558
Improving acoustic vehicle classification by information fusion
We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production, and a number of harmonic components are extracted to characterize factors related to the vehicle's resonance. The second set of features is extracted by a computationally efficient discriminatory analysis, in which a group of key frequency components is selected by mutual information, accounting for sound production from the vehicle's exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach effectively increases classification accuracy compared with that achieved using each individual feature set alone, and that the Bayesian decision-level fusion outperforms a feature-level fusion approach.
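To make the decision-level idea concrete, here is a generic product-rule sketch of fusing two classifiers' posteriors; it illustrates the general Bayesian fusion principle, not the paper's modified algorithm, and the posterior values are made up:

```python
import numpy as np

# Decision-level fusion: combine the class posteriors of two classifiers
# trained on different feature sets. Assuming the feature sets are
# conditionally independent given the class,
#   p(c | x_a, x_b)  ∝  p(c | x_a) * p(c | x_b) / p(c).
def fuse_posteriors(p_a: np.ndarray, p_b: np.ndarray,
                    prior: np.ndarray) -> np.ndarray:
    fused = p_a * p_b / prior
    return fused / fused.sum()

# Classifier A (harmonic features) and B (key-frequency features)
# mildly disagree; fusion favors the class both consider plausible.
prior = np.array([0.5, 0.5])
p_a = np.array([0.7, 0.3])   # posterior from classifier A
p_b = np.array([0.6, 0.4])   # posterior from classifier B
fused = fuse_posteriors(p_a, p_b, prior)
print(fused.argmax())  # 0
```

A feature-level alternative would concatenate the two feature vectors before a single classifier; the abstract reports the decision-level route working better here.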
Worst-Case Linear Discriminant Analysis as Scalable Semidefinite Feasibility Problems
In this paper, we propose an efficient semidefinite programming (SDP)
approach to worst-case linear discriminant analysis (WLDA). Compared with the
traditional LDA, WLDA considers the dimensionality reduction problem from the
worst-case viewpoint, which is in general more robust for classification.
However, the original problem of WLDA is non-convex and difficult to optimize.
In this paper, we reformulate the optimization problem of WLDA into a sequence
of semidefinite feasibility problems. To efficiently solve the semidefinite
feasibility problems, we design a new scalable optimization method with
quasi-Newton methods and eigen-decomposition being the core components. The
proposed method is orders of magnitude faster than standard interior-point
based SDP solvers.
Experiments on a variety of classification problems demonstrate that our
approach achieves better performance than standard LDA. Our method is also much
faster and more scalable than WLDA based on standard interior-point SDP solvers.
For an SDP of this form, the computational complexity, as a function of the
number of constraints and the matrix size, is thereby reduced by orders of
magnitude in our case. Comment: 14 pages
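The reformulation above follows a common pattern: an optimization problem is replaced by a sequence of feasibility problems searched by bisection. The sketch below shows that generic pattern with a toy feasibility oracle, not the paper's semidefinite feasibility subproblem or its quasi-Newton solver:

```python
# Bisection over feasibility problems: maximizing an objective value t is
# replaced by repeatedly asking "is there a feasible point achieving at
# least t?". The oracle here is a stand-in for the SDP feasibility check.
def bisect_max(feasible, lo: float, hi: float, tol: float = 1e-6) -> float:
    """Largest t in [lo, hi] for which feasible(t) holds, assuming
    feasibility is monotone (holds for all t below some threshold)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # mid is achievable; search higher
        else:
            hi = mid   # mid is infeasible; search lower
    return lo

# Toy oracle: a value t is achievable iff t <= sqrt(2).
best = bisect_max(lambda t: t * t <= 2.0, 0.0, 2.0)
print(round(best, 4))  # 1.4142
```

Each feasibility call in the paper's setting is a semidefinite feasibility problem, which is what their quasi-Newton/eigen-decomposition routine is designed to solve cheaply.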
Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace Classifiers
We present fundamental limits on the reliable classification of linear and
affine subspaces from noisy, linear features. Drawing an analogy between
discrimination among subspaces and communication over vector wireless channels,
we propose two Shannon-inspired measures to characterize asymptotic classifier
performance. First, we define the classification capacity, which characterizes
necessary and sufficient conditions for the misclassification probability to
vanish as the signal dimension, the number of features, and the number of
subspaces to be discerned all approach infinity. Second, we define the
diversity-discrimination tradeoff which, by analogy with the
diversity-multiplexing tradeoff of fading vector channels, characterizes
relationships between the number of discernible subspaces and the
misclassification probability as the noise power approaches zero. We derive
upper and lower bounds on these measures which are tight in many regimes.
Numerical results, including a face recognition application, validate the
results in practice. Comment: 19 pages, 4 figures. Revised submission to IEEE
Transactions on Information Theory
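A standard concrete instance of the subspace-classification setting studied above is nearest-subspace classification by projection residual. This sketch is a generic illustration of that decision rule, with illustrative dimensions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nearest-subspace classification from noisy linear features: assign a
# measurement to the candidate subspace with the smallest projection
# residual ||y - U U^T y||, where U has orthonormal columns.
d, k, n_sub = 8, 2, 3  # ambient dim, subspace dim, number of subspaces

def orth_basis(A: np.ndarray) -> np.ndarray:
    # Orthonormal basis for the column span of A (reduced QR).
    q, _ = np.linalg.qr(A)
    return q

subspaces = [orth_basis(rng.normal(size=(d, k))) for _ in range(n_sub)]

def classify(y: np.ndarray, subspaces) -> int:
    """Index of the subspace minimizing the projection residual of y."""
    resid = [np.linalg.norm(y - U @ (U.T @ y)) for U in subspaces]
    return int(np.argmin(resid))

# A mildly noisy point drawn from subspace 1.
y = subspaces[1] @ rng.normal(size=k) + 0.05 * rng.normal(size=d)
print(classify(y, subspaces))
```

The classification capacity and diversity-discrimination tradeoff in the paper characterize when rules of this kind succeed as the dimensions, feature count, and number of subspaces scale, and as the noise power vanishes.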
Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction
It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. A lasso- or elastic-net-penalized
manifold-learning-based dimensionality reduction is not directly a
lasso-penalized least squares problem, and thus least angle regression (LARS)
(Efron et al. \cite{LARS}), one of the most popular algorithms in sparse
learning, cannot be applied. Therefore, most current approaches take indirect
routes or rely on strict settings, which can be inconvenient in applications. In
this paper, we propose the manifold elastic net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show that MEN is equivalent to a lasso
penalized least squares problem, and thus LARS is adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the projection matrix of MEN improves the parsimony in
computation, 4) the elastic net penalty reduces the over-fitting problem, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various popular
datasets suggests that MEN is superior to top-level dimensionality reduction
algorithms. Comment: 33 pages, 12 figures
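The equivalence MEN exploits rests on a standard identity: an elastic-net-penalized least squares objective equals a lasso objective on augmented data, so LARS-style lasso solvers apply. This sketch checks that identity numerically on random data; it shows the generic augmentation trick, not MEN's full sequence of transformations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Identity: for any coefficient vector b,
#   ||y - X b||^2 + lam2 ||b||^2 + lam1 ||b||_1
# equals the lasso objective on the augmented data
#   X_aug = [X; sqrt(lam2) I],  y_aug = [y; 0].
n, d = 30, 6
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
b = rng.normal(size=d)          # arbitrary test point
lam1, lam2 = 0.3, 0.7

en_obj = (np.sum((y - X @ b) ** 2) + lam2 * np.sum(b ** 2)
          + lam1 * np.sum(np.abs(b)))

X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
y_aug = np.concatenate([y, np.zeros(d)])
lasso_obj = np.sum((y_aug - X_aug @ b) ** 2) + lam1 * np.sum(np.abs(b))

print(np.isclose(en_obj, lasso_obj))  # True
```

Because the two objectives agree for every `b`, their minimizers coincide, which is why a LARS solver for the augmented lasso problem also solves the elastic-net problem.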
The new method of Extraction and Analysis of Non-linear Features for face recognition
In this paper, we introduce the new method of Extraction and Analysis of Non-linear Features (EANF) for face recognition, based on the extraction and analysis of nonlinear features, i.e., Locality Preserving Analysis. The proposed EANF algorithm removes disadvantages of previous algorithms, such as the size of the search space and the varying sizes and qualities of images caused by different imaging conditions, and it removes the disadvantages of ELPDA methods (local neighborhood separator analysis) by using the scatter matrix in the form of a between-class scatter, which identifies and displays, for each sample, the K nearest neighbors from the other class. In addition, another advantage of EANF is its high speed in face recognition, achieved by shrinking the size of the feature matrix with NLPCA (Non-Linear Locality Preserving Analysis). Finally, test results on the FERET dataset show the impact of the proposed method on face recognition. DOI: http://dx.doi.org/10.11591/ijece.v2i6.177