Fast cross-validation of kernel Fisher discriminant classifiers
Given n training examples, training a Kernel Fisher Discriminant (KFD) classifier amounts to solving a linear system of dimension n. In cross-validating KFD, the training examples are split into two distinct subsets L times, where a subset of m examples is used for validation and the remaining (n - m) examples are used for training the classifier; L linear systems of dimension (n - m) must therefore be solved. We propose a novel method for cross-validating KFD in which, instead of solving L linear systems of dimension (n - m), we compute the inverse of one n × n matrix and solve L linear systems of dimension 2m, thereby reducing the complexity when L is large and/or m is small. For typical 10-fold and leave-one-out cross-validation, the proposed algorithm is approximately 4 and (4/9)n times as efficient, respectively, as the naive implementation. Simulations are provided to demonstrate the efficiency of the proposed algorithms.
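The direction of the speed-up claimed above can be sanity-checked with a crude cost model. The sketch below (an assumption of this note, not the paper's analysis) counts only the leading cubic term of each dense solve or inversion and ignores constant factors, so it reproduces the trend of the speed-up rather than the exact factors of 4 and (4/9)n:

```python
def naive_cv_cost(n, m, L):
    """Cost model for naive cross-validation: L dense solves of size n - m."""
    return L * (n - m) ** 3

def proposed_cv_cost(n, m, L):
    """Cost model for the proposed scheme: one n x n inversion plus
    L solves of size 2m (leading-order flop counts, constants ignored)."""
    return n ** 3 + L * (2 * m) ** 3

# 10-fold CV on n = 1000 examples: m = 100, L = 10.
print(naive_cv_cost(1000, 100, 10) / proposed_cv_cost(1000, 100, 10))
# Leave-one-out: m = 1, L = n; here the speed-up grows linearly with n.
print(naive_cv_cost(1000, 1, 1000) / proposed_cv_cost(1000, 1, 1000))
```

Under this simplified model the leave-one-out ratio is dominated by the single n³ inversion, which is why the advantage scales with n.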
Face recognition based on ordinal correlation approach
In this paper, we propose a new face recognition system based on the ordinal correlation principle. First, we explain the ordinal similarity measure for any two images and then propose a systematic approach to face recognition based on this ordinal measure. In addition, we design an algorithm for selecting a suitable classification threshold using information obtained from the training database. Finally, experiments on the Yale face database show that the proposed face recognition approach significantly outperforms the Eigenface and 2DPCA approaches, and that the threshold selection algorithm works effectively.
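The abstract does not define the ordinal similarity measure itself. As an illustrative stand-in only, the sketch below compares the rank ordering of coarse block-average intensities between two images, a Kendall-tau-style concordance; the function names and the grid parameter `k` are hypothetical, not taken from the paper:

```python
import numpy as np

def block_means(img, k=4):
    """Coarse k x k grid of block-average intensities (img: 2-D array)."""
    h, w = img.shape
    return np.array([[img[i*h//k:(i+1)*h//k, j*w//k:(j+1)*w//k].mean()
                      for j in range(k)] for i in range(k)]).ravel()

def ordinal_similarity(a, b, k=4):
    """Fraction of block pairs whose intensity ordering agrees in both images."""
    x, y = block_means(a, k), block_means(b, k)
    pairs = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))]
    agree = sum((x[i] > x[j]) == (y[i] > y[j]) for i, j in pairs)
    return agree / len(pairs)
```

An identical pair scores 1.0, and inverting an image with distinct block means reverses every ordering, scoring 0.0.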
A fast kernel dimension reduction algorithm with applications to face recognition
This paper presents a novel dimensionality reduction algorithm for kernel-based classification. In the feature space, the proposed algorithm maximizes the ratio of the squared between-class distance to the sum of the within-class variances of the training samples for a given reduced dimension. This algorithm has lower complexity than the recently reported kernel dimension reduction (KDR) for supervised learning. We conducted several simulations with large training datasets, which demonstrate that the proposed algorithm performs similarly to, or marginally better than, KDR while having the advantage of computational efficiency. Further, we applied the proposed dimension reduction algorithm to face recognition, in which the number of training samples is very small. The proposed face recognition approach based on the new algorithm outperforms the eigenface approach based on principal component analysis (PCA) when the training data is complete, that is, representative of the whole dataset.
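The criterion described above, in its kernel-free linear special case, is the classical Fisher discriminant. A minimal sketch of that linear analogue for two classes follows; this is not the paper's kernel algorithm, and the small ridge term is an added assumption for numerical stability:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Linear Fisher discriminant: the direction maximizing the ratio of
    squared between-class distance to summed within-class variance."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of per-class (unnormalized) covariances.
    S_w = np.cov(X1, rowvar=False) * (len(X1) - 1) \
        + np.cov(X2, rowvar=False) * (len(X2) - 1)
    # Optimal direction is S_w^{-1} (m1 - m2); ridge term avoids singularity.
    w = np.linalg.solve(S_w + 1e-8 * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)
```

For two clusters separated along one coordinate with isotropic spread, the recovered direction aligns with that coordinate axis.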
Efficient algorithms for subwindow search in object detection and localization
Recently, a simple yet powerful branch-and-bound method called Efficient Subwindow Search (ESS) was developed to speed up sliding-window search in object detection. A major drawback of ESS is that its computational complexity varies widely, from O(n^2) to O(n^4), for n × n matrices. Our experimental experience shows that ESS's performance is highly related to the optimal confidence levels, which indicate the probability of the object's presence. In particular, when the object is not in the image, the optimal subwindow scores low and ESS may take a large number of iterations to converge to the optimal solution, and so performs very slowly. Addressing this problem, we present two significantly faster methods based on the linear-time Kadane's algorithm for 1-D maximum subarray search. The first algorithm is a novel, computationally superior branch-and-bound method whose worst-case complexity is reduced to O(n^3). Experiments on the PASCAL VOC 2006 dataset demonstrate that this method is significantly and consistently faster (approximately 30 times faster on average) than the original ESS. Our second algorithm is an approximate algorithm based on alternating search, whose computational complexity is typically O(n^2). Experiments show that, on average, it is 30 times faster again than our first algorithm, or 900 times faster than ESS. It is thus well-suited for real-time object detection.
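Kadane's algorithm, and the textbook way it yields an O(n^3) maximum-sum rectangle search by collapsing row strips into column sums, can be sketched as follows. This is the standard construction consistent with the complexity quoted above, not the paper's branch-and-bound method itself:

```python
def kadane(arr):
    """Linear-time maximum-sum contiguous subarray (Kadane's algorithm)."""
    best = cur = arr[0]
    for x in arr[1:]:
        cur = max(x, cur + x)   # extend the running subarray or restart at x
        best = max(best, cur)
    return best

def max_subarray_2d(M):
    """O(n^3) maximum-sum rectangle: for each pair of top/bottom rows,
    collapse the strip into column sums and run 1-D Kadane on them."""
    n_rows, n_cols = len(M), len(M[0])
    best = M[0][0]
    for top in range(n_rows):
        col_sums = [0] * n_cols
        for bottom in range(top, n_rows):
            for j in range(n_cols):
                col_sums[j] += M[bottom][j]
            best = max(best, kadane(col_sums))
    return best
```

The outer double loop over row pairs contributes O(n^2) strips, and each strip costs O(n), giving the cubic worst case.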
Efficient cross-validation of the complete two stages in KFD classifier formulation
This paper presents an efficient evaluation algorithm for cross-validating the two-stage approach to KFD classifier formulation. The proposed algorithm is of the same complexity level as the existing indirect efficient cross-validation methods, but it is more reliable since it is direct and constitutes exact cross-validation for the KFD classifier formulation. Simulations demonstrate that the proposed algorithm is almost as fast as the existing fast indirect evaluation algorithm, and that the two-stage cross-validation selects better models on most of the thirteen benchmark datasets.
Face recognition via the overlapping energy histogram
In this paper we investigate the face recognition problem via the overlapping energy histogram of the DCT coefficients. In particular, we investigate some important issues relating to recognition performance, such as the selection of the threshold and of the number of bins. These selection methods utilise information obtained from the training dataset. Experiments are conducted on the Yale face database, and the results indicate that the proposed parameter selection methods perform well in selecting the threshold and the number of bins. Furthermore, we show that the proposed overlapping energy histogram approach significantly outperforms the Eigenfaces, 2DPCA and energy histogram approaches.
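As a rough illustration of the underlying representation, the sketch below computes a plain (non-overlapping) energy histogram of the 2-D DCT coefficients; the overlapping-bin construction and the threshold selection that are the paper's contribution are not reproduced here, and the bin count is an arbitrary assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[0] *= np.sqrt(0.5)  # DC row rescaled so M is orthonormal
    return M

def energy_histogram(img, n_bins=8):
    """Histogram of squared 2-D DCT coefficients (the 'energy')."""
    M = dct_matrix(img.shape[0])
    N = dct_matrix(img.shape[1])
    coeffs = M @ img @ N.T   # separable 2-D DCT
    energy = coeffs ** 2
    hist, _ = np.histogram(energy, bins=n_bins)
    return hist
```

Because the DCT matrices are orthonormal, the total energy in the histogram equals the squared Frobenius norm of the image (Parseval).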
Face recognition via incremental 2DPCA
Recently, the Two-Dimensional Principal Component Analysis (2DPCA) model was proposed and shown to be an efficient approach to face recognition. In this paper, we investigate incremental 2DPCA and develop a new constructive method for incrementally adding observations to the existing eigenspace model. An explicit formula for incremental learning is derived. To illustrate the effectiveness of the proposed approach, we performed some typical experiments and show that we need only keep the eigenspace of previous images and can discard the raw images in the face recognition process. Furthermore, the proposed incremental approach is faster than the batch method (2DPCA), while the recognition rate and reconstruction accuracy are as good as those obtained by the batch method.
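One standard way to maintain the 2DPCA eigenspace incrementally is to keep a running mean image and image scatter matrix, updated Welford-style as each observation arrives; the sketch below follows that scheme and is not necessarily the paper's explicit formula:

```python
import numpy as np

class Incremental2DPCA:
    """Running mean image and image scatter G = sum (A_i - mean)^T (A_i - mean);
    the leading eigenvectors of G are the 2DPCA projection axes."""

    def __init__(self, h, w):
        self.n = 0
        self.mean = np.zeros((h, w))
        self.G = np.zeros((w, w))

    def add(self, A):
        self.n += 1
        delta = A - self.mean                 # deviation from the old mean
        self.mean += delta / self.n           # incremental mean update
        self.G += delta.T @ (A - self.mean)   # exact Welford-style scatter update

    def axes(self, d):
        """d leading eigenvectors of the scatter matrix (projection axes)."""
        vals, vecs = np.linalg.eigh(self.G)
        return vecs[:, ::-1][:, :d]
```

The update touches only the mean and the w × w scatter matrix, so raw images can indeed be discarded after each `add`, in the spirit of the abstract's claim.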