136 research outputs found

    Face recognition by using discriminative common vectors

    In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set.

    Face Recognition Based on Videos by Using Convex Hulls

    A wide range of face appearance variations can be modeled effectively with set-based recognition approaches, but the computational complexity of current methods depends strongly on the set and class sizes. This paper introduces new video-based classification methods designed to reduce the required disk space of data samples and to speed up the testing process in large-scale face recognition systems. In the proposed method, image sets collected from videos are approximated with kernelized convex hulls, and we show that it is sufficient to use only the samples that participate in shaping the image set boundaries in this setting. The kernelized Support Vector Data Description (SVDD) is used to extract those important samples that form the image set boundaries. Moreover, we show that these kernelized hypersphere models can also be used to approximate image sets for classification purposes. We then propose a binary hierarchical decision tree approach to improve the speed of the classification system even further. Lastly, we introduce a new video database that includes 285 people with 8 videos per person, since the most popular video data sets used for set-based recognition methods include either few people or a small number of videos per person. Experimental results on databases of varying sizes show that the proposed methods greatly improve the testing times of the classification system (we obtained speed-ups of up to a factor of 20) without a significant drop in accuracy.
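    The boundary-sample selection step can be sketched with an off-the-shelf one-class SVM, which with an RBF kernel is closely related to SVDD: its support vectors play the role of the boundary samples. This is only an illustrative sketch; the data and the `nu` value are made up, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    def boundary_samples(image_set, nu=0.2):
        """Keep only the samples that shape the image set boundary.

        A kernelized one-class SVM (closely related to SVDD with an RBF
        kernel) is fit to the set; its support vectors approximate the
        boundary samples. Parameters are illustrative, not the paper's.
        """
        model = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(image_set)
        return image_set[model.support_]

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 2))  # toy stand-in for video-frame features
    kept = boundary_samples(frames)
    print(f"kept {len(kept)} of {len(frames)} samples")
    ```

    Replacing each image set with only its boundary samples is what reduces disk space and testing time in the pipeline described above.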

    A Supervised Clustering Algorithm for the Initialization of RBF Neural Network Classifiers

    In this paper, we propose a new supervised clustering algorithm, coined Homogeneous Clustering (HC), to find the number and initial locations of the hidden units in Radial Basis Function (RBF) neural network classifiers. In contrast to the traditional clustering algorithms introduced for this goal, the proposed algorithm is a supervised procedure in which the number and initial locations of the hidden units are determined by splitting clusters that overlap across classes. The basic idea of the proposed approach is to create class-specific homogeneous clusters in which the corresponding samples are closer to their own mean than to the means of rival clusters belonging to other classes. We tested the proposed clustering algorithm along with the RBF network classifier on the Graz02 object database and the ORL face database. The experimental results show that the RBF network classifier performs better when initialized with the proposed HC algorithm than with the unsupervised k-means algorithm. Moreover, our recognition results exceed the best published results on the Graz02 database and are comparable to the best results on the ORL face database, indicating that the proposed clustering algorithm initializes the hidden unit parameters successfully.
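    As a rough illustration of class-aware center initialization (a simplified stand-in for HC, not the paper's cluster-splitting procedure), one can cluster each class separately so that every RBF center is class-pure:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def rbf_centers_per_class(X, y, clusters_per_class=2, seed=0):
        """Pick RBF hidden-unit centers with class-aware clustering.

        Unlike a plain k-means over all data, each class is clustered
        on its own, so no center mixes samples from different classes.
        This only illustrates the motivation for supervised
        initialization; HC itself splits overlapping clusters instead.
        """
        centers = []
        for label in np.unique(y):
            km = KMeans(n_clusters=clusters_per_class, n_init=10,
                        random_state=seed).fit(X[y == label])
            centers.append(km.cluster_centers_)
        return np.vstack(centers)

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.repeat([0, 1], 50)
    centers = rbf_centers_per_class(X, y)
    print(centers.shape)  # (4, 2): two class-pure centers per class
    ```

    The returned centers would then seed the RBF hidden units, whose output-layer weights are fit afterwards.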

    Semi-supervised dimensionality reduction using pairwise equivalence constraints

    To deal with the problem of insufficient labeled data, side information -- given in the form of pairwise equivalence constraints between points -- is commonly used to discover groups within data. However, existing methods using side information typically fail in high-dimensional spaces. In this paper, we address the problem of learning from side information for high-dimensional data. To this end, we propose a semi-supervised dimensionality reduction scheme that incorporates pairwise equivalence constraints to find a better embedding space, which improves the performance of subsequent clustering and classification phases. Our method builds on the assumption that points in a sufficiently small neighborhood tend to have the same label. Equivalence constraints are employed to modify the neighborhoods and to increase the separability of different classes. Experimental results on high-dimensional image data sets show that integrating side information into the dimensionality reduction improves clustering and classification performance.
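    The general idea -- modifying neighborhoods with equivalence constraints before embedding -- can be sketched with a toy Laplacian-eigenmaps-style example. This illustrates the principle only, not the paper's algorithm; the graph construction and constraint weights are made-up choices.

    ```python
    import numpy as np

    def constrained_embedding(X, must_link, cannot_link, dim=2, k=5):
        """Toy semi-supervised embedding with pairwise constraints.

        A kNN similarity graph is modified so must-link pairs become
        maximally similar and cannot-link pairs are disconnected, then
        a Laplacian-eigenmaps-style embedding is computed.
        """
        n = len(X)
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        W = np.zeros((n, n))
        for i in range(n):
            for j in np.argsort(d[i])[1:k + 1]:   # k nearest neighbors
                W[i, j] = W[j, i] = np.exp(-d[i, j] ** 2)
        for i, j in must_link:
            W[i, j] = W[j, i] = 1.0   # pull equivalent points together
        for i, j in cannot_link:
            W[i, j] = W[j, i] = 0.0   # disconnect non-equivalent points
        L = np.diag(W.sum(1)) - W     # unnormalized graph Laplacian
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1:dim + 1]     # skip the trivial constant eigenvector

    rng = np.random.default_rng(2)
    X = rng.normal(size=(30, 10))
    emb = constrained_embedding(X, must_link=[(0, 1)], cannot_link=[(0, 2)])
    print(emb.shape)  # (30, 2)
    ```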

    Improving Sparse Representation-Based Classification Using Local Principal Component Analysis

    Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples, with classification to the most-contributing class. Because it assumes test samples can be written as linear combinations of their same-class training samples, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and includes training samples and their corresponding tangent basis vectors. We use a synthetic data set and three face databases to demonstrate that this method can achieve higher classification accuracy than SRC in cases of sparse sampling, nonlinear class manifolds, and stringent dimension reduction. (Published in "Computational Intelligence for Pattern Recognition", editors Shyi-Ming Chen and Witold Pedrycz; the original publication is available at http://www.springerlink.co)
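    The tangent-augmentation idea can be sketched as follows. For each training sample, local PCA on its nearest same-class neighbors yields approximate tangent directions of the class manifold, which are appended as extra dictionary atoms. The neighborhood size and the number of tangent directions are illustrative choices, not the paper's exact local-dictionary construction.

    ```python
    import numpy as np

    def tangent_augmented_dictionary(X_class, n_tangent=1, k=3):
        """Augment class samples with local tangent directions.

        For each sample, the top principal directions of its k nearest
        same-class neighbors approximate a basis of the tangent
        hyperplane of the class manifold at that point; these are added
        as extra dictionary atoms. A toy sketch of the idea only.
        """
        atoms = [x for x in X_class]
        for x in X_class:
            d = np.linalg.norm(X_class - x, axis=1)
            nbrs = X_class[np.argsort(d)[1:k + 1]]      # k nearest neighbors
            centered = nbrs - nbrs.mean(0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            atoms.extend(vt[:n_tangent])  # local PCA directions ~ tangent basis
        return np.array(atoms)

    rng = np.random.default_rng(3)
    X_class = rng.normal(size=(5, 8))
    D = tangent_augmented_dictionary(X_class)
    print(D.shape)  # (10, 8): 5 samples + 5 tangent vectors
    ```

    The enlarged dictionary D would then replace the raw training samples in the sparse-coding step of SRC.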

    Generating One Biometric Feature from Another: Faces from Fingerprints

    This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships between fingerprints and faces and to model those relationships. The proposed system is the first study that generates all parts of the face, including eyebrows, eyes, nose, mouth, ears and face border, from only fingerprints. It is also distinct from similar studies recently presented in the literature, offering several superior features. The parameter settings of the system were obtained with the help of the Taguchi experimental design technique. The performance and accuracy of the system were evaluated with the 10-fold cross validation technique, using qualitative evaluation metrics in addition to expanded quantitative evaluation metrics. The results are presented on the basis of the combination of these objective and subjective metrics, illustrating the qualitative properties of the proposed methods as well as providing a quantitative evaluation of their performance. Experimental results show that one biometric feature can be determined from another, and they indicate once more that there is a strong relationship between fingerprints and faces.

    Cloud-based scalable object detection and classification in video streams

    Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record images and video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for useful content is time consuming and error prone, so automated analysis is required to extract useful information and metadata. Existing video analysis systems lack automation and scalability, and operate in a supervised learning setting that requires substantial amounts of labelled data and training time. We present a cloud-based, automated video analysis system to process large numbers of video streams, where the underlying infrastructure is able to scale based on the number and size of the streams being considered. The system automates the video analysis process and reduces manual intervention. An operator using this system only specifies which object of interest is to be located in the video streams. Video streams are then automatically fetched from cloud storage and analysed in an unsupervised way. The proposed system was able to locate and classify an object of interest from one month of recorded video streams, comprising 175 GB, on a 15-node cloud in 6.52 h. A GPU-powered infrastructure took 3 h to accomplish the same task. Occupancy of GPU resources in the cloud is optimized, and data transfer between CPU and GPU is minimized, to achieve high performance. The scalability of the system is demonstrated, along with a classification accuracy of 95%.

    Digital hyperplane fitting

    This paper addresses the hyperplane fitting problem for discrete points in any dimension (i.e. in Z^d). For that purpose, we consider a digital model of a hyperplane, namely the digital hyperplane, and present a combinatorial approach to find the optimal solution of the fitting problem. This method consists in computing all possible digital hyperplanes from a set S of n points; an exhaustive search then finds the optimal hyperplane that best fits S. The method has, however, a high complexity of O(n^d), and thus cannot be applied to big datasets. To overcome this limitation, we propose another method relying on the Delaunay triangulation of S. By generating and verifying not all possible digital hyperplanes but only those arising from the elements of the triangulation, this leads to a lower complexity of O(n^(⌈d/2⌉+1)). Experiments in 2D, 3D and 4D illustrate the efficiency of the proposed method.
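    In 2D (d = 2), the exhaustive O(n^d) strategy amounts to generating a candidate digital line from every pair of points and counting how many points each candidate covers. A minimal sketch, using the common naive digital-line band 0 <= ax + by + c < max(|a|, |b|) as the inlier test (the paper's exact digital-hyperplane model may differ):

    ```python
    import numpy as np
    from itertools import combinations

    def fit_digital_line(points):
        """Exhaustive 2D digital line fitting (the O(n^d) idea, d = 2).

        Each pair of points generates a candidate line ax + by + c = 0;
        the score counts the points inside the digital band
        0 <= ax + by + c < max(|a|, |b|). The best-scoring candidate
        is returned together with its inlier count.
        """
        best, best_count = None, -1
        for p, q in combinations(points, 2):
            a, b = q[1] - p[1], p[0] - q[0]   # normal of the line through p, q
            if a == 0 and b == 0:
                continue
            c = -(a * p[0] + b * p[1])
            w = max(abs(a), abs(b))           # digital thickness
            vals = points @ np.array([a, b]) + c
            count = int(np.sum((vals >= 0) & (vals < w)))
            if count > best_count:
                best, best_count = (a, b, c), count
        return best, best_count

    pts = np.array([[0, 0], [1, 1], [2, 2], [3, 3], [5, 0]])
    line, score = fit_digital_line(pts)
    print(score)  # 4: the diagonal band covers all points except (5, 0)
    ```

    The Delaunay-based variant would restrict the candidate set to lines supported by triangulation edges instead of all O(n^2) pairs, which is what lowers the overall complexity.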