
    Face Recognition with One Sample Image per Class

    There are two main approaches to face recognition under varying lighting conditions. One is to represent images with features that are insensitive to illumination in the first place. The other is to construct a linear subspace for every class under the different lighting conditions. Both techniques have been applied with some success in face recognition, but they are hard to extend to recognition with varying facial expressions. Features insensitive to illumination are observed to be highly sensitive to expression variations, which makes face recognition under changes in both lighting and expression a difficult task. We propose a new method, called Affine Principal Components Analysis, in an attempt to solve both of these problems. The method extracts features to construct a subspace for face representation and warps this space to achieve better class separation. The proposed technique is evaluated on face databases with both variable lighting and facial expressions. We achieve more than 90% accuracy for face recognition using only one sample image per class.
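
    As a minimal, hedged sketch of the kind of subspace pipeline described above (plain eigenface-style PCA rather than the authors' Affine PCA and its warping step), the snippet below builds a subspace from training images, projects the single gallery image per class and each probe into it, and matches by nearest neighbour. All function names here are illustrative, not from the paper.

        import numpy as np

        def pca_subspace(images, num_components):
            """Build an eigenface-style PCA subspace from vectorised images.

            images: (n_samples, n_pixels) array.
            Returns (mean, basis) with basis of shape (n_pixels, num_components).
            """
            mean = images.mean(axis=0)
            # Rows of vt are principal directions in pixel space.
            _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
            return mean, vt[:num_components].T

        def project(x, mean, basis):
            """Project one vectorised image onto the learned subspace."""
            return (x - mean) @ basis

        def nearest_class(probe_feature, gallery_features, labels):
            """One sample per class: match the probe to the closest gallery feature."""
            dists = np.linalg.norm(gallery_features - probe_feature, axis=1)
            return labels[int(np.argmin(dists))]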

    Robust Image Recognition Based on a New Supervised Kernel Subspace Learning Method

    Date of doctoral thesis defense: 13 September 2019. Image recognition is a term for computer technologies that can recognize certain people, objects or other targeted subjects through the use of algorithms and machine learning concepts. Face recognition is one of the most popular techniques for determining the identity of a person. This study develops a new non-linear subspace learning method named “supervised kernel locality-based discriminant neighborhood embedding” (SKLDNE), which performs data classification by learning an optimal embedded subspace from a principal high-dimensional space. In this approach, not only is the nonlinear and complex variation of face images effectively represented using nonlinear kernel mapping, but local structure information of data from the same class and discriminant information from distinct classes are also simultaneously preserved to further improve the final classification performance. Moreover, to evaluate the robustness of the proposed method, it was compared with several well-known pattern recognition methods through comprehensive experiments on six publicly accessible datasets. Although this research focuses primarily on face recognition, two non-face databases are also used to investigate the behavior of the algorithm more thoroughly. Experimental results reveal that our method consistently outperforms its competitors across a wide range of dimensionalities on all the datasets. The SKLDNE method reaches a 100 percent recognition rate for Tn = 17 on the Sheffield, 9 on the Yale, 8 on the ORL, 7 on the Finger Vein and 11 on the Finger Knuckle datasets respectively, while the results are much lower for the other methods. This demonstrates the robustness and effectiveness of the proposed method.
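
    The SKLDNE implementation itself is not reproduced here; as a rough, hedged stand-in for the general idea of a nonlinear kernel mapping into an embedded subspace followed by a neighbourhood-based classifier, the sketch below chains scikit-learn's KernelPCA with a k-nearest-neighbour classifier. The data arrays are hypothetical placeholders, and the supervised locality and discriminant terms of the actual method are not modelled.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical data: vectorised faces with identity labels.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(60, 1024))
        y_train = np.repeat(np.arange(6), 10)
        X_test = rng.normal(size=(10, 1024))

        # Nonlinear kernel mapping into a low-dimensional embedded subspace,
        # followed by a neighbourhood-based classifier.
        model = make_pipeline(
            KernelPCA(n_components=30, kernel="rbf", gamma=1e-3),
            KNeighborsClassifier(n_neighbors=3),
        )
        model.fit(X_train, y_train)
        predictions = model.predict(X_test)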

    Robust Face Representation and Recognition Under Low Resolution and Difficult Lighting Conditions

    This dissertation focuses on different aspects of face image analysis for accurate face recognition under low resolution and poor lighting conditions. A novel resolution enhancement technique is proposed for enhancing a low-resolution face image into a high-resolution image for better visualization and improved feature extraction, especially in a video surveillance environment. This method performs kernel regression and component feature learning in a local neighborhood of the face images. It uses a directional Fourier phase feature component to adaptively learn the regression kernel based on local covariance to estimate the high-resolution image. For each patch in the neighborhood, four directional variances are estimated to adapt the interpolated pixels. A Modified Local Binary Pattern (MLBP) methodology for feature extraction is proposed to obtain robust face recognition under varying lighting conditions. The original LBP operator compares pixels in a local neighborhood with the center pixel and converts the resultant binary string to an 8-bit integer value, so it is less effective under difficult lighting conditions where the variation between pixels is negligible. The proposed MLBP uses a two-stage encoding procedure which is more robust in detecting this variation in a local patch. A novel dimensionality reduction technique called Marginality Preserving Embedding (MPE) is also proposed for enhancing the face recognition accuracy. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which project data in a global sense, MPE seeks a local structure in the manifold. This is similar to other subspace learning techniques, but the difference from other manifold learning methods is that MPE preserves marginality in local reconstruction. Hence it provides a better representation in low-dimensional space and achieves lower error rates in face recognition. Two new concepts for robust face recognition are also presented in this dissertation. In the first approach, a neural network is used for training the system, where input vectors are created by measuring the distance from each input to its class mean. In the second approach, half-face symmetry is used, recognizing that face images may contain various expressions such as open/closed eyes or an open/closed mouth; the top half and bottom half are classified separately and the two results are finally fused. Experiments on several standard face datasets show improved results for all the proposed methodologies. Research is progressing on developing a unified approach for the extraction of features suitable for accurate face recognition in long-range video sequences in complex environments.
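
    For reference, a small sketch of the original LBP operator that the dissertation contrasts with its proposed MLBP: each interior pixel's eight neighbours are thresholded against the centre pixel and the resulting bits are packed into an 8-bit code. The two-stage MLBP encoding itself is not reproduced here.

        import numpy as np

        def lbp_codes(gray):
            """Original 3x3 LBP: threshold the 8 neighbours of every interior
            pixel against the centre and pack the bits into an 8-bit code.
            gray: 2-D array; returns codes for the interior pixels only."""
            centre = gray[1:-1, 1:-1]
            h, w = gray.shape
            # Neighbour offsets, walked clockwise from the top-left corner.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros(centre.shape, dtype=np.int32)
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                codes += (neighbour >= centre).astype(np.int32) << bit
            return codes.astype(np.uint8)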

    Face recognition, a landmarks tale

    Face recognition is a technology that appeals to the imagination of many people. This is particularly reflected in the popularity of science-fiction films and forensic detective series such as CSI, CSI New York, CSI Miami, Bones and NCIS. Although these series tend to be set in the present, their application of face recognition should be considered science fiction. The successes are not, or at least not yet, realistic. This does not, however, mean that it does not, or will never, work. On the contrary, face recognition is used in places where the user does not need or want to cooperate, for example at entry to stadiums or stations, or for the detection of double entries in databases. Another important reason to use face recognition is that it can provide user-friendly biometric security. Face recognition works reliably and robustly when there is little variation in pose in the images used. To eliminate this variation, the faces are aligned to a reference, for which we use a set of landmarks. Landmarks are easily recognisable locations on the face, such as the eyes, nose and mouth. A probabilistic, maximum a posteriori approach to finding landmarks in a facial image is proposed, which provides a theoretical framework for template-based landmarkers. One such landmarker, based on a likelihood ratio detector, is discussed in detail. Special attention is paid to training and implementation issues, in order to minimize storage and processing requirements. In particular, a fast approximate singular value decomposition method is proposed to speed up the training process, and an implementation of the landmarker in the Fourier domain is presented that speeds up the search process. A subspace method for outlier correction and an alternative implementation of the landmarker are shown to improve its accuracy. The impact of carefully tuning the many parameters of the method is shown, and the method is extensively tested and compared with alternatives. Although state-of-the-art face recognition still has a giant leap to make before it is as good as on television, small steps are being made all the time.
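
    A hedged illustration of the template-matching idea behind such landmarkers: plain cross-correlation of a landmark template with the image, computed in the Fourier domain (as the thesis does for speed), returning the location of the strongest response. This is only a matched filter, not the likelihood-ratio detector or the outlier-correction stage described above.

        import numpy as np

        def find_landmark(image, template):
            """Locate one landmark as the peak of the cross-correlation between
            the image and a (zero-mean) template, computed via the FFT."""
            t = template - template.mean()  # ignore overall brightness
            # Cross-correlation = inverse FFT of the image spectrum times the
            # conjugate of the zero-padded template spectrum.
            response = np.fft.irfft2(
                np.fft.rfft2(image) * np.conj(np.fft.rfft2(t, s=image.shape)),
                s=image.shape,
            )
            row, col = np.unravel_index(np.argmax(response), response.shape)
            return int(row), int(col)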

    Bags of Affine Subspaces for Robust Object Tracking

    We propose an adaptive tracking algorithm where the object is modelled as a continuously updated bag of affine subspaces, with each subspace constructed from the object's appearance over several consecutive frames. In contrast to linear subspaces, affine subspaces explicitly model the origin of subspaces. Furthermore, instead of using a brittle point-to-subspace distance during the search for the object in a new frame, we propose to use a subspace-to-subspace distance by representing candidate image areas also as affine subspaces. Distances between subspaces are then obtained by exploiting the non-Euclidean geometry of Grassmann manifolds. Experiments on challenging videos (containing object occlusions, deformations, as well as variations in pose and illumination) indicate that the proposed method achieves higher tracking accuracy than several recent discriminative trackers.
    Comment: in International Conference on Digital Image Computing: Techniques and Applications, 201
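
    A rough sketch of the two ingredients named above, under the assumption that an affine subspace is stored as an origin (the mean appearance) plus an orthonormal basis of the main variations: one function builds such a subspace from a window of vectorised frames, the other measures a Grassmann-style distance between the linear parts via principal angles. The paper's full distance also takes the origins into account; that term is omitted here.

        import numpy as np

        def affine_subspace(frames, num_basis=3):
            """Model appearance over several frames as origin + orthonormal basis.
            frames: (n_frames, n_pixels) array, with n_frames >= num_basis."""
            origin = frames.mean(axis=0)
            _, _, vt = np.linalg.svd(frames - origin, full_matrices=False)
            return origin, vt[:num_basis].T  # basis: (n_pixels, num_basis)

        def grassmann_distance(basis_a, basis_b):
            """Distance between the spans of two orthonormal bases, from the
            principal angles given by the singular values of basis_a^T basis_b."""
            sigma = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
            angles = np.arccos(np.clip(sigma, -1.0, 1.0))
            return float(np.linalg.norm(angles))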

    Multi-View Face Recognition From Single RGBD Models of the Faces

    This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
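
    A minimal, hypothetical sketch of the weighted-voting step in point (3): per-view similarity scores are combined with per-view weights and the identity with the highest fused score is selected. The score matrix and weights below are made-up placeholders.

        import numpy as np

        def weighted_vote(per_view_scores, view_weights):
            """Fuse per-view evidence into a single identity decision.
            per_view_scores: (n_views, n_identities) similarity scores.
            view_weights:    (n_views,) confidence assigned to each viewpoint.
            Returns the index of the identity with the highest weighted score."""
            fused = view_weights @ per_view_scores  # (n_identities,)
            return int(np.argmax(fused))

        # Three probe views of one person, four enrolled identities.
        scores = np.array([[0.2, 0.7, 0.1, 0.0],
                           [0.3, 0.5, 0.1, 0.1],
                           [0.1, 0.2, 0.4, 0.3]])
        weights = np.array([0.5, 0.3, 0.2])  # e.g. favour near-frontal views
        print(weighted_vote(scores, weights))  # -> identity index 1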