15 research outputs found

    Human Face Identification by a Markov Random Field GroupWise Registration Technique

    Face recognition is widely used in applications such as banking, airport security, and ATM authentication. Various methods have been applied to the face recognition problem. In this paper I propose a new method, Markov Random Field GroupWise registration, in which the mean of all faces in the database is computed first and then compared with the test image. To implement these modules, four open-source databases are required: FERET, CAS-PEAL-R1, FRGC ver. 2.0, and LFW. The proposed approach is expected to achieve good results compared with previous methods. DOI: 10.17762/ijritcc2321-8169.15052
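    A minimal NumPy sketch of the mean-face comparison idea described in the abstract; it is illustrative only, omits the Markov Random Field groupwise registration step, and the image shapes and matching rule are assumptions:

```python
import numpy as np

def mean_face(gallery):
    """Average a stack of aligned grayscale face images of shape (N, H, W)."""
    return gallery.mean(axis=0)

def match_score(mean_img, test_img):
    """Lower score = closer to the gallery mean (Euclidean distance)."""
    return np.linalg.norm(mean_img - test_img)

# Toy usage with random arrays standing in for aligned face images.
gallery = np.random.rand(50, 64, 64)   # 50 database faces, 64x64 pixels (assumed size)
test = np.random.rand(64, 64)
print(match_score(mean_face(gallery), test))
```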

    Face Recognition via Ensemble Sift Matching of Uncorrelated Hyperspectral Bands and Spectral PCTS

    Face recognition is not a new area of study, but face recognition using hyperspectral images is a comparatively new concept that is still in its infancy. Although the conventional approach of face recognition using Red-Green-Blue (RGB) or grayscale images has advanced over the last twenty years, it still performs poorly when there are variations in lighting, pose, or the temporal aspect of the subjects. A hyperspectral representation of an image captures more of the information available in a scene than an RGB image, so it is worthwhile to study the performance of face recognition using a hyperspectral representation of the subjects' faces. We studied the results of a variety of methods that perform face recognition using the Scale-Invariant Feature Transform (SIFT) algorithm as a matching function on uncorrelated spectral bands, on a principal-component representation of the spectral bands, and on the ensemble decision of the two. We conclude that there is no dominating method within the scope of our research; however, we do obtain three methods with leading performance, despite some trade-off between performance at lower ranks and performance at higher ranks, that outperform the results of a previous study which considered only a SIFT application on a single hyperspectral band and which also performs very well under temporal variation.
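    A hedged OpenCV sketch of SIFT matching between two spectral-band images of the same scene, in the spirit of the matching function described above; the ratio-test threshold and the match-count score are assumptions, and SIFT support in opencv-python is required:

```python
import cv2

def sift_match_score(band_a, band_b, ratio=0.75):
    """Count ratio-test SIFT matches between two 8-bit grayscale band images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(band_a, None)
    kp_b, des_b = sift.detectAndCompute(band_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# band_a and band_b would be single hyperspectral bands loaded as 8-bit grayscale,
# e.g. cv2.imread("band_42.png", cv2.IMREAD_GRAYSCALE).
```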

    A Software Engineered Voice-Enabled Job Recruitment Portal System

    The inability of job seekers to get timely information about the status of applications submitted via a conventional job portal system, which usually depends on access to the Internet, has caused many job applicants to lose their placements. Worse still, the erratic services offered by Internet Service Providers and the poor infrastructure in most developing countries have greatly hindered the expected benefits of Internet usage. These have led to cases of online vacancy notifications going unattended simply because a job seeker is neither aware of them nor has access to the Internet. With the increasing uptake of mobile phones, a self-service job vacancy notification with audio functionality, or an automated job vacancy notification sent to all qualified job seekers through mobile phones, provides a solution to these challenges. In this paper, we present a Voice-enabled Job Recruitment Portal (JRP) System. The system is accessed through two interfaces: the voice user interface (VUI) and the web interface. The VUI was developed using VoiceXML and the web interface using PHP, and both interfaces were integrated with Apache and MySQL as the middleware and back-end components respectively. The JRP proposed in this paper takes the hassle of job hunting away from job seekers, provides job status information to them in real time, and offers the employer benefits such as cost effectiveness, speed, accuracy, ease of documentation, convenience, and better logistics in seeking the right candidate for a job.

    State of the Art in Face Recognition

    Notwithstanding the tremendous effort to solve the face recognition problem, it is not yet possible to design a face recognition system that approaches human performance. New computer vision and pattern recognition approaches need to be investigated, and new knowledge and perspectives from fields such as psychology and neuroscience must be incorporated into current face recognition research to design a robust face recognition system. Indeed, many more efforts are required to arrive at a human-like face recognition system. This book is an effort to reduce the gap between the current state of face recognition research and the future state.

    Nearest Neighbor Discriminant Analysis Based Face Recognition Using Ensembled Gabor Features

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Informatics, 2009. In recent decades, Gabor-feature-based face representation has produced very promising results in face recognition, as it is robust to variations caused by illumination and facial expression changes. The properties that make Gabor features effective are that they compute the local structure corresponding to a selected spatial frequency (scale), spatial localization, and orientation, and that they require no manual annotation. The contribution of this thesis is an Ensemble-based Gabor Nearest Neighbor Classifier (EGNNC), proposed as an extension of the Gabor Nearest Neighbor Classifier (GNNC), which extracts important discriminant features by combining the power of Gabor filters with Nearest Neighbor Discriminant Analysis (NNDA). EGNNC is an ensemble classifier that combines multiple NNDA-based component classifiers, each designed on a different segment of the reduced Gabor feature. Whereas the reduced dimension of the entire Gabor feature is otherwise extracted by a single component NNDA classifier, EGNNC makes better use of the discriminability contained in the reduced Gabor features by avoiding the small-sample-size (3S) problem while losing a minimum of discriminative information. The accuracy of EGNNC is shown by a comparative performance study. On a 200-class subset of the FERET database covering illumination and expression variations, EGNNC achieved a 100% recognition rate with 65 features, outperforming its ancestor GNNC, which achieved 98%, as well as standard methods such as GFC (Gabor Fisher Classifier) and GPC. On the YALE database, EGNNC outperformed GNNC for all (k, alpha) pairs and reached 96% accuracy with 14 feature dimensions, using parameters step size = 5, k = 5, alpha = 3.
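    A minimal OpenCV/NumPy sketch of the kind of Gabor feature extraction such a pipeline starts from; the filter-bank parameters and downsampling step are assumptions, and the NNDA and ensemble stages are not shown:

```python
import cv2
import numpy as np

def gabor_features(img, wavelengths=(4, 8, 16), orientations=8, step=5):
    """Filter a grayscale face image with a small Gabor bank and
    downsample each response magnitude into one long feature vector."""
    feats = []
    for lambd in wavelengths:                   # wavelength controls the scale
        for k in range(orientations):
            theta = k * np.pi / orientations    # filter orientation
            # ksize, sigma, theta, lambd, gamma, psi
            kernel = cv2.getGaborKernel((31, 31), lambd / 2.0, theta, lambd, 0.5, 0)
            response = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kernel)
            feats.append(np.abs(response)[::step, ::step].ravel())
    return np.concatenate(feats)

# Usage: img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE); v = gabor_features(img)
```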

    Automatic face recognition using stereo images

    Face recognition is an important pattern recognition problem in the study of both natural and artificial learning. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, from human-computer interaction to access control and from law enforcement to crowd surveillance. In typical optical-image-based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured under different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical-image-based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment. A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated off-the-shelf digital cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2.5D depth maps. Recognition experiments are performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
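    A hedged OpenCV sketch of the basic stereo step described above: computing a disparity map from a rectified left/right pair and converting it to depth. The SGBM parameters, focal length and baseline are placeholder assumptions, not values from the thesis:

```python
import cv2
import numpy as np

def depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.12):
    """Dense disparity via semi-global block matching, then depth = f * B / d."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# left and right would be rectified grayscale images from the calibrated camera pair,
# e.g. cv2.imread("left.png", cv2.IMREAD_GRAYSCALE).
```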

    Restoration and Domain Adaptation for Unconstrained Face Recognition

    Face recognition (FR) has received great attention and tremendous progress has been made during the past two decades. While FR at close range under controlled acquisition conditions has achieved a high level of performance, FR at a distance in an unconstrained environment remains a largely unsolved problem. This is because images collected from a distance usually suffer from blur, poor illumination, pose variation, etc. In this dissertation, we present models and algorithms to compensate for these variations and improve the performance of FR at a distance. Blur is a common factor contributing to the degradation of images collected from a distance, e.g., defocus blur due to long-range acquisition and motion blur due to movement of subjects. For this purpose, we study the image deconvolution problem. This is an ill-posed problem, and solutions are usually obtained by exploiting prior information about the desired output image to reduce ambiguity, typically through the Bayesian framework. In this dissertation, we consider the role of an example-driven manifold prior in addressing the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class in the form of a patch manifold to effectively regularize the inverse problem. We propose both parametric and non-parametric approaches to implicitly estimate the manifold prior from the given unlabeled data. Extensive experiments show that our method performs better than many competitive image deconvolution methods. More often, variations in images collected at a distance are difficult to address through physical models of individual degradations. For this problem, we utilize domain adaptation methods to adapt recognition systems to the test data. Domain adaptation addresses the problem where data instances of a source domain have different distributions from those of a target domain. We focus on the unsupervised domain adaptation problem, where labeled data are not available in the target domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross-domain recognition. Experimental results on publicly available datasets demonstrate the effectiveness of our approach for face recognition across pose, blur and illumination variations, and for cross-dataset object classification. Most existing domain adaptation methods assume a homogeneous source domain, which is usually modeled by a single subspace. Yet in practice, we are often given mixed source data with different inner characteristics. Modeling these source data as a single domain would potentially deteriorate the adaptation performance, as the adaptation procedure needs to account for the large within-class variations in the source domain. For this problem, we propose two approaches to mitigate the heterogeneity in source data. We first present an approach for selecting a subset of source samples which is more similar to the target domain, to avoid negative knowledge transfer. We then consider the scenario in which the heterogeneous source data are due to multiple latent domains. For this purpose, we derive a domain clustering framework to recover the latent domains for improved adaptation. Moreover, we formulate submodular objective functions which can be solved by an efficient greedy method. Experimental results show that our approaches compare favorably with the state of the art.
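    As an illustration of unsupervised domain adaptation, a minimal subspace-alignment sketch in scikit-learn style is given below; it is not the dictionary-learning subspace interpolation proposed in the dissertation, and the number of components is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_align_predict(Xs, ys, Xt, n_components=20):
    """Train on labeled source data (Xs, ys) and predict labels for unlabeled
    target data Xt after aligning the source PCA subspace to the target one."""
    Ps = PCA(n_components).fit(Xs).components_.T   # (d, k) source basis
    Pt = PCA(n_components).fit(Xt).components_.T   # (d, k) target basis
    M = Ps.T @ Pt                                  # alignment matrix between subspaces
    Zs = Xs @ Ps @ M                               # source features in the aligned space
    Zt = Xt @ Pt                                   # target features in its own subspace
    return KNeighborsClassifier(n_neighbors=1).fit(Zs, ys).predict(Zt)
```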

    Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association


    Biometric face recognition using multilinear projection and artificial intelligence

    PhD Thesis. Numerous problems of automatic facial recognition in linear and multilinear subspace learning have been addressed; nevertheless, many difficulties remain. This work focuses on two key problems for automatic facial recognition and feature extraction: object representation and high dimensionality. To address these problems, a bidirectional two-dimensional neighborhood preserving projection (B2DNPP) approach for human facial recognition has been developed. Compared with 2DNPP, the proposed method operates on 2-D facial images and performs reductions along both the row and column directions of the images. Furthermore, it has the ability to reveal variations between these directions. To further improve the performance of the B2DNPP method, a new B2DNPP based on the curvelet decomposition of human facial images is introduced. The curvelet multi-resolution tool enhances the representation of edges and other singularities along curves, and thus improves directional features. In this method, an extreme learning machine (ELM) classifier is used, which significantly improves the classification rate. The proposed C-B2DNPP method decreases the error rate from 5.9% to 3.5%, from 3.7% to 2.0%, and from 19.7% to 14.2% on the ORL, AR, and FERET databases respectively, compared with 2DNPP. It therefore achieves decreases in error rate of more than 40%, 45%, and 27% respectively on the ORL, AR, and FERET databases. Facial images have particular natural structures in the form of two-, three-, or even higher-order tensors. Therefore, a novel method of supervised and unsupervised multilinear neighborhood preserving projection (MNPP) is proposed for face recognition. This allows the natural representation of multidimensional images as 2-D, 3-D, or higher-order tensors, and extracts useful information directly from tensorial data rather than from matrices or vectors. As opposed to B2DNPP, which derives only two subspaces, the MNPP method obtains multiple interrelated subspaces over different tensor directions, so that the subspaces are learned iteratively by unfolding the tensor along the different directions. The performance of MNPP was evaluated in terms of the two modes of facial recognition biometric systems: identification and verification. The proposed supervised MNPP method achieved decreases of over 50.8%, 75.6%, and 44.6% in error rate on the ORL, AR, and FERET databases respectively, compared with 2DNPP. The results therefore demonstrate that the MNPP approach obtains the best overall performance in various learning scenarios.
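    A minimal NumPy sketch of an extreme learning machine classifier, the final classification stage mentioned above; the hidden-layer size, activation, and pseudo-inverse solution are standard ELM choices rather than details taken from the thesis:

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: random hidden weights,
    output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.classes_, idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[idx]          # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T            # closed-form output weights
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]

# Usage: ELM(n_hidden=300).fit(train_features, train_labels).predict(test_features)
```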