
    The effects of scarring on face recognition

    The focus of this research is the effect of scarring on face recognition. Face recognition is a common biometric modality used in access control operations such as customs and border crossings. A recent report from the Special Group on Issues Affecting Facial Recognition and Best Practices for their Mitigation highlighted scarring as one of the emerging challenges. The significance of this problem extends to ISO/IEC standardisation and to national agencies researching ways to enhance their intelligence capabilities. Data were collected on face images with and without scars, using theatrical special effects to simulate scarring on the face and also from subjects who have developed scarring within their lifetime. A total of 60 subjects participated in this data collection: 30 without scarring of any kind and 30 with preexisting scars. Controlled data on scarring are problematic for face recognition research because scarring manifests differently across individuals, yet it is universal in that all individuals exhibit some degree of scarring. Effect analysis was performed with controlled scarring, to isolate the factor, and with wild scarring of the kind encountered in operations, for realistic contextualisation. Two environments were included in this study: a controlled studio representing an ideal face capture setting and a mock border control booth simulating an operational use case.
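
    The abstract does not describe the matching pipeline itself. As a rough, hypothetical sketch of the kind of effect analysis described, the following Python snippet compares genuine match scores for scarred and unscarred probe captures against a common enrolment gallery; the embedding vectors are assumed to come from any off-the-shelf face recognition model and are not part of the study.

        # Hypothetical sketch: quantify the effect of scarring as the drop in
        # mean genuine match score relative to an unscarred baseline.
        import numpy as np

        def cosine_similarity(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def genuine_scores(gallery, probes):
            """Match each probe embedding against the enrolment embedding of the
            same subject; both arguments map subject_id -> embedding vector."""
            return np.array([cosine_similarity(gallery[sid], emb)
                             for sid, emb in probes.items()])

        def scarring_effect(gallery, probes_no_scar, probes_scarred):
            baseline = genuine_scores(gallery, probes_no_scar)
            scarred = genuine_scores(gallery, probes_scarred)
            # A positive value indicates scarring lowers genuine match scores.
            return baseline.mean() - scarred.mean()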

    Face Recognition: Study and Comparison of PCA and EBGM Algorithms

    Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, face-specific characteristics like hair, glasses and beards, and other problems common to computer vision. With a system that offers robust and consistent face recognition results, applications such as identification for law enforcement, secure system access and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis, independent component analysis and linear discriminant analysis are a few of the statistical techniques commonly used; genetic algorithms, elastic bunch graph matching and artificial neural networks are among the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore methods that provide an efficient and feasible solution. Factors affecting face recognition results and the preprocessing steps that eliminate such abnormalities are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years. Elastic bunch graph matching (EBGM) is one of the promising techniques studied in this thesis, and we found better results with the EBGM method than with PCA. We recommend the use of a hybrid technique involving the EBGM algorithm to obtain better results. The EBGM method took longer than PCA to train and to generate distance measures for the given gallery images, but it produced better cumulative match score (CMS) results. Other promising techniques that could be explored in separate work include genetic-algorithm-based methods, mixtures of principal components and Gabor wavelet techniques.
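
    As an illustration of the PCA (eigenfaces) baseline and the cumulative match score evaluation mentioned above, the sketch below projects gallery and probe images into a PCA subspace, ranks gallery identities by Euclidean distance and builds a CMS curve. It uses scikit-learn and is an assumed, generic implementation, not the code used in the thesis.

        # Eigenfaces-style PCA matcher with a cumulative match score (CMS) curve.
        import numpy as np
        from sklearn.decomposition import PCA

        def cms_curve(gallery, gallery_ids, probes, probe_ids, n_components=50):
            """gallery, probes: (n_images, n_pixels) arrays of flattened faces."""
            pca = PCA(n_components=n_components).fit(gallery)
            g = pca.transform(gallery)            # gallery in the PCA subspace
            p = pca.transform(probes)             # probes in the same subspace
            gallery_ids = np.asarray(gallery_ids)
            ranks = []
            for feat, true_id in zip(p, probe_ids):
                dists = np.linalg.norm(g - feat, axis=1)   # distance to each gallery image
                order = gallery_ids[np.argsort(dists)]     # identities, best match first
                ranks.append(int(np.where(order == true_id)[0][0]) + 1)
            ranks = np.array(ranks)
            # CMS(k): fraction of probes whose true identity appears within rank k.
            return [(ranks <= k).mean() for k in range(1, len(gallery_ids) + 1)]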

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, together with chapters covering biometric management policies, reliability measures, pressure-based typing and signature verification, bio-chemical systems and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Nearest Neighbor Discriminant Analysis Based Face Recognition Using Ensembled Gabor Features

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Informatics, 2009. In recent decades, Gabor-feature-based face representation has produced very promising results in face recognition, as it is robust to variations caused by illumination and facial expression changes. The properties that make Gabor features effective are that they capture the local structure corresponding to a selected spatial frequency (scale), spatial localization and orientation, and that they require no manual annotation. The contribution of this thesis is the Ensemble-based Gabor Nearest Neighbor Classifier (EGNNC), proposed as an extension of the Gabor Nearest Neighbor Classifier (GNNC), which extracts important discriminant features by combining the power of Gabor filters with Nearest Neighbor Discriminant Analysis (NNDA). EGNNC is an ensemble classifier that combines multiple NNDA-based component classifiers, each designed on a different segment of the reduced Gabor feature vector. Whereas a single component NNDA classifier extracts a reduced dimension of the entire Gabor feature, EGNNC makes better use of the discriminability in the reduced Gabor features by avoiding the small sample size (3S) problem with minimal loss of discriminative information. The accuracy of EGNNC is demonstrated in a comparative performance study. On a 200-class subset of the FERET database covering illumination and expression variations, EGNNC achieved a 100% recognition rate with 65 features, outperforming its predecessor GNNC (98%) as well as standard methods such as the Gabor Fisher Classifier (GFC) and GPC. On the YALE database, EGNNC outperformed GNNC for all (k, alpha) pairs and reached 96% accuracy at a feature dimension of 14, with step size = 5, k = 5 and alpha = 3.
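
    Since NNDA itself is not available in common libraries, the following sketch only illustrates the segment-wise ensemble idea described above: the reduced Gabor feature vector is split into segments, one component classifier is trained per segment, and predictions are combined by majority vote. LinearDiscriminantAnalysis is used here purely as a stand-in for the NNDA-based components.

        # Segment-wise ensemble over Gabor feature vectors, with LDA standing in
        # for the NNDA-based component classifiers of EGNNC.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def train_ensemble(X, y, n_segments=4):
            """X: (n_samples, n_features) Gabor feature vectors; y: class labels."""
            segments = np.array_split(np.arange(X.shape[1]), n_segments)
            models = []
            for cols in segments:
                clf = LinearDiscriminantAnalysis().fit(X[:, cols], y)
                models.append((cols, clf))        # one component per feature segment
            return models

        def predict_ensemble(models, X):
            votes = np.stack([clf.predict(X[:, cols]) for cols, clf in models])
            preds = []
            for column in votes.T:                # combine component decisions
                values, counts = np.unique(column, return_counts=True)
                preds.append(values[np.argmax(counts)])   # majority vote
            return np.array(preds)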

    Non-Invasive Image Enhancement of Colour Retinal Fundus Images for a Computerised Diabetic Retinopathy Monitoring and Grading System

    Diabetic Retinopathy (DR) is a sight-threatening complication of diabetes mellitus affecting the retina. The pathologies of DR can be monitored by analysing colour fundus images. However, the low and varied contrast between retinal vessels and the background in colour fundus images remains an impediment to visual analysis, particularly of tiny retinal vessels and capillary networks. To circumvent this problem, fundus fluorescein angiography (FFA), which improves image contrast, is used. Unfortunately, it is an invasive procedure (an injection of contrast dyes) that leads to other physiological problems and in the worst case may cause death. The objective of this research is to develop a non-invasive digital image enhancement scheme that overcomes the problem of varied and low-contrast colour fundus images, so that the contrast produced is comparable to the invasive fluorescein method, without introducing noise or artefacts. The developed image enhancement algorithm (called RETICA) is incorporated into a newly developed computerised DR system (called RETINO) that is capable of monitoring and grading DR severity using colour fundus images. RETINO grades DR severity into five stages, namely No DR, Mild Non-Proliferative DR (NPDR), Moderate NPDR, Severe NPDR and Proliferative DR (PDR), by enhancing the quality of the digital colour fundus image in the macular region using RETICA and analysing the enlargement of the foveal avascular zone (FAZ), a region devoid of retinal vessels in the macular region. The importance of this research lies in improving image quality in order to increase the accuracy, sensitivity and specificity of DR diagnosis, and to enable DR grading through either direct observation or a computer-assisted diagnosis system.
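
    The abstract does not give the internals of RETICA. As a generic illustration of non-invasive contrast enhancement on colour fundus images, the sketch below applies CLAHE to the green channel (where vessel contrast is usually highest) using OpenCV; it is an assumed stand-in, not the RETICA algorithm.

        # Generic non-invasive enhancement: CLAHE on the green channel of a
        # colour fundus image. Illustrative only; not the RETICA algorithm.
        import cv2

        def enhance_fundus(path, clip_limit=2.0, tile_grid=(8, 8)):
            image = cv2.imread(path)              # BGR colour fundus image
            green = image[:, :, 1]                # green channel shows vessels best
            clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
            return clahe.apply(green)             # contrast-enhanced grayscale image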

    A survey of the application of soft computing to investment and financial trading


    Hyperspectral image analysis for questioned historical documents

    This thesis describes the application of spectroscopy and hyperspectral image processing to the examination of historical manuscripts and text. Major activities in palaeographic and manuscript studies include the recovery of illegible or deleted text, the minute analysis of scribal hands, the identification of inks and the segmentation and dating of text. This thesis describes how Hyperspectral Imaging (HSI), applied in a novel manner, can be used to perform high-quality text recovery, segmentation and dating of historical documents. The non-destructive optical imaging process of spectroscopy is described in detail, together with how it can be used to assist historians and document experts in the examination of aged manuscripts. This non-destructive optical method of analysis can distinguish subtle differences in the reflectance properties of the materials under study. Many historically significant documents from libraries such as the Royal Irish Academy and the Russell Library at the National University of Ireland, Maynooth, have been selected for study using the hyperspectral imaging technique. Processing techniques are described for application to the study of manuscripts in a poor state of conservation. The research provides a comprehensive overview of HSI and its associated statistical and analytical methods, as well as an in-depth investigation of the practical implementation of such methods to aid document analysts. Specifically, we present results from applying statistical analytical methods, including principal component analysis (PCA), independent component analysis (ICA) and both supervised and automatic clustering methods, to historically significant manuscripts and texts such as Leabhar na hUidhre, a 12th-century Irish text that was subject to partial erasure and rewriting, a 16th-century pastedown cover, and a multi-ink example typical of late medieval administrative texts such as Göttingen's kundige bok. The purpose is to achieve greater insight into the historical context of the document, including the recovery or enhancement of text that is faded, illegible or lost through staining, overwriting or other forms of erasure. In addition, we demonstrate the prospect of distinguishing different ink types and of furnishing details of the manuscript's composition, refinements that can be used to answer questions about date and provenance. This process marks a new departure for the study of manuscripts and may provide answers to many long-standing questions posed by palaeographers and by scholars in a variety of disciplines. Furthermore, through text retrieval, it holds out the prospect of adding considerably to the existing corpus of texts and of providing many new research opportunities for coming generations of scholars.
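
    As a small illustration of the statistical methods named above, the sketch below applies PCA to a hyperspectral cube by treating each pixel's spectrum as an observation and reshaping the leading components back into images, in which faded or overwritten text often becomes easier to distinguish. It is a generic example using scikit-learn, not the processing pipeline developed in the thesis.

        # PCA on a hyperspectral cube (rows x cols x bands): each pixel spectrum
        # is one observation; the leading components are returned as images.
        import numpy as np
        from sklearn.decomposition import PCA

        def pca_component_images(cube, n_components=3):
            rows, cols, bands = cube.shape
            spectra = cube.reshape(-1, bands).astype(float)   # pixels x bands
            spectra -= spectra.mean(axis=0)                   # centre each band
            scores = PCA(n_components=n_components).fit_transform(spectra)
            # One image per principal component, in decreasing order of variance.
            return [scores[:, i].reshape(rows, cols) for i in range(n_components)]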