13 research outputs found

    Multi-biometric templates using fingerprint and voice

    As biometrics gains popularity, there is increasing concern about privacy and the misuse of biometric data held in central repositories. Furthermore, biometric verification systems face challenges arising from noise and intra-class variations. To tackle both problems, a multimodal biometric verification system combining fingerprint and voice modalities is proposed. The system combines the two modalities at the template level, using multibiometric templates. The fusion of fingerprint and voice data mitigates privacy concerns by hiding the fingerprint's minutiae points among artificial points generated from features of the speaker's spoken utterance. Equal error rates are observed to be under 2% for a system in which 600 utterances from 30 people were processed and fused with a database of 400 fingerprints from 200 individuals. Accuracy is increased compared to previous results for voice verification over the same speaker database.
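
    A minimal sketch of the template-level idea described above, assuming the genuine minutiae are simply hidden among voice-seeded chaff points; the paper's actual fusion scheme is not reproduced here, and all names and parameters are illustrative.

```python
import hashlib
import numpy as np

def build_mixed_template(minutiae, voice_features, grid=256):
    """Hide real fingerprint minutiae among artificial (chaff) points.

    minutiae       : (N, 3) array of (x, y, angle) from the fingerprint
    voice_features : 1-D float array of acoustic features; used here only
                     to seed chaff generation (illustrative assumption)
    Returns a shuffled (N + M, 3) template in which genuine points are
    not distinguishable from chaff without the voice-derived seed.
    """
    seed = int.from_bytes(hashlib.sha256(voice_features.tobytes()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    n_chaff = 4 * len(minutiae)                       # illustrative ratio
    chaff = np.column_stack([rng.uniform(0, grid, n_chaff),
                             rng.uniform(0, grid, n_chaff),
                             rng.uniform(0, 2 * np.pi, n_chaff)])
    template = np.vstack([minutiae, chaff])
    rng.shuffle(template)                             # hide the ordering
    return template
```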

    Studi Akurasi Karakteristik Retina sebagai Future Identification dengan Euclidean Distance Metrics

    This research produces a biometric security system that uses the retina as an accurate recognition identity, and is effective for improving the retinal identification process in the future (future identification). Determining which biometric trait is most accurate for future identification is important, as is building an application or tool that can be used to study the characteristics of distance metrics for measuring the accuracy of the retina as a future identity. Retinal identification can serve as an alternative means of human identification, for example replacing bank ATM PINs and passports, and in other domains that require a high level of security or must be impossible to forge. The result of this research is a set of experiments that test the accuracy of CBIR using query images against a database of 5,000 retina images. The method used to determine similarity and identification relies on color features. The color histogram for image retrieval is computed by counting the DCT coefficients of each color. The results show that the algorithm's accuracy approaches 90%, which is quite good for image retrieval. Retrieval speed is also reasonably fast: the average processing time over 2,000 digital images is under 10 seconds.
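
    A minimal sketch of the retrieval step described above, assuming plain per-channel color histograms compared with Euclidean distance; the DCT-coefficient counting used in the study is simplified away, and the bin count is illustrative.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Concatenated, normalised per-channel histograms of an RGB image (H, W, 3)."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def retrieve(query, database, k=5):
    """Indices of the k database images closest to the query under the
    Euclidean distance between their color histograms."""
    q = color_histogram(query)
    dists = [np.linalg.norm(q - color_histogram(img)) for img in database]
    return np.argsort(dists)[:k]
```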

    Fingerprint Orientation Refinement Through Iterative Smoothing

    We propose a new gradient-based method for extracting the orientation field associated with a fingerprint, and a regularisation procedure to improve the orientation field computed from noisy fingerprint images. The regularisation algorithm is based on three new integral operators, introduced and discussed in this paper. A pre-processing technique is also proposed to improve the performance of the algorithm. The results of a numerical experiment are reported to demonstrate the efficiency of the proposed algorithm.
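
    For context, a minimal sketch of the standard least-squares gradient estimate of a fingerprint orientation field with iterative doubled-angle smoothing; the paper's integral operators are not reproduced, and the block size and iteration count are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def orientation_field(img, block=16, iters=3):
    """Blockwise ridge orientation from gradients, then iterative smoothing."""
    img = img.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    # Local sums of gradient products over block-sized neighbourhoods.
    gxx = uniform_filter(gx * gx, block)
    gyy = uniform_filter(gy * gy, block)
    gxy = uniform_filter(gx * gy, block)
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
    # Regularise in the doubled-angle domain so that orientations near
    # 0 and pi average correctly instead of cancelling out.
    for _ in range(iters):
        cos2 = uniform_filter(np.cos(2 * theta), block)
        sin2 = uniform_filter(np.sin(2 * theta), block)
        theta = 0.5 * np.arctan2(sin2, cos2)
    return theta
```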

    Implementation of Minutiae Based Fingerprint Identification System using Crossing Number Concept

    A biometric system is essentially a pattern recognition system which recognizes a person by determining the authenticity of a specific physiological (e.g., fingerprints, face, retina, iris) or behavioral (e.g., gait, signature) characteristic.
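
    A minimal sketch of the crossing number rule named in the title: on a thinned 0/1 ridge image, a ridge pixel whose crossing number is 1 is a ridge ending and one whose crossing number is 3 is a bifurcation.

```python
import numpy as np

# 8-neighbourhood visited in circular order around the centre pixel.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(skel, r, c):
    """CN = 1/2 * sum |P_i - P_(i+1)| over the 8 neighbours (cyclic)."""
    p = [int(skel[r + dr, c + dc]) for dr, dc in NEIGHBOURS]
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2

def extract_minutiae(skel):
    """Scan a thinned fingerprint image and collect endings/bifurcations."""
    minutiae = []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] == 1:
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    minutiae.append((r, c, "ending"))
                elif cn == 3:
                    minutiae.append((r, c, "bifurcation"))
    return minutiae
```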

    A Novel Algorithm for Minutiae Matching


    Fingerprint Recognition


    Vision Based Extraction of Nutrition Information from Skewed Nutrition Labels

    An important component of a healthy diet is the comprehension and retention of nutritional information and an understanding of how different food items and nutritional constituents affect our bodies. In the U.S. and many other countries, nutritional information is primarily conveyed to consumers through nutrition labels (NLs), which can be found on all packaged food products. However, it can be challenging to make use of all the information in these NLs, even for health-conscious consumers, because they may be unfamiliar with nutritional terms or may find it difficult to integrate nutritional data collection into their daily activities for lack of time, motivation, or training. Automating this data collection and interpretation process with computer-vision algorithms that extract nutritional information from NLs therefore improves the user's ability to engage in continuous nutritional data collection and analysis. To make nutritional data collection more manageable and enjoyable for users, we present a Proactive NUTrition Management System (PNUTS). PNUTS seeks to shift current research and clinical practices in nutrition management toward persuasion, automated nutritional information processing, and context-sensitive nutrition decision support.
    PNUTS consists of two modules. The first is a barcode scanning module which runs on smartphones and is capable of vision-based localization of one-dimensional (1D) Universal Product Code (UPC) and International Article Number (EAN) barcodes with relaxed pitch, roll, and yaw camera alignment constraints. The algorithm localizes barcodes in images by computing Dominant Orientations of Gradients (DOGs) of image segments and grouping smaller segments with similar DOGs into larger connected components. Connected components that pass given morphological criteria are marked as potential barcodes. The algorithm is implemented in a distributed, cloud-based system: the front end is a smartphone application that runs on Android smartphones with Android 4.2 or higher, and the back end is deployed on a five-node Linux cluster where images are processed. The algorithm was evaluated on a corpus of 7,545 images extracted from 506 videos of bags, bottles, boxes, and cans in a supermarket. The DOG algorithm was coupled to our in-place scanner for 1D UPC and EAN barcodes. The scanner receives from the DOG algorithm the rectangular planar dimensions of a connected component and the component's dominant gradient orientation angle, referred to as the skew angle, and draws several scan lines at that skew angle within the component to recognize the barcode in place without any rotations. The scanner coupled to the localizer was tested on the same corpus of 7,545 images. Laboratory experiments indicate that the system can localize and scan barcodes of any orientation in the yaw plane, of up to 73.28 degrees in the pitch plane, and of up to 55.5 degrees in the roll plane. The videos have been made public for all interested research communities to replicate our findings or to use in their own research. The front-end Android application is available for free download at Google Play under the title NutriGlass. This module is also coupled to a comprehensive NL database from which nutritional information can be retrieved on demand; the database currently contains more than 230,000 products.
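
    A minimal sketch of the tile-wise dominant-gradient-orientation idea described above, assuming simple square tiles and a magnitude-weighted orientation histogram; tile size and thresholds are illustrative, and the grouping of tiles into connected components is only indicated, so this is not the published DOG algorithm.

```python
import numpy as np
from scipy.ndimage import sobel

def dominant_orientation(tile, n_bins=18, min_strength=0.4):
    """Dominant gradient orientation (radians) of a tile, or None if no
    single bin dominates; barcode tiles have one strongly dominant angle."""
    gx, gy = sobel(tile, axis=1), sobel(tile, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi               # orientation, not direction
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    if hist.sum() == 0 or hist.max() / hist.sum() < min_strength:
        return None
    return edges[hist.argmax()] + (np.pi / n_bins) / 2   # bin centre

def candidate_tiles(gray, tile=32):
    """Mark tiles with a well-defined dominant orientation; adjacent marked
    tiles with similar angles would then be merged into connected components
    and tested against morphological criteria."""
    h, w = gray.shape
    marks = {}
    for r in range(0, h - tile, tile):
        for c in range(0, w - tile, tile):
            theta = dominant_orientation(gray[r:r + tile, c:c + tile].astype(float))
            if theta is not None:
                marks[(r, c)] = theta
    return marks
```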
The second module of PNUTS is an algorithm whose objective is to determine the text skew angle of an NL image without constraining the angle's magnitude. The horizontal, vertical, and diagonal matrices of the two-dimensional (2D) Haar Wavelet Transform are used to identify 2D points with significant intensity changes. The set of points is bounded with a minimum-area rectangle whose rotation angle is the text's skew. The algorithm's performance is compared with that of five text skew detection algorithms on 1001 U.S. nutrition label images and 2200 single- and multi-column document images in multiple languages. To ensure the reproducibility of the reported results, the source code of the algorithm and the image data have been made publicly available. Once the skew angle is estimated correctly, optical character recognition (OCR) techniques can be used to extract the nutrition information.
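
    A minimal sketch of the skew estimation step described above, assuming pywt for the one-level 2D Haar transform and OpenCV's minAreaRect for the bounding rectangle; the keep-fraction is illustrative and the angle normalisation depends on the OpenCV version.

```python
import numpy as np
import pywt
import cv2

def text_skew_angle(gray, keep=0.02):
    """Estimate the text skew of a label image.

    The horizontal, vertical and diagonal detail matrices of a one-level
    2D Haar transform highlight strong intensity changes (text edges);
    the strongest responses are bounded with a minimum-area rectangle
    whose rotation angle is taken as the skew."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
    response = np.abs(cH) + np.abs(cV) + np.abs(cD)
    thresh = np.quantile(response, 1.0 - keep)        # keep the top few percent
    ys, xs = np.nonzero(response >= thresh)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    _center, (w, h), angle = cv2.minAreaRect(pts)
    # OpenCV reports the angle of one rectangle edge; pick the long edge so
    # near-horizontal text gives an angle close to 0 degrees.
    return angle if w >= h else angle - 90.0
```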

    Quantifying the Limits of Fingerprint Variability

    Fingerprints are one of the most widely used identification features in both the biometric and forensic fields. However, the comparison and identification of fingerprints is made difficult by fingerprint variability arising from distortion. This study quantifies the limits of fingerprint variability under heavy distortion, as well as the variability observed in repeated inked planar impressions. Fingers were video recorded while performing several distortion conditions under heavy deposition pressure: left, right, up, and down translation of the finger, clockwise and counter-clockwise torque of the finger, and planar impressions. Fingerprint templates containing 'true' minutiae locations were then created from 10 inked planar impressions for each of 30 separate fingers. The 30 fingers studied consisted of 10 right slant loops, 10 plain arches, and 10 plain whorls. A minimal amount of variability, 0.18 mm globally, was observed for minutiae in inked planar impressions. When subject to heavy distortion, minutiae can be displaced by upwards of 3 mm and their orientation altered by as much as 30 degrees; minutiae displacements of 1 mm and 10-degree changes in orientation are readily observed. The results of this study will allow fingerprint examiners to identify and understand the degree of variability that can reasonably be expected throughout the various regions of fingerprints.
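
    A minimal sketch of how the reported displacement and orientation-change measurements might be computed, assuming two templates that already contain corresponding minutiae row by row in millimetre/degree units; this illustrates the quantities reported above, not the study's actual protocol.

```python
import numpy as np

def minutiae_variability(template_a, template_b):
    """Displacement (mm) and orientation change (degrees) between
    corresponding minutiae of two impressions of the same finger.

    Each template is an (N, 3) array of (x_mm, y_mm, angle_deg), with
    row i of both arrays assumed to describe the same minutia."""
    disp = np.linalg.norm(template_a[:, :2] - template_b[:, :2], axis=1)
    dtheta = template_a[:, 2] - template_b[:, 2]
    dtheta = (dtheta + 180.0) % 360.0 - 180.0        # wrap to [-180, 180)
    return disp, np.abs(dtheta)
```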

    Identity verification using voice and its use in a privacy preserving system

    Since security has been a growing concern in recent years, the field of biometrics has gained popularity and become an active research area. Besides new identity authentication and recognition methods, protection against theft of biometric data and potential privacy loss are current directions in biometric systems research. Biometric traits used for verification can be grouped into two categories: physical and behavioral. Physical traits such as fingerprints and iris patterns are characteristics that do not undergo major changes over time. On the other hand, behavioral traits such as voice, signature, and gait are more variable; they are therefore more suitable for lower-security applications. Behavioral traits such as voice and signature also have the advantage of being able to generate numerous different biometric templates of the same modality (e.g. different pass-phrases or signatures), in order to provide cancelability of the biometric template and to prevent cross-matching of different databases. In this thesis, we present three new biometric verification systems based mainly on the voice modality. First, we propose a text-dependent (TD) system where acoustic features are extracted from individual frames of the utterances after they are aligned via phonetic HMMs. Data from 163 speakers from the TIDIGITS database are employed for this work, and the best equal error rate (EER) is reported as 0.49% for 6-digit user passwords. Second, a text-independent (TI) speaker verification method is implemented, inspired by the feature extraction method utilized for our text-dependent system. Our proposed TI system depends on creating speaker-specific phoneme codebooks. Once phoneme codebooks are created at the enrollment stage, using HMM alignment and segmentation to extract discriminative user information, test utterances are verified by calculating the total dissimilarity/distance to the claimed codebook. For benchmarking, a GMM-based TI system is implemented as a baseline. The results of the proposed TD system (0.22% EER for 7-digit passwords) are superior to those of the GMM-based system (0.31% EER for 7-digit sequences), whereas the proposed TI system yields worse results (5.79% EER for 7-digit sequences) using the data of 163 people from the TIDIGITS database. Finally, we introduce a new implementation of the multi-biometric template framework of Yanikoglu and Kholmatov [12], using fingerprint and voice modalities. In this framework, two biometric data are fused at the template level to create a multi-biometric template, in order to increase template security and privacy. The current work also aims to provide cancelability by exploiting the behavioral aspect of the voice modality.
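
    For context on the EER figures quoted above, a minimal sketch of how an equal error rate can be computed from genuine and impostor verification scores; the threshold sweep is illustrative and is not the evaluation code used in the thesis.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER from verification scores (higher score = better match).

    Sweeps the decision threshold over all observed scores and returns
    the error rate where the false accept and false reject rates cross."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        frr = np.mean(genuine_scores < t)        # genuine attempts rejected
        far = np.mean(impostor_scores >= t)      # impostor attempts accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```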

    Novel active sweat pores based liveness detection techniques for fingerprint biometrics

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Liveness detection in automatic fingerprint identification systems (AFIS) is an issue which still prevents their use in many unsupervised security applications. In the last decade, various hardware and software solutions for the detection of liveness from fingerprints have been proposed by academic research groups. However, the proposed methods have not yet been practically implemented with existing AFIS, and a large amount of research is needed before commercial AFIS can be implemented. In this research, novel active-pore-based liveness detection methods are proposed for AFIS. These methods are based on the detection of active pores on fingertip ridges and the measurement of ionic activity in the sweat fluid that appears at the openings of active pores. The literature is critically reviewed in terms of liveness detection issues, and existing fingerprint technology and the hardware and software solutions proposed for liveness detection are also examined. A comparative study was completed on commercially and specifically collected fingerprint databases, and it was concluded that images in these datasets do not contain any visible evidence of liveness. They were used to test various algorithms developed for liveness detection; however, to implement proper liveness detection in fingerprint systems, a new database with fine details of fingertips is needed. Therefore a new high-resolution Brunel Fingerprint Biometric Database (B-FBDB) was captured and collected for this novel liveness detection research. The first proposed liveness detection method is a High Pass Correlation Filtering Algorithm (HCFA). This image processing algorithm was developed in Matlab and tested on B-FBDB dataset images. The results of the HCFA algorithm prove the idea behind the research, as they successfully demonstrate the clear possibility of liveness detection through active pore detection from high-resolution images. The second liveness detection method is based on experimental evidence: it detects liveness by measuring ionic activity above a sample of ionic sweat fluid. A Micro Needle Electrode (MNE) based setup was used in this experiment to measure the ionic activity. Charges of 5.9 pC to 6.5 pC were detected at ten MNE positions (50 µm to 360 µm) above the surface of the ionic sweat fluid. These measurements are also a proof of liveness from active fingertip pores, and this technique can be used in the future to implement liveness detection solutions. The interaction of the MNE and the ionic fluid was modelled in COMSOL Multiphysics, and the effect of electric field variations on the MNE was recorded at positions 5 µm to 360 µm above the ionic fluid. This study was funded by the University of Sindh, Jamshoro, Pakistan and the Higher Education Commission of Pakistan.
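
    A rough, hypothetical stand-in for high-pass-based pore detection (not the thesis's HCFA): subtracting a blurred copy of the image keeps only fine detail, and strong local maxima of that residual are taken as candidate pore locations. All parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def pore_candidates(gray, sigma=3.0, rel_thresh=0.5, window=9):
    """Candidate pore locations (row, col) in a high-resolution fingertip image."""
    img = gray.astype(float)
    highpass = img - gaussian_filter(img, sigma)        # remove low frequencies
    highpass = np.clip(highpass, 0, None)               # keep bright fine detail
    peaks = (highpass == maximum_filter(highpass, size=window))
    strong = highpass > rel_thresh * highpass.max()
    rows, cols = np.nonzero(peaks & strong)
    return list(zip(rows.tolist(), cols.tolist()))
```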