
    Ear Identification by Fusion of Segmented Slice Regions using Invariant Features: An Experimental Manifold with Dual Fusion Approach

    This paper proposes a robust ear identification system developed by fusing SIFT features extracted from color-segmented slice regions of the ear. The method uses a Gaussian mixture model (GMM), built with a vector quantization algorithm, to model the ear's color distribution, and applies K-L divergence within the GMM framework to record color similarity over the specified ranges when comparing a reference ear with a probe ear. SIFT features are then detected and extracted from each color slice region as part of invariant feature extraction. The extracted keypoints are fused by two approaches, namely concatenation and Dempster-Shafer theory. Each fusion approach generates an independent augmented feature vector, which is used on its own for identification of individuals. The proposed identification technique is tested on the IIT Kanpur ear database of 400 individuals and achieves 98.25% identification accuracy when a top-5 match criterion is set for each subject. Comment: 12 pages, 3 figures.
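The K-L divergence used above to score color similarity between a reference and a probe ear has a closed form for Gaussian components. The sketch below is an illustrative reading in numpy, restricted to univariate components; it is not the paper's implementation, and the per-mixture weighting of the full GMM comparison is omitted.

```python
# Illustrative sketch: closed-form KL(p || q) between two 1-D Gaussian
# color components, as might be used to compare a reference and probe
# ear slice. Assumption, not code from the paper.
import numpy as np

def kl_gauss(mu_p, sigma_p, mu_q, sigma_q):
    """KL(p || q) for 1-D Gaussians p = N(mu_p, sigma_p^2), q = N(mu_q, sigma_q^2)."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
            - 0.5)
```

Identical components give zero divergence, and the score grows as the color means drift apart, which is what makes it usable as a similarity measure between slice regions.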

    Fast fingerprint verification using sub-regions of fingerprint images.

    Chan Ka Cheong. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 77-85). Abstracts in English and Chinese.
    1. Introduction
       1.1 Introduction to Fingerprint Verification
           1.1.1 Biometrics
           1.1.2 Fingerprint History
           1.1.3 Fingerprint characteristics
           1.1.4 A Generic Fingerprint Matching System Architecture
           1.1.5 Fingerprint Verification and Identification
           1.1.7 Biometric metrics
       1.2 Embedded system
           1.2.1 Introduction to embedded systems
           1.2.2 Embedded systems characteristics
           1.2.3 Performance evaluation of a StrongARM processor
       1.3 Objective: an embedded fingerprint verification system
       1.4 Organization of the Thesis
    2. Literature Reviews
       2.1 Fingerprint matching overviews
           2.1.1 Minutiae-based fingerprint matching
       2.2 Fingerprint image enhancement
       2.3 Orientation field computation
       2.4 Fingerprint Segmentation
       2.5 Singularity Detection
       2.6 Fingerprint Classification
       2.7 Minutia extraction
           2.7.1 Binarization and thinning
           2.7.2 Direct gray scale approach
           2.7.3 Comparison of the minutiae extraction approaches
       2.8 Minutiae matching
           2.8.1 Point matching
           2.8.2 Structural matching technique
       2.9 Summary
    3. Implementation
       3.1 Fast Fingerprint Matching System Overview
           3.1.1 Typical Fingerprint Matching System
           3.1.2 Fast Fingerprint Matching System Overview
       3.2 Orientation computation
           3.2.1 Orientation computation
           3.2.2 Smooth orientation field
       3.3 Fingerprint image segmentation
       3.4 Reference Point Extraction
       3.5 A Classification Scheme
       3.6 Finding a Small Fingerprint Matching Area
       3.7 Fingerprint Matching
       3.8 Minutiae extraction
           3.8.1 Ridge tracing
           3.8.2 Cross sectioning
           3.8.3 Local maximum determination
           3.8.4 Ridge tracing marking
           3.8.5 Ridge tracing stop criteria
       3.9 Optimization technique
       3.10 Summary
    4. Experimental results
       4.1 Experimental setup
       4.2 Fingerprint database
       4.3 Reference point accuracy
       4.4 Variable number of matching minutiae results
       4.5 Contribution of the verification prototype
    5. Conclusion and Future Research
       5.1 Conclusion
       5.2 Future Research
    Bibliography

    Improving acoustic vehicle classification by information fusion

    We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles, and classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features represents internal sound production: a number of harmonic components are extracted to characterize the factors related to the vehicle's resonance. The second set of features is extracted by a computationally effective discriminatory analysis, in which a group of key frequency components is selected by mutual information, accounting for the sound produced by the vehicle's exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach effectively increases classification accuracy compared to using each individual feature set alone, and the Bayesian-based decision-level fusion is found to outperform a feature-level fusion approach.
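The decision-level fusion described above can be sketched under a naive conditional-independence assumption: each classifier emits a posterior over vehicle classes, and the fused posterior is their renormalized product divided by the prior. The paper's modified Bayesian rule additionally weights each classifier by how well it matches its feature set; that weighting is omitted here, so this is a hedged simplification rather than the paper's algorithm.

```python
# Simplified decision-level fusion of two class posteriors:
# p(c | x1, x2) ∝ p(c | x1) p(c | x2) / p(c), assuming the two
# feature sets are conditionally independent given the class.
import numpy as np

def bayes_fuse(post_a, post_b, prior):
    """Fuse two posteriors (arrays over classes) into one."""
    fused = post_a * post_b / prior   # unnormalized fused posterior
    return fused / fused.sum()        # renormalize to a distribution
```

For example, two classifiers that each mildly favor the same vehicle class produce a fused posterior that favors it more strongly, which is the intended effect of combining complementary feature sets.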

    Biometric Liveness Detection for the Fingerprint Recognition Technology

    This work focuses on liveness detection for fingerprint recognition technology. The first part of the thesis describes biometrics, biometric systems, and liveness detection, and proposes a liveness-detection method based on the spectroscopic characteristics of human skin. The second part describes and summarizes the performed experiments. Finally, the results are discussed and possible further work is outlined.

    Soft Biometric Analysis: Multi-Person and Real-Time Pedestrian Attribute Recognition in Crowded Urban Environments

    Traditionally, recognition systems were based only on human hard biometrics. However, ubiquitous CCTV cameras have raised the desire to analyze human biometrics from far distances, without people's participation in the acquisition process. High-resolution face close-ups are rarely available at far distances, so face-based systems cannot provide reliable results in surveillance applications. Human soft biometrics, such as body and clothing attributes, are believed to be more effective for analyzing human data collected by security cameras. This thesis contributes to human soft biometric analysis in uncontrolled environments and focuses mainly on two tasks: Pedestrian Attribute Recognition (PAR) and person re-identification (re-id). We first review the literature of both tasks and highlight the history of advancements, recent developments, and the existing benchmarks. PAR and person re-id difficulties are due to significant distances between intra-class samples, which originate from variations in several factors such as body pose, illumination, background, occlusion, and data resolution. Recent state-of-the-art approaches present end-to-end models that can extract discriminative and comprehensive feature representations from people. The correlation between different regions of the body and dealing with limited learning data are also the objectives of many recent works. Moreover, class imbalance and correlation between human attributes are specific challenges associated with the PAR problem.

    We collect a large surveillance dataset to train a novel gender recognition model suitable for uncontrolled environments. We propose a deep residual network that extracts several pose-wise patches from samples and obtains a comprehensive feature representation. In the next step, we develop a model for recognizing multiple attributes at once. Considering the correlation between human semantic attributes and the class imbalance, we use a multi-task model and a weighted loss function, respectively. We also propose a multiplication layer on top of the backbone feature extraction layers to exclude background features from the final representation of samples and draw the model's attention to the foreground area.

    We address the problem of person re-id by implicitly defining the receptive fields of deep learning classification frameworks. The receptive fields of deep learning models determine the most significant regions of the input data for providing correct decisions. Therefore, we synthesize a set of learning data in which the destructive regions (e.g., background) in each pair of instances are interchanged. A segmentation module determines the destructive and useful regions in each sample, and the label of each synthesized instance is inherited from the sample that contributed the useful regions to the synthesized image. The synthesized learning data are then used in the learning phase and help the model rapidly learn that identity and background regions are not correlated. Meanwhile, the proposed solution can be seen as a data augmentation approach that fully preserves the label information and is compatible with other data augmentation techniques.

    When re-id methods are learned in scenarios where the target person appears with identical garments in the gallery, the visual appearance of clothes is given the most importance in the final feature representation. Cloth-based representations are not reliable in long-term re-id settings, as people may change their clothes. Therefore, solutions that ignore clothing cues and focus on identity-relevant features are in demand. We transform the original data such that the identity-relevant information of people (e.g., face and body shape) is removed, while the identity-unrelated cues (i.e., color and texture of clothes) remain unchanged. A model learned on the synthesized dataset predicts the identity-unrelated cues (short-term features). We therefore train a second model, coupled with the first, that learns embeddings of the original data such that the similarity between the embeddings of the original and synthesized data is minimized. This way, the second model predicts based on the identity-related (long-term) representation of people.

    To evaluate the performance of the proposed models, we use PAR and person re-id datasets, namely BIODI, PETA, RAP, Market-1501, MSMT-V2, PRCC, LTCC, and MIT, and compare our experimental results with state-of-the-art methods in the field. In conclusion, the data collected from surveillance cameras have low resolution, such that the extraction of hard biometric features is not possible and face-based approaches produce poor results. In contrast, soft biometrics are robust to variations in data quality. We therefore propose approaches for both PAR and person re-id that learn discriminative features from each instance, and evaluate our proposed solutions on several publicly available benchmarks. This thesis was prepared at the University of Beira Interior and the IT Instituto de Telecomunicações, Soft Computing and Image Analysis Laboratory (SOCIA Lab), Covilhã Delegation, and was submitted to the University of Beira Interior for defense in a public examination session.
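The weighted loss used against attribute imbalance can be sketched as follows. The exact weighting scheme is an assumption here (weighting each attribute by its positive-sample frequency is a common choice in PAR work), not code taken from the thesis.

```python
# Illustrative per-attribute weighted binary cross-entropy: rare
# positive attributes receive a larger weight so they are not drowned
# out by frequent ones. pos_freq[j] is the fraction of training
# samples in which attribute j is present (an assumed scheme).
import numpy as np

def weighted_bce(pred, target, pos_freq, eps=1e-7):
    """Mean weighted BCE over all attributes and samples."""
    w_pos = 1.0 - pos_freq            # rare attribute -> large positive weight
    w_neg = pos_freq                  # frequent attribute -> large negative weight
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    loss = -(w_pos * target * np.log(pred)
             + w_neg * (1 - target) * np.log(1 - pred))
    return loss.mean()
```

A confident correct prediction yields a near-zero loss, while a confident wrong prediction on a rare attribute is penalized heavily, which is the intended corrective for class imbalance.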

    Masks: Maintaining Anonymity by Sequestering Key Statistics

    High-resolution digital cameras are becoming ever-larger parts of our daily lives, whether as part of closed-circuit surveillance systems or as part of the portable digital devices many of us carry around. Combine the broadening reach of these cameras with automatic face recognition technology, and the result is a sensor network ripe for abuse: our every action could be recorded and tagged with our identity, the date, and our location, as if each of us had an investigator tasked only with keeping us under constant surveillance. Add the continually falling cost of data storage to this mix, and we are left with a situation where the privacy abuses don't need to happen today: the stored imagery can be mined and re-mined forever, while the sophistication of automatic analysis continues to grow. The MASKS project takes the first steps toward addressing this problem. If we would like to de-identify faces before the images are shared with others, we cannot do so with ad hoc techniques applied identically to all faces. Since each face is unique, the method of disguising that face must be equally unique. In order to hide or reduce those critical identifying characteristics, we deliver the following foundational contributions toward characterizing the nature of facial information:
    - We have created a new pose-controlled, high-resolution database of facial images.
    - The most prominent anatomical markers on each face have been marked for position and shape, establishing a new gold standard for facial segmentation.
    - A parameterized model of the diversity of our subject population was built based on statistical analysis of the annotations. The model was validated by comparison with the performance of a standard set of artificial disguises.
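A common way to build such a parameterized shape model from landmark annotations is principal component analysis over the stacked coordinates: the leading components become the population's modes of variation. The sketch below assumes this PCA formulation for illustration; it is not the project's actual model.

```python
# Assumed PCA-style shape model over landmark annotations: each face
# contributes one row of flattened (x, y) landmark coordinates, and
# the top singular vectors capture the main modes of facial variation.
import numpy as np

def shape_model(landmarks, n_modes):
    """landmarks: (n_faces, 2 * n_points) array. Returns the mean
    shape and the top n_modes principal components (unit rows)."""
    mean = landmarks.mean(axis=0)
    centered = landmarks - mean
    # Rows of vt are orthonormal principal directions in coordinate space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]
```

Any face can then be approximated as the mean shape plus a small coefficient vector over the modes, which gives a compact handle for studying (or disguising) identifying characteristics.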

    Mixing Biometric Data For Generating Joint Identities and Preserving Privacy

    Biometrics is the science of automatically recognizing individuals by utilizing biological traits such as fingerprints, face, iris and voice. A classical biometric system digitizes the human body and uses this digitized identity for human recognition. In this work, we introduce the concept of mixing biometrics. Mixing biometrics refers to the process of generating a new biometric image by fusing images of different fingers, different faces, or different irises. The resultant mixed image can be used directly in the feature extraction and matching stages of an existing biometric system. In this regard, we design and systematically evaluate novel methods for generating mixed images for the fingerprint, iris and face modalities. Further, we extend the concept of mixing to accommodate two distinct modalities of an individual, viz., fingerprint and iris. The utility of mixing biometrics is demonstrated in two different applications. The first application deals with the issue of generating a joint digital identity. A joint identity inherits its uniqueness from two or more individuals and can be used in scenarios such as joint bank accounts or two-man rule systems. The second application deals with the issue of biometric privacy, where the concept of mixing is used for de-identifying or obscuring biometric images and for generating cancelable biometrics. Extensive experimental analysis suggests that the concept of biometric mixing has several benefits and can be easily incorporated into existing biometric systems.
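As a toy illustration of image-level mixing, one can combine the Fourier magnitude of one biometric sample with the phase of another and invert the transform. This spectral swap is only a stand-in for the modality-specific mixing methods designed in the dissertation, chosen because it is simple and self-contained.

```python
# Toy spectral mix: keep |FFT| of img_a, take the phase of img_b,
# and invert. Mixing two copies of the same image returns the image,
# which makes the construction easy to sanity-check.
import numpy as np

def spectral_mix(img_a, img_b):
    """Combine the Fourier magnitude of img_a with the phase of img_b."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    mixed = np.abs(fa) * np.exp(1j * np.angle(fb))
    return np.real(np.fft.ifft2(mixed))
```

The resulting image carries structure from both inputs, loosely analogous to how a mixed biometric image inherits characteristics from two source identities.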