4,503 research outputs found

    Automatic facial recognition based on facial feature analysis


    Multimodal Biometrics Enhancement Recognition System based on Fusion of Fingerprint and PalmPrint: A Review

    This article is an overview of current multimodal biometric research based on fingerprint and palm print. It reviews previous studies of each modality separately and of its fusion with other biometric modalities. The basic biometric system consists of four stages: firstly, the sensor, which is used for enrolment…

    Quality-Based Conditional Processing in Multi-Biometrics: Application to Sensor Interoperability

    As biometric technology is increasingly deployed, it will be common to replace parts of operational systems with newer designs. The cost and inconvenience of re-acquiring enrolled users when a new vendor solution is incorporated makes this approach difficult, and many applications will need to deal regularly with information from different sources. These interoperability problems can dramatically affect the performance of biometric systems and thus need to be overcome. Here, we describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign, whose aim was to compare fusion algorithms when biometric signals were generated using several biometric devices under mismatched conditions. Quality measures computed from the raw biometric data are available to allow system adjustment to changing quality conditions due to device changes; this adjustment is referred to as quality-based conditional processing. The proposed fusion approach is based on linear logistic regression, in which fused scores tend to be log-likelihood ratios. This allows the easy and efficient combination of matching scores from different devices, assuming low dependence among modalities. In our system, quality information is used to switch between different system modules depending on the data source (the sensor, in our case) and to reject channels with low-quality data during fusion. We compare our fusion approach to a set of rule-based fusion schemes over normalized scores. Results show that the proposed approach outperforms all the rule-based fusion schemes. We also show that with the quality-based channel-rejection scheme, an overall improvement of 25% in the equal error rate is obtained.

    Comment: Published in IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans
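    The quality-based conditional fusion described in this abstract can be sketched in a few lines; the weights, bias, and quality threshold below are hypothetical placeholders, not the values trained in the evaluation:

    ```python
    import math

    def fuse_scores(scores, qualities, weights, bias, q_threshold=0.5):
        """Sketch of quality-based score fusion: channels whose quality
        falls below q_threshold are rejected, and the surviving scores
        are combined linearly so the fused score behaves like a
        log-likelihood ratio (all parameters here are illustrative)."""
        kept = [(s, w) for s, q, w in zip(scores, qualities, weights)
                if q >= q_threshold]
        if not kept:
            return None  # every channel was rejected
        return bias + sum(w * s for s, w in kept)

    def to_posterior(llr):
        """Map a log-likelihood-ratio-like fused score to a probability."""
        return 1.0 / (1.0 + math.exp(-llr))
    ```

    With two channels where the second has quality 0.2, only the first score survives the rejection step and the fused score reduces to its weighted contribution.
    
    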

    A Multimodal and Multi-Algorithmic Architecture for Data Fusion in Biometric Systems

    Authentication software based on biometric traits

    A Multimodal Technique for an Embedded Fingerprint Recognizer in Mobile Payment Systems

    The development and diffusion of distributed systems, directly connected to recent communication technologies, move people towards the era of mobile and ubiquitous systems. Distributed systems make merchant-customer relationships closer and more flexible, using reliable e-commerce technologies. These systems and environments need many distributed access points for the creation and management of secure identities and for the secure recognition of users. Traditionally, these access points are made possible by a software system with a main central server. This work proposes the study and implementation of a multimodal technique, based on biometric information, for identity management and personal ubiquitous authentication. The multimodal technique uses both fingerprint micro-features (minutiae) and fingerprint macro-features (singularity points) for robust user authentication. To strengthen the security level of electronic payment systems, an embedded hardware prototype has also been created: acting as a self-contained sensor, it performs the entire authentication process on the same device, so that all critical information (e.g. biometric data, account transactions, and cryptographic keys) is managed and stored inside the sensor, without any data transmission. The sensor has been prototyped using the Celoxica RC203E board, achieving fast execution time, low working frequency, and good recognition performance.
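    The combination of micro-feature (minutiae) and macro-feature (singularity point) matching can be illustrated with a toy decision rule; the thresholds and the AND-style combination are illustrative assumptions, not the paper's actual scheme:

    ```python
    def authenticate(minutiae_score, singularity_score,
                     t_micro=0.6, t_macro=0.5):
        """Hypothetical decision-level combination of fingerprint
        micro-features (minutiae) and macro-features (singularity
        points): accept only when both matchers clear their
        (assumed) thresholds."""
        return minutiae_score >= t_micro and singularity_score >= t_macro
    ```

    Requiring both matchers to agree trades some convenience for security, which fits the payment-system setting the abstract describes.
    
    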

    Genetic and Evolutionary Biometrics: Multiobjective, Multimodal Feature Selection/Weighting for Tightly Coupled Periocular and Face Recognition

    The Genetic & Evolutionary Computation (GEC) research community has seen the emergence of a new subarea, referred to as Genetic & Evolutionary Biometrics (GEB), as GEC techniques have been applied to solve a variety of biometric problems. In this dissertation, we present three new GEB techniques for multibiometric recognition: Genetic & Evolutionary Feature Selection (GEFeS), Weighting (GEFeW), and Weighting/Selection (GEFeWS). Instead of selecting the most salient individual features, these techniques evolve subsets of the most salient combinations of features and/or weight features based on their discriminative ability, in an effort to increase accuracy while decreasing the overall number of features needed for recognition. We also incorporate cross-validation into our best-performing technique in an attempt to evolve feature masks (FMs) that generalize well to unseen subjects, and we search the value preference space in an attempt to analyze its impact with respect to optimization and generalization. Our results show that by fusing the periocular biometric with the face, we can achieve higher recognition accuracies than using the two biometric modalities independently. Our results also show that our GEB techniques achieve higher recognition rates than the baseline methods while using significantly fewer features. In addition, by incorporating machine learning, we were able to create FMs that generalize well to unseen subjects and use less than 50% of the extracted features. Finally, by searching the value preference space, we were able to determine which weights were most effective in terms of optimization and generalization.
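    An evolutionary search over binary feature masks of the kind GEFeS performs can be sketched minimally; this (1+1)-style loop, the feature-count penalty `alpha`, and the accuracy callback are simplifying assumptions, not the dissertation's algorithm:

    ```python
    import random

    def fitness(mask, accuracy_fn, alpha=0.01):
        """Illustrative GEFeS-style fitness: reward recognition
        accuracy, penalize the number of selected features."""
        return accuracy_fn(mask) - alpha * sum(mask)

    def evolve_masks(n_features, accuracy_fn, gens=50, seed=0):
        """Minimal (1+1)-style evolutionary search over binary feature
        masks: mutate each bit with probability 1/n_features and keep
        the child when it is at least as fit as the parent."""
        rng = random.Random(seed)
        best = [rng.randint(0, 1) for _ in range(n_features)]
        best_f = fitness(best, accuracy_fn)
        for _ in range(gens):
            child = [b ^ (rng.random() < 1.0 / n_features) for b in best]
            f = fitness(child, accuracy_fn)
            if f >= best_f:
                best, best_f = child, f
        return best, best_f
    ```

    The `accuracy_fn` callback stands in for a real matcher evaluated on a validation set; the penalty term is what drives the search toward masks that use fewer features.
    
    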

    Wavelet-Based Face Recognition Schemes


    Content Recognition and Context Modeling for Document Analysis and Retrieval

    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval. Meanwhile, the processing of documents with unconstrained layouts and complex formatting often requires effective leveraging of broad contextual knowledge. In this dissertation, we first present a novel approach for document image content categorization, using a lexicon of shape features. Each lexical word corresponds to a scale- and rotation-invariant local shape feature that is generic enough to be detected repeatably and is segmentation free. A concise, structurally indexed shape lexicon is learned by clustering and partitioning feature types through graph cuts. Our idea finds successful application in several challenging tasks, including content recognition of diverse web images and language identification on documents composed of mixed machine-printed text and handwriting. Second, we address two fundamental problems in signature-based document image retrieval. Facing continually increasing volumes of documents, detecting and recognizing unique, evidentiary visual entities (e.g., signatures and logos) provides a practical and reliable supplement to the OCR recognition of printed text. We propose a novel multi-scale framework to detect and segment signatures jointly from document images, based on the structural saliency under a signature production model. We formulate the problem of signature retrieval in the unconstrained setting of geometry-invariant deformable shape matching and demonstrate state-of-the-art performance in signature matching and verification. Third, we present a model-based approach for extracting relevant named entities from unstructured documents. In a wide range of applications that require structured information from diverse, unstructured document images, processing OCR text does not give satisfactory results due to the absence of linguistic context. Our approach enables learning of inference rules collectively based on contextual information from both page layout and text features. Finally, we demonstrate the importance of mining general web user behavior data for improving document ranking and other web search experiences. The context of web user activities reveals their preferences and intents, and we emphasize the analysis of individual user sessions for creating aggregate models. We introduce a novel algorithm for estimating web page and web site importance, and discuss its theoretical foundation based on an intentional surfer model. We demonstrate that our approach significantly improves large-scale document retrieval performance.
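    The shape-lexicon idea, quantizing each local shape feature to a lexical word, can be illustrated with simple nearest-center assignment; the real method learns the lexicon by graph-cut clustering of feature types, so this is only a toy sketch with made-up centers:

    ```python
    def build_lexicon(features, centers):
        """Toy sketch of a shape lexicon: each local feature descriptor
        is quantized to the index of its nearest cluster center, so a
        document becomes a sequence of lexical words."""
        def nearest(f):
            return min(
                range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(f, centers[i])),
            )
        return [nearest(f) for f in features]
    ```

    Once features are mapped to word indices, standard indexing and retrieval machinery (inverted files, word histograms) applies directly, which is what makes the lexicon representation attractive for diverse collections.
    
    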