
    Fingerprint Direct-Access Strategy Using Local-Star-Structure-based Discriminator Features: A Comparison Study

    This paper describes a comparison study of the proposed fingerprint direct-access strategy using local-star-structure-based discriminator features, covering both an internal comparison among different configurations and an external comparison against other strategies. Through careful minutiae-based feature extraction, a hashing-based indexing and retrieval mechanism, a candidate-list reduction technique based on a variable threshold on the score ratio, and a hill-climbing learning process, the strategy proved promising, as confirmed by the experimental results. In the external accuracy comparison, it outperformed the other strategies on three public data sets: up to a Penetration Rate (PR) of 5%, it consistently gave a lower Error Rate (ER). Sampling at PR 5%, it produced ER values of 4%, 10%, and 1% on FVC2000 DB2A, FVC2000 DB3A, and FVC2002 DB1A, respectively. When accuracy is instead measured by the area under the ER-PR curve, the strategy is neither the best nor the worst on FVC2000 DB2A and FVC2000 DB3A, while on FVC2002 DB1A it outperformed the others and even gave impressive results for indexes built from three impressions per finger (with or without NT), with an ideal step-down curve where a PR of 1% can always be maintained for smaller ER.
    DOI: http://dx.doi.org/10.11591/ijece.v4i5.658
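The hashing-based indexing-retrieval mechanism with score-ratio candidate-list reduction described in the abstract can be sketched roughly as follows. This is a minimal illustration that uses quantized minutiae-triangle side lengths as hash keys; the key construction, bin size, and score-ratio threshold are assumptions for illustration, not the paper's actual local-star features:

```python
from collections import defaultdict

def quantize_triplet(lengths, bin_size=10):
    # Quantize sorted triangle side lengths into a discrete hash key.
    # (Illustrative quantization; the paper's local-star features differ.)
    return tuple(int(l // bin_size) for l in sorted(lengths))

def build_index(gallery):
    # gallery: {finger_id: [triplet side-length tuples, ...]}
    index = defaultdict(set)
    for fid, triplets in gallery.items():
        for t in triplets:
            index[quantize_triplet(t)].add(fid)
    return index

def retrieve(index, probe_triplets, score_ratio=0.5):
    # Vote for gallery fingers sharing hash keys with the probe, then
    # keep only candidates scoring above a fraction of the best score
    # (the variable-threshold-on-score-ratio reduction idea).
    votes = defaultdict(int)
    for t in probe_triplets:
        for fid in index.get(quantize_triplet(t), ()):
            votes[fid] += 1
    if not votes:
        return []
    best = max(votes.values())
    return sorted((f for f, v in votes.items() if v >= score_ratio * best),
                  key=lambda f: -votes[f])
```

Shrinking the candidate list this way is what lowers the Penetration Rate: only the surviving candidates go on to full one-to-one matching.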

    On the Performance Improvement of Iris Biometric System

    Iris is an established biometric modality with many practical applications. Its performance is influenced by noise, database size, and feature representation. This thesis focuses on mitigating these challenges by efficiently characterising iris texture, developing multi-unit iris recognition, reducing the search space of large iris databases, and investigating whether iris patterns change over time. To suitably characterise the texture features of the iris, the Scale Invariant Feature Transform (SIFT) is combined with the Fourier transform to develop a keypoint descriptor, F-SIFT. The proposed F-SIFT is invariant to transformation, illumination, and occlusion, and has a strong texture-description property. For pairing keypoints from gallery and probe iris images, the Phase-Only Correlation (POC) function is used; the use of phase information reduces the wrong matches generated with SIFT. Results demonstrate the effectiveness of F-SIFT over existing keypoint descriptors. To perform multi-unit iris fusion, a novel classifier known as the Incremental Granular Relevance Vector Machine (iGRVM) is proposed, which incorporates incremental and granular learning into the RVM. The proposed classifier is by design scalable and unbiased, which is particularly suitable for biometrics. The match scores from the individual iris units are passed as input to the corresponding iGRVM classifiers, and the posterior probabilities are combined using a weighted sum rule. Experimentally, it is shown that multi-unit iris recognition outperforms single-unit iris recognition. For search-space reduction, local-feature-based indexing approaches are developed using multi-dimensional trees. Features extracted from annular iris images are used to index the database with a k-d tree. To handle the scalability issue of the k-d tree, a k-d-b tree based indexing approach is proposed. Another indexing approach using the R-tree is developed to minimise indexing errors.
    For retrieval, a hybrid coarse-to-fine search strategy is proposed. It is inferred from the results that unifying the hybrid search with the R-tree significantly improves identification performance. The iris is assumed to be stable over time; however, researchers have recently reported that false rejections increase over time, which in turn degrades performance. An empirical investigation has been made on standard iris aging databases to determine whether iris patterns change over time. From the results, it is found that the rejections are primarily due to the presence of other covariates such as blur, noise, occlusion, and pupil dilation, and not due to aging.
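The weighted-sum combination of per-unit posterior probabilities mentioned above can be sketched as follows. The function names and uniform default weights are illustrative assumptions; this is not the thesis's iGRVM implementation, only the fusion rule applied to its outputs:

```python
def fuse_posteriors(unit_posteriors, weights=None):
    """Combine genuine-class posterior probabilities from per-unit
    classifiers (e.g. left and right iris) with a weighted sum.

    unit_posteriors: list of P(genuine | score) values in [0, 1].
    weights: optional per-unit weights; defaults to uniform.
    """
    if weights is None:
        weights = [1.0] * len(unit_posteriors)
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, unit_posteriors)) / total

def decide(unit_posteriors, threshold=0.5, weights=None):
    # Accept the claimed identity when the fused posterior clears a threshold.
    return fuse_posteriors(unit_posteriors, weights) >= threshold
```

For example, fusing posteriors of 0.8 and 0.6 from two iris units with uniform weights gives a fused posterior of 0.7, which would be accepted at a 0.5 threshold.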

    A Survey on Soft Biometrics for Human Identification

    Due to rising security demands, the focus has shifted to multi-biometrics. Ancillary information extracted alongside primary biometric (face and body) traits, such as facial measurements, gender, skin colour, ethnicity, and height, is called soft biometrics. It can be integrated to improve the speed and overall performance of a primary biometric system (e.g., fusing face with facial marks), to generate a qualitative human semantic description of a person, or to limit the search over the whole dataset, for example using gender and ethnicity (e.g., "old African male with blue eyes") in a fusion framework. This chapter provides a holistic survey of soft biometrics covering the major works, with a focus on facial soft biometrics, and discusses some of the proposed feature extraction and classification techniques, showing their strengths and limitations.
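The search-limiting role of soft biometrics described above can be sketched as a simple gallery filter that prunes candidates before costly primary-trait matching. The attribute names and exact-match semantics are illustrative assumptions; real systems use soft-biometric classifier outputs with confidence scores rather than hard labels:

```python
def filter_by_soft_biometrics(gallery, **constraints):
    """Prune a gallery using ancillary soft-biometric attributes.

    gallery: list of dicts, each holding soft-biometric attributes
             alongside the primary biometric template.
    constraints: attribute=value pairs, e.g. gender="male".
    Returns only the entries matching every constraint, so the
    expensive primary matcher runs on a much smaller candidate set.
    """
    return [entry for entry in gallery
            if all(entry.get(k) == v for k, v in constraints.items())]
```

A semantic query like "African male" then touches only the matching fraction of the dataset instead of every enrolled subject.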

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Detection of near-duplicates in large image collections

    The vast number of images on the Web includes many duplicates, and an even larger number of near-duplicate variants derived from the same original. These include thumbnails stored by search engines, copies shared by various news portals, and images that appear on multiple web sites, legitimately or otherwise. Such near-duplicates appear in the results of many web image searches; they constitute redundancy and may also represent infringements of copyright. Digital images can be easily altered through simple manipulations such as conversion to grey-scale, colour-balance change, rescaling, rotation, and cropping. Any of these operations defeats simple duplicate-detection methods such as bit-level hashing. The ability to detect such variants with a reasonable degree of reliability and accuracy would support the reduction of redundancy in collections and in the presentation of search results, and would also allow the detection of possible copyright violations. Some existing methods for identifying near-duplicates are derived from computer vision techniques; these have shown high effectiveness in this domain but are computationally expensive and therefore impractical for large image collections. Other methods address the problem using conventional CBIR approaches that are more efficient but typically not as robust. None of the previous methods has addressed the problem in its entirety, and none has addressed the large-scale near-duplicate problem on the Web; there has been no analysis of the kinds of alterations that are common on the Web, nor any evaluation of whether real cases of near-duplication can in fact be identified. In this thesis, we analyse the different types of alterations and near-duplicates present in a range of popular web image searches, and establish a collection and evaluation ground truth using real-world near-duplicate examples.
    We present a simple ranking approach to reduce the number of local descriptors and thereby improve the efficiency of the descriptor-based retrieval method for near-duplicate detection. The descriptor-based method has been shown to produce near-perfect detection of near-duplicates, but was previously very expensive computationally. We show that, while maintaining comparable effectiveness, our method scales well to large collections of hundreds of thousands of images. We also explore a more compact indexing structure to support near-duplicate image detection. We develop a method to automatically detect the pair-wise near-duplicate relationship of images without the use of a query. We adapt the hash-based probabilistic counting method (originally used for near-duplicate text document detection) to local descriptors; our adaptation offers the first effective and efficient non-query-based approach to this domain. We further incorporate our pair-wise detection approach into the clustering of near-duplicates. We present a clustering method specifically for near-duplicate images that is arguably the first to achieve a high level of effectiveness in this domain. We also show that near-duplicates within a large collection of a million images can be effectively clustered using our approach in less than an hour, using relatively modest computational resources. Overall, our proposed methods provide practical approaches to the detection and management of near-duplicate images in large collections.
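The non-query-based, hash-based detection idea can be illustrated with a MinHash-style signature over a set of quantized local descriptors: the fraction of matching signature positions estimates Jaccard similarity, so likely near-duplicate pairs can be found without exhaustive full comparison. This is a generic stand-in, assuming blake2b hashing and 64 signature positions, not the thesis's exact probabilistic counting method:

```python
import hashlib

def minhash_signature(descriptors, num_hashes=64):
    # Build a MinHash signature of a set of (quantized) local descriptors.
    # Each position holds the minimum of a seeded hash over the set.
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{d}".encode(),
                                digest_size=8).digest(), "big")
            for d in descriptors))
    return sig

def estimated_similarity(sig_a, sig_b):
    # Fraction of agreeing positions approximates Jaccard similarity
    # of the underlying descriptor sets.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Because signatures are small and fixed-length, every image can be summarised once and candidate near-duplicate pairs found by comparing signatures, rather than the far larger raw descriptor sets.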