    Biometric identification and recognition for iris using False Rejection Rate (FRR) / Musab A. M. Ali

    Iris recognition is reckoned among the most reliable and accurate biometrics for identification. The objectives of this research are therefore: the development of new algorithms for iris segmentation, in particular the proposed Fusion of Profile and Mask (FPM) Technique for finding the actual center of the pupil with high accuracy prior to iris localization; an enhancement of iris normalization in which only a quarter-size iris image is processed (instead of the whole or half image) for better precision and faster recognition; and the use of the robust Support Vector Machine (SVM) as classifier. A further aim of this research is the integration of a cancelable biometrics feature into the proposed iris recognition technique via a non-invertible transformation, which determines the security of feature transformation-based template protection techniques. It is therefore significant to formulate a non-invertibility measure that rules out an adversary guessing the original biometric even when the transformed template has been obtained. The biometric data is thus protected at every stage of recognition, and if any information in the database is compromised, only the cancelable biometric template is affected, leaving the original biometric information untouched. To evaluate and verify the effectiveness of the proposed technique, the CASIA-A (version 3.1) and Bath-A iris databases were selected for performance testing. Briefly, the iris recognition system proposed in this research work first locates the pupil via the novel Fusion of Profile and Mask (FPM) Technique, which targets the actual center of the pupil, and then localizes the actual iris region with the circular Hough transform.
    Next, a smaller yet optimal and effective normalized iris image size is selected by applying different normalization factors. Instead of processing the whole or half of an iris image, the 480 code size, equivalent to a quarter-size iris image, is selected for its outstandingly accurate results and lower computational complexity. The subsequent steps are feature extraction with the DAUB3 wavelet transform, an additional template-security step, namely the non-invertible transform (cancelable biometrics method), and finally matching/classification with the Support Vector Machine (non-linear quadratic kernel). The experimental results showed a recognition rate of 99.9% on the Bath-A data set, with a maximum decision criterion of 0.97.
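The protection scheme described above can be sketched in a few lines. This is a minimal illustration, not the thesis's exact algorithm: the projection dimension, the synthetic 480-element iris codes, the user key, and the class separation are all assumptions chosen only to show how a non-invertible (many-to-one) transform composes with a quadratic-kernel SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def cancelable_template(features, key, out_dim=120):
    """Non-invertible transform: project the iris code onto a random
    subspace seeded by a user-specific key. Because out_dim is smaller
    than the code length, the mapping is many-to-one, so the original
    code cannot be recovered from the template; issuing a new key
    produces a fresh, unlinkable template (cancelability)."""
    proj_rng = np.random.default_rng(key)  # key is a hypothetical user secret
    P = proj_rng.standard_normal((out_dim, features.shape[-1]))
    return features @ P.T

# Synthetic stand-in for 480-element quarter-size iris codes, two subjects.
codes = rng.standard_normal((40, 480))
labels = np.repeat([0, 1], 20)
codes[labels == 1] += 1.5  # artificial between-subject separation

templates = cancelable_template(codes, key=1234)

# Quadratic-kernel SVM, matching the classifier named in the abstract.
clf = SVC(kernel="poly", degree=2, coef0=1).fit(templates, labels)
acc = clf.score(templates, labels)
```

If the template database leaks, re-enrolling the same iris with a new key yields an entirely different template, which is the property the abstract refers to as cancelability.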

    Biologically Inspired Approaches to Automated Feature Extraction and Target Recognition

    Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from incorrect labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution. Air Force Office of Scientific Research (F40620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-001-1-2016); National Science Foundation (SBE-0354378; BCS-0235298); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency and the National Science Foundation (Siegfried Martens; NMA 501-03-1-2030, DGE-0221680); Department of Homeland Security graduate fellowship

    Improving acoustic vehicle classification by information fusion

    We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion diagram in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production: a number of harmonic components are extracted to characterize factors related to the vehicle’s resonance. The second set of features is extracted by a computationally efficient discriminatory analysis: a group of key frequency components is selected by mutual information, accounting for sound production from the vehicle’s exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach effectively increases classification accuracy compared with using each individual feature set alone, and that the Bayesian-based decision-level fusion outperforms a feature-level fusion approach.
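The decision-level fusion step can be illustrated with the classic product rule: assuming the two feature sets are conditionally independent given the class, the joint posterior is proportional to the product of the per-classifier posteriors divided by the class prior. This is a generic sketch, not the paper's modified algorithm; the three-class posteriors and uniform priors below are hypothetical.

```python
import numpy as np

def bayes_fusion(posteriors_a, posteriors_b, priors):
    """Product-rule decision-level fusion: p(c | x_a, x_b) is proportional
    to p(c | x_a) * p(c | x_b) / p(c) under conditional independence of
    the two feature sets given the class."""
    fused = posteriors_a * posteriors_b / priors
    return fused / fused.sum(axis=-1, keepdims=True)

# Two hypothetical classifiers scoring the same three-class vehicle sample,
# each trained on its own feature set.
p_harmonic = np.array([0.6, 0.3, 0.1])  # harmonic-component classifier
p_spectral = np.array([0.5, 0.1, 0.4])  # mutual-information key-frequency classifier
priors = np.array([1 / 3, 1 / 3, 1 / 3])

fused = bayes_fusion(p_harmonic, p_spectral, priors)
decision = int(np.argmax(fused))  # class on which both classifiers agree most
```

Because each classifier sees a different aspect of the acoustic signature, the product sharpens agreement (class 0 here) and suppresses classes on which the two disagree, which is the intuition behind fusing the two feature streams at the decision level.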