    Civilian Target Recognition using Hierarchical Fusion

    The growth of computer vision technology has been marked by attempts to imitate human behavior to impart robustness and confidence to the decision-making process of automated systems. Examples of disciplines in computer vision that have been targets of such efforts are Automatic Target Recognition (ATR) and fusion. ATR is the process of aided or unaided target detection and recognition using data from different sensors. Usually, it is synonymous with its military application of recognizing battlefield targets using imaging sensors. Fusion is the process of integrating information from different sources at the data or decision levels so as to provide a single robust decision as opposed to multiple individual results. This thesis combines these two research areas to provide improved classification accuracy in recognizing civilian targets. The results obtained reaffirm that fusion techniques tend to improve the recognition rates of ATR systems. Previous work in ATR has mainly dealt with military targets and a single level of data fusion. Expensive sensors and time-consuming algorithms are generally used to improve system performance. In this thesis, civilian target recognition, which is considered to be harder than military target recognition, is performed. Inexpensive sensors are used to keep the system cost low. In order to compensate for the reduced system ability, fusion is performed at two different levels of the ATR system: the event level and the sensor level. Only preliminary image processing and pattern recognition techniques have been used so as to maintain low operation times. High classification rates are obtained using data fusion techniques alone. Another contribution of this thesis is the provision of a single framework to perform all operations from target data acquisition to the final decision making. The Sensor Fusion Testbed (SFTB) designed by Northrop Grumman Systems has been used by the Night Vision & Electronic Sensors Directorate to obtain images of seven different types of civilian targets. Image segmentation is performed using background subtraction. The seven invariant moments are extracted from the segmented image and basic classification is performed using the k-Nearest Neighbor method. Cross-validation is used to provide a better idea of the classification ability of the system. Temporal fusion at the event level is performed using majority voting, and sensor-level fusion is done using the Behavior-Knowledge Space (BKS) method. Two separate databases were used. The first database uses seven targets (2 cars, 2 SUVs, 2 trucks and 1 stake body light truck). Individual-frame, temporal-fusion and BKS-fusion results are around 65%, 70% and 77% respectively. The second database has three targets (cars, SUVs and trucks) formed by combining classes from the first database. Higher classification accuracies are observed here: recognition rates of 75%, 90% and 95% are obtained at the frame, event and sensor levels. It can be seen that, on average, recognition accuracy improves with increasing levels of fusion. Also, distance-based classification was performed to study the variation of system performance with the distance of the target from the cameras. The results are along expected lines and indicate the efficacy of fusion techniques for the ATR problem. Future work using more complex image processing and pattern recognition routines can further improve the classification performance of the system. The SFTB can be equipped with these algorithms and field-tested to check real-time performance.
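
    A rough Python sketch of the per-frame processing chain described above (background-subtraction segmentation, the seven Hu invariant moments, k-Nearest Neighbor classification, and majority-vote temporal fusion) is given below; the threshold value, neighborhood size and variable names are illustrative assumptions, not the thesis's SFTB implementation.

    # Illustrative sketch, not the thesis code: segment by background subtraction,
    # extract the 7 Hu invariant moments, classify each frame with kNN, and fuse
    # the per-frame labels over an event by majority vote.
    import cv2
    import numpy as np
    from collections import Counter
    from sklearn.neighbors import KNeighborsClassifier

    def hu_features(frame_gray, background_gray, thresh=30):
        """Segment the target by background subtraction and return the 7 Hu moments."""
        diff = cv2.absdiff(frame_gray, background_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
        # Log-scale the moments, which span many orders of magnitude.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def classify_event(frames, background, knn):
        """Classify each frame, then fuse the per-frame labels by majority vote."""
        labels = [knn.predict(hu_features(f, background).reshape(1, -1))[0] for f in frames]
        return Counter(labels).most_common(1)[0][0]

    # Usage (assuming train_X, train_y, event_frames and background exist):
    # knn = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)
    # event_label = classify_event(event_frames, background, knn)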

    A Survey of Biometric Recognition Systems in E-Business Transactions

    The global expansion of e-business applications has introduced novel challenges, with an escalating number of security issues linked to online transactions, such as phishing attacks and identity theft. E-business involves conducting buying and selling activities online, facilitated by the Internet. The application of biometrics has been proposed as a solution to mitigate security concerns in e-business transactions. Biometric recognition involves the use of automated techniques to validate an individual's identity based on physiological and behavioural characteristics. This research focuses specifically on implementing a multimodal biometric recognition system that incorporates face and fingerprint data to enhance the security of e-business transactions. In contrast to unimodal systems relying on a single biometric modality, this approach addresses limitations such as noise, lack of universality, and interclass and intraclass variations. The study emphasizes the advantages of multimodal biometric systems while shedding light on vulnerabilities of biometrics in the e-business context. This in-depth analysis serves as a valuable resource for those exploring the intersection of e-business and biometrics, providing insights into the strengths, challenges, and best practices for stakeholders in this domain. Finally, the paper concludes with a summary and outlines potential avenues for future research.

    Threshold-optimized decision-level fusion and its application to biometrics

    Fusion is a popular practice to increase the reliability of biometric verification. In this paper, we propose an optimal fusion scheme at the decision level by the AND or OR rule, based on optimizing matching score thresholds. The proposed fusion scheme always gives an improvement in the Neyman–Pearson sense over the component classifiers that are fused. The theory of threshold-optimized decision-level fusion is presented, and its applications are discussed. Fusion experiments are done on the FRGC database, which contains 2D texture data and 3D shape data. The proposed decision fusion improves the system performance in a way comparable to or better than conventional score-level fusion. It is noteworthy that, in practice, threshold-optimized decision-level fusion by the OR rule is especially useful in the presence of outliers.
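
    The threshold-optimized AND/OR decision fusion described above can be illustrated with a minimal sketch; the brute-force grid search and fixed false-accept constraint below are simplifying assumptions standing in for the paper's Neyman–Pearson optimization, and all scores and thresholds are placeholders.

    # Minimal sketch of decision-level fusion by the AND / OR rule with
    # per-classifier score thresholds chosen by a crude grid search.
    import numpy as np

    def fuse(scores_a, scores_b, t_a, t_b, rule="OR"):
        """Threshold each matcher's scores and combine the accept/reject decisions."""
        dec_a, dec_b = scores_a >= t_a, scores_b >= t_b
        return dec_a & dec_b if rule == "AND" else dec_a | dec_b

    def error_rates(decisions, labels):
        """False-accept and false-reject rates for boolean decisions vs. 0/1 labels."""
        genuine = labels == 1
        far = np.mean(decisions[~genuine])   # impostors accepted
        frr = np.mean(~decisions[genuine])   # genuine users rejected
        return far, frr

    def best_thresholds(scores_a, scores_b, labels, rule="OR", max_far=0.01):
        """Return (frr, t_a, t_b) with the lowest FRR subject to FAR <= max_far."""
        grid = np.quantile(np.concatenate([scores_a, scores_b]), np.linspace(0, 1, 50))
        best = None
        for t_a in grid:
            for t_b in grid:
                far, frr = error_rates(fuse(scores_a, scores_b, t_a, t_b, rule), labels)
                if far <= max_far and (best is None or frr < best[0]):
                    best = (frr, t_a, t_b)
        return best   # None if no threshold pair meets the FAR constraint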

    Analysis of Score-Level Fusion Rules for Deepfake Detection

    Deepfake detection is of fundamental importance to preserve the reliability of multimedia communications. Modern deepfake detection systems are often specialized in one or more types of manipulation but are not able to generalize. On the other hand, when properly designed, ensemble learning and fusion techniques can mitigate this issue. In this paper, we exploit the complementarity of different individual classifiers and evaluate which fusion rules are best suited to increasing the generalization capacity of modern deepfake detection systems. We also give designers some insights for selecting the most appropriate approach.
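
    For reference, the kinds of score-level fusion rules such a comparison typically covers can be sketched as follows; the rule set, the assumption that detector scores are calibrated to [0, 1], and the 0.5 decision threshold are illustrative rather than the paper's exact protocol.

    # Common score-level fusion rules applied to a (n_detectors, n_samples) matrix
    # of deepfake scores; a higher fused score means "more likely fake".
    import numpy as np

    def fuse_scores(score_matrix, rule="mean", weights=None):
        if rule == "mean":
            return score_matrix.mean(axis=0)
        if rule == "max":
            return score_matrix.max(axis=0)
        if rule == "min":
            return score_matrix.min(axis=0)
        if rule == "product":
            return score_matrix.prod(axis=0)
        if rule == "weighted":
            w = np.asarray(weights, dtype=float)
            return (w[:, None] * score_matrix).sum(axis=0) / w.sum()
        raise ValueError(f"unknown rule: {rule}")

    # Example: three detectors scoring four test videos (made-up numbers).
    scores = np.array([[0.91, 0.20, 0.65, 0.05],
                       [0.85, 0.35, 0.40, 0.10],
                       [0.70, 0.15, 0.80, 0.30]])
    predicted_fake = fuse_scores(scores, rule="mean") >= 0.5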

    Multimodal biometric system for ECG, ear and iris recognition based on local descriptors

    © 2019, Springer Science+Business Media, LLC, part of Springer Nature. Combining information extracted from different biometric modalities in a multimodal biometric recognition system aims to overcome the drawbacks encountered in unimodal biometric systems. The fusion of many biometrics, such as face, fingerprint and iris, has been proposed. Recently, electrocardiograms (ECG) have been used as a new biometric technology in unimodal and multimodal biometric recognition systems. The ECG inherently carries the characteristic of liveness of a person, making it hard to spoof compared to other biometric techniques. Ear biometrics present a rich and stable source of information over an acceptable period of human life. Iris biometrics have been combined with different biometric modalities such as fingerprint, face and palm print because of their higher accuracy and reliability. In this paper, a new multimodal biometric system based on ECG, ear and iris biometrics fused at the feature level is proposed. Preprocessing techniques including normalization and segmentation are applied to the ECG, ear and iris biometrics. Then, local texture descriptors, namely 1D-LBP (One-Dimensional Local Binary Patterns), Shifted-1D-LBP and 1D-MR-LBP (Multi-Resolution), are used to extract the important features from the ECG signal and to convert the ear and iris images to 1D signals. KNN and RBF classifiers are used for matching, to classify an unknown user as genuine or impostor. The developed system is validated using the benchmark ID-ECG, USTB1, USTB2 and AMI ear, and CASIA v1 iris databases. The experimental results demonstrate that the proposed approach outperforms unimodal biometric systems. A Correct Recognition Rate (CRR) of 100% is achieved with an Equal Error Rate (EER) of 0.5%.
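
    A hedged sketch of a basic 1D-LBP descriptor of the kind named above, applied to a 1D signal and matched with kNN, is shown below; the neighborhood size, histogram length and feature-level concatenation are illustrative assumptions, not the paper's exact formulation.

    # Basic 1D-LBP: each sample is compared with p neighbors (p/2 on each side),
    # the comparison bits form a code, and the histogram of codes is the feature.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def lbp_1d(signal, p=8):
        signal = np.asarray(signal, dtype=float)
        half = p // 2
        codes = []
        for i in range(half, len(signal) - half):
            neighbors = np.concatenate([signal[i - half:i], signal[i + 1:i + 1 + half]])
            bits = (neighbors >= signal[i]).astype(int)
            codes.append(int("".join(map(str, bits)), 2))
        hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p), density=True)
        return hist

    # Hypothetical feature-level fusion: concatenate the ECG, ear and iris
    # descriptors (images flattened to 1D) and match with a 1-NN classifier.
    # feat = np.concatenate([lbp_1d(ecg), lbp_1d(ear_img.ravel()), lbp_1d(iris_img.ravel())])
    # knn = KNeighborsClassifier(n_neighbors=1).fit(train_feats, train_ids)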

    A Comprehensive Mapping and Real-World Evaluation of Multi-Object Tracking on Automated Vehicles

    Multi-Object Tracking (MOT) is a field critical to Automated Vehicle (AV) perception systems. However, it is large, complex, spans research fields, and lacks resources for integration with real sensors and implementation on AVs. Factors such as these make it difficult for new researchers and practitioners to enter the field. This thesis presents two main contributions: 1) a comprehensive mapping of the field of Multi-Object Trackers (MOTs) with a specific focus on Automated Vehicles (AVs) and 2) a real-world evaluation of an MOT developed and tuned using COTS (Commercial Off-The-Shelf) software toolsets. The first contribution aims to give a comprehensive overview of MOTs and various MOT subfields for AVs that have not been presented as holistically in other papers. The second contribution aims to illustrate some of the benefits of using a COTS MOT toolset and some of the difficulties associated with using real-world data. This MOT performed accurate state estimation of a target vehicle through the tracking and fusion of data from a radar and a vision sensor using a Central-Level Track Processing approach and a Global Nearest Neighbors assignment algorithm. It had a 0.44 m positional Root Mean Squared Error (RMSE) over a 40 m approach test. It is the author's hope that this work provides an overview of the MOT field that will help new researchers and practitioners enter the field. Additionally, the author hopes that the evaluation section illustrates some difficulties of using real-world data and provides a good pathway for developing and deploying MOTs from software toolsets to Automated Vehicles.
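
    The Global Nearest Neighbors association step mentioned above can be sketched with a standard Hungarian-algorithm solver; the Euclidean cost, gate value and example positions are illustrative assumptions rather than the thesis's radar/vision configuration.

    # GNN data association: build a track-to-detection cost matrix, gate it, and
    # solve the assignment; plus the positional RMSE metric quoted above.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def gnn_associate(track_preds, detections, gate=5.0):
        """Return (track_idx, det_idx) pairs whose distance is within the gate."""
        cost = np.linalg.norm(track_preds[:, None, :] - detections[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]

    def positional_rmse(estimates, ground_truth):
        """Root Mean Squared Error of position estimates against ground truth."""
        err = np.linalg.norm(estimates - ground_truth, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))

    # Example with made-up 2D positions in meters:
    tracks = np.array([[10.0, 2.0], [25.0, -1.0]])
    dets = np.array([[10.4, 2.1], [24.7, -0.8], [50.0, 3.0]])
    print(gnn_associate(tracks, dets))   # -> [(0, 0), (1, 1)]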

    Personal Identification based on Multi Biometric Traits

    Biometric systems based on a single biometric measure (unimodal) usually suffer from a variety of problems and limitations, such as noisy data, insufficient security and non-universality, so a multibiometric system is used to improve the recognition rate and to obtain better security and higher efficiency than unimodal systems. This study aims to identify a person by using multibiometric traits (signature, face and fingerprint) with different techniques (Singular Value Decomposition (SVD), PCA and wavelet energy). The quality and accuracy of the identification and recognition of the person are measured in this system by computing the Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE) for face, fingerprint, and signature.
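
    For reference, the MSE and PSNR measures cited above can be sketched as follows for 8-bit images; this is a generic illustration of the standard formulas, not the paper's evaluation code.

    # Mean Square Error and Peak Signal-to-Noise Ratio between two same-sized images.
    import numpy as np

    def mse(original, reconstructed):
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return float(np.mean(diff ** 2))

    def psnr(original, reconstructed, max_pixel=255.0):
        err = mse(original, reconstructed)
        return float("inf") if err == 0 else 10.0 * np.log10(max_pixel ** 2 / err)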