
    Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

    In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data systems, fuzzy extractors, the fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and operate on a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD), i.e. the number of bit errors between the binary vectors obtained in the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the tradeoff between the false rejection rate (FRR) and the false acceptance rate (FAR). One method to improve the FRR is to use multiple biometric samples in either the enrollment or the verification phase, which suppresses noise and thereby reduces the number of bit errors and decreases the HD. In practice, the number of samples is chosen empirically, without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples used in the enrollment and verification phases. The detection error tradeoff (DET) curve, which combines the false acceptance and false rejection rates, is estimated to assess the system performance. The analytic expressions are validated on the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
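
    As a rough illustration of the Hamming-distance decision and the resulting FRR/FAR tradeoff discussed above, the sketch below assumes independent bit errors with fixed per-bit error probabilities for genuine and impostor comparisons. It is not the paper's Gaussian framework, only a minimal binomial model of the same decision rule; the probabilities and code length are illustrative assumptions.

```python
# Minimal sketch: FRR/FAR of a Hamming-distance decision under an assumed
# model of independent bit errors.  The per-bit error rates p_genuine and
# p_impostor are illustrative assumptions, not values from the paper.
from math import comb

def error_rates(n_bits, threshold, p_genuine=0.15, p_impostor=0.5):
    """Return (FRR, FAR) when accepting iff the Hamming distance <= threshold."""
    def binom_cdf(p):
        # P(number of bit errors <= threshold) for per-bit error probability p
        return sum(comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
                   for k in range(threshold + 1))
    frr = 1.0 - binom_cdf(p_genuine)   # genuine comparison rejected
    far = binom_cdf(p_impostor)        # impostor comparison accepted
    return frr, far

if __name__ == "__main__":
    # Sweeping the threshold (bounded in practice by the ECC's
    # error-correcting capacity) traces a DET-style curve.
    n_bits = 255
    for t in range(0, n_bits + 1, 15):
        frr, far = error_rates(n_bits, t)
        print(f"t={t:3d}  FRR={frr:.3e}  FAR={far:.3e}")
```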

    Neuro-inspired edge feature fusion using Choquet integrals

    It is known that the human visual system performs a hierarchical information process in which early vision cues (or primitives) are fused in the visual cortex to compose complex shapes and descriptors. While some aspects of this process have been studied extensively, such as lens adaptation or feature detection, others, such as feature fusion, have been mostly left aside. In this work, we elaborate on the fusion of early vision primitives using generalizations of the Choquet integral, novel aggregation operators that have been studied extensively in recent years. We propose to use generalizations of the Choquet integral to sensibly fuse elementary edge cues, in an attempt to model the behaviour of neurons in the early visual cortex. Our proposal leads to a fully-framed edge detection algorithm whose performance is put to the test on state-of-the-art edge detection datasets. The authors gratefully acknowledge the financial support of the Spanish Ministry of Science and Technology (project PID2019-108392GB-I00 (AEI/10.13039/501100011033)), the Research Services of Universidad Pública de Navarra, CNPq (307781/2016-0, 301618/2019-4), FAPERGS (19/2551-0001660) and PNPD/CAPES (464880/2019-00).
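
    As a point of reference for the aggregation step described above, the following sketch fuses per-pixel edge cues with the standard discrete Choquet integral. The cardinality-based power measure mu(A) = (|A|/n)^q and the toy cue values are illustrative assumptions, not the measures or generalizations used in the paper.

```python
# Minimal sketch: fusing per-pixel edge cues with a discrete Choquet integral
# under an assumed symmetric (cardinality-based) power measure.
import numpy as np

def choquet_fuse(cues, q=2.0):
    """Fuse an array of edge cues of shape (n_cues, H, W) pixel-wise.

    Discrete Choquet integral with mu(A) = (|A| / n) ** q; q = 1 recovers
    the arithmetic mean of the cues.
    """
    n = cues.shape[0]
    x = np.sort(cues, axis=0)                       # x_(1) <= ... <= x_(n) per pixel
    # mu of the upper set A_(i) = {(i), ..., (n)} depends only on its size n - i + 1
    mu = ((n - np.arange(n)) / n) ** q              # shape (n,)
    diffs = np.diff(x, axis=0, prepend=np.zeros_like(x[:1]))  # x_(i) - x_(i-1)
    return np.tensordot(mu, diffs, axes=(0, 0))     # weighted sum over the cues

if __name__ == "__main__":
    # Example: fuse three toy gradient-magnitude cues for a 4x4 image.
    rng = np.random.default_rng(0)
    cues = rng.random((3, 4, 4))
    fused = choquet_fuse(cues)
    print(fused.shape, float(fused.min()), float(fused.max()))
```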

    Text Detection and Pose Estimation for a Reading Robot


    Vehicle Detection and Tracking Techniques: A Concise Review

    Vehicle detection and tracking applications play an important role in civilian and military settings, such as highway traffic surveillance, control and management, and urban traffic planning. Vehicle detection on roads is used for vehicle tracking, counting, estimating the average speed of each individual vehicle, traffic analysis, and vehicle categorization, and may be deployed under changing environmental conditions. In this review, we present a concise overview of the image processing methods and analysis tools used to build the aforementioned traffic surveillance systems. More precisely, and in contrast with other reviews, we classify the processing methods into three categories to explain the traffic systems more clearly.
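
    To make the kind of processing surveyed above concrete, here is a minimal sketch of one widely used detection approach: background subtraction followed by contour-based blob extraction. It assumes OpenCV 4.x; the video filename, MOG2 parameters, and area threshold are illustrative assumptions, not values taken from the review.

```python
# Minimal sketch: vehicle candidate detection via background subtraction
# (MOG2) and contour extraction.  "traffic.mp4" is a placeholder path.
import cv2

def detect_vehicles(video_path="traffic.mp4", min_area=500):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                        # foreground mask
        mask = cv2.medianBlur(mask, 5)                        # suppress speckle noise
        _, mask = cv2.threshold(mask, 200, 255,
                                cv2.THRESH_BINARY)            # drop shadow pixels (value 127)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]           # candidate vehicle boxes
        yield frame, boxes
    cap.release()

if __name__ == "__main__":
    for frame, boxes in detect_vehicles():
        print(f"{len(boxes)} candidate vehicles in this frame")
```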

    Enhanced detection of point correspondences in single-shot structured light

    The crucial role of point correspondences in stereo vision and camera-projector calibration is to determine the relationship between the camera view(s) and the projector view(s). Consequently, acquiring accurate and robust point correspondences can result in a very accurate 3D point cloud of a scene. Designing a method that can detect pixel correspondences quickly and accurately, and that is robust to factors such as object motion and color, is an important subject of study. The information that lies in the point correspondences determines the geometry of the scene, in which depth plays a very important role, if not the most important one. However, point correspondences will include some outliers. Outlier removal is therefore another important aspect of obtaining correspondences, and it can have a substantial impact on the reconstructed point cloud of an object. During the Single-Shot Structured Light (SSSL) calibration process, a pattern consisting of tags, each containing a differently shaped symbol and separated by a grid, is projected onto the object. The intersections of the grid lines are considered potential pixel correspondences between a camera image and the projector pattern. The purpose of this thesis is to study the robustness and accuracy of pixel correspondences and to enhance their quality. We propose a detection method that uses the model of the pattern, specifically the grid lines, which are the largest and brightest features of the pattern. The input image is partitioned into smaller patches, and an optimization process is executed on each patch. Eventually, the grid lines are detected and fitted, and the intersections of those lines are taken as potential corresponding pixels between the views. To remove incorrect pixel correspondences, i.e. outliers, Connected Component Analysis is used to find the detected point closest to the top-left corner of each tag; the points remaining after this step are taken as the correct pixel correspondences. Experimental results show the benefit of using a locally adaptive thresholding method over the baseline for detecting tags: the proposed thresholding method maintains the accuracy of the baseline while tuning all of its parameters automatically, whereas the baseline requires manual fine-tuning of some parameters. The introduced model-based grid intersection detection yields an approximately 50-fold speedup. The accuracy of the point correspondences is compared with a state-of-the-art method by reconstructing the final point clouds with both methods and measuring them against the CAD model as ground truth. The results show that the distance error between the reconstructed point cloud and the CAD model is, on average, 3 pixels higher for the proposed method than for the baseline.
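
    As a rough companion to the pipeline described above, the sketch below shows two of its generic ingredients: locally adaptive thresholding of an image patch and computing a candidate correspondence as the intersection of two fitted grid lines. It assumes OpenCV and NumPy; the block size, offset, and toy line parameters are illustrative assumptions and do not reproduce the thesis implementation.

```python
# Minimal sketch: adaptive patch binarization and grid-line intersection.
import cv2
import numpy as np

def binarize_patch(patch, block_size=31, offset=5):
    """Locally adaptive threshold: each pixel is compared with the mean of its
    neighbourhood, so uneven projector illumination is tolerated (patch must be
    a single-channel 8-bit image)."""
    return cv2.adaptiveThreshold(patch, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block_size, offset)

def line_intersection(p1, d1, p2, d2):
    """Intersect two lines, each given by a point and a direction vector
    (e.g. obtained from a line-fitting step such as cv2.fitLine).
    Solves p1 + t*d1 = p2 + s*d2; returns None for (near-)parallel lines."""
    A = np.column_stack((d1, -np.asarray(d2, float))).astype(float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * np.asarray(d1, float)

if __name__ == "__main__":
    # Toy example: a near-horizontal and a near-vertical grid line.
    print(line_intersection((0, 10), (1, 0.02), (12, 0), (0.01, 1)))
```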

    The Research of Disease Spots Extraction Based on Evolutionary Algorithm


    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which information must be extracted from images. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and the remarkable cross-fertilization among the proposed research areas.