
    Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

    In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD), i.e., the number of bit errors, between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the tradeoff between the false rejection rate (FRR) and the false acceptance rate (FAR). A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase: the noise is suppressed, reducing the number of bit errors and hence the HD. In practice, the number of samples is chosen empirically, without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples used in the enrollment and verification phases. The detection error tradeoff (DET) curve, which combines the false acceptance and false rejection rates, is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
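The Hamming-distance decision rule and the multi-sample noise suppression described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's framework; function names and the bitwise majority vote are illustrative assumptions.

```python
def hamming_distance(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two equal-length binary vectors."""
    if len(a) != len(b):
        raise ValueError("vectors must have equal length")
    return sum(x != y for x, y in zip(a, b))

def accept(enrolled: list[int], probe: list[int], threshold: int) -> bool:
    """Accept as genuine iff the HD is below the decision threshold.

    With an ECC of error-correcting capacity t, the effective threshold
    cannot exceed t, which constrains the FRR/FAR tradeoff."""
    return hamming_distance(enrolled, probe) < threshold

def majority_bits(samples: list[list[int]]) -> list[int]:
    """Bitwise majority over several samples of the same subject; one way
    multiple samples can suppress noise and reduce bit errors."""
    n = len(samples)
    return [1 if 2 * sum(col) > n else 0 for col in zip(*samples)]
```

For example, combining three noisy enrollment samples by majority vote yields a single reference vector whose expected HD to a genuine probe is smaller than that of any individual sample.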

    Application of transport techniques to the analysis of NERVA shadow shields

    A radiation shield internal to the NERVA nuclear rocket reactor, required to limit the neutron and photon radiation levels at critical components located external to the reactor, was evaluated. Two significantly different shield mockups were analyzed: BATH, a composite mixture of boron carbide, aluminum, and titanium hydride, and a borated-steel/liquid-hydrogen system. Based on the comparisons between experimental and calculated neutron and photon radiation levels, the following conclusions were noted: (1) The ability of a two-dimensional discrete-ordinates code to predict the radiation levels internal to and at the surface of the shield mockups was clearly demonstrated. (2) Internal to the BATH shield mockups, the one-dimensional technique predicted the axial variation of neutron fluxes and photon dose rates; however, the magnitude of the neutron fluxes was about a factor of 1.8 lower than in the two-dimensional analysis, and the photon dose rate was a factor of 1.3 lower.

    Fingerprint Verification Using Spectral Minutiae Representations

    Most fingerprint recognition systems are based on a minutiae set: an unordered collection of minutiae locations and orientations that suffers from deformations such as translation, rotation, and scaling. The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector that is invariant to translation and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector. This paper introduces the concepts and algorithms of two representation methods: the location-based spectral minutiae representation and the orientation-based spectral minutiae representation. Both are evaluated using two correlation-based spectral minutiae matching algorithms. We present the performance of our algorithms on three fingerprint databases and show how it can be improved by using a fusion scheme and singular points.
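The translation invariance of a spectral representation can be sketched with a toy version of the idea: evaluate the Fourier magnitude of the minutiae locations on a polar frequency grid. Taking the magnitude discards phase, which removes translation; on a (log-)polar grid, rotation and scaling of the input become shifts of the spectrum. The function below is an illustrative assumption, not the paper's exact algorithm (which also handles orientations and Gaussian smoothing).

```python
import numpy as np

def spectral_minutiae(minutiae, freqs, angles):
    """Evaluate |sum_j exp(-2*pi*i (fx*x_j + fy*y_j))| on a polar
    frequency grid, flattened into a fixed-length feature vector."""
    pts = np.asarray(minutiae, dtype=float)      # (n, 2) minutiae locations
    out = np.empty((len(freqs), len(angles)))
    for i, r in enumerate(freqs):
        for j, th in enumerate(angles):
            fx, fy = r * np.cos(th), r * np.sin(th)
            phases = -2j * np.pi * (pts[:, 0] * fx + pts[:, 1] * fy)
            # magnitude of the complex sum: translation-invariant
            out[i, j] = abs(np.exp(phases).sum())
    return out.ravel()
```

Translating every minutia by the same offset multiplies each complex sum by a unit-modulus factor, so the magnitude spectrum, and hence the feature vector, is unchanged.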

    Skew-Unfolding the Skorokhod Reflection of a Continuous Semimartingale

    The Skorokhod reflection of a continuous semimartingale is unfolded, in a possibly skewed manner, into another continuous semimartingale on an enlarged probability space according to the excursion-theoretic methodology of Prokaj (2009). This is done in terms of a skew version of the Tanaka equation, whose properties are studied in some detail. The result is used to construct a system of two diffusive particles with rank-based characteristics and skew-elastic collisions. Unfoldings of conventional reflections are also discussed, as are examples involving skew Brownian motions and skew Bessel processes. Comment: 20 pages; typos corrected, added a remark after Proposition 2.3, simplified the last part of Example 2.
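For context, the classical Tanaka equation and a skew variant of the kind alluded to above can be written as follows. The notation here is illustrative and not necessarily the paper's; in particular the form of the skew term is a standard guess modeled on skew Brownian motion.

```latex
% Classical Tanaka equation (admits weak but no strong solutions;
% it pins down |X| while leaving the sign of excursions free):
X(t) \;=\; X(0) + \int_0^t \operatorname{sgn}\!\bigl(X(s)\bigr)\,\mathrm{d}B(s).

% Illustrative skew variant: a local-time term L^X at the origin biases
% the sign of excursions, with skewness parameter \alpha \in (0,1):
X(t) \;=\; X(0) + \int_0^t \operatorname{sgn}\!\bigl(X(s)\bigr)\,\mathrm{d}U(s)
      \;+\; (2\alpha - 1)\,L^X(t).
```

With $\alpha = 1/2$ the local-time term vanishes and the second equation reduces to the first with driver $U$.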

    Solving procedure for a twenty-five diagonal coefficient matrix: direct numerical solutions of the three dimensional linear Fokker-Planck equation

    We describe an implicit procedure for solving linear equation systems resulting from the discretization of the three-dimensional (seven-variable) linear Fokker-Planck equation. The discretization is performed using a twenty-five-point molecule that leads to a coefficient matrix with an equal number of diagonals. The method is an extension of Stone's implicit procedure, accommodates a vast class of collision terms, and can be applied to stationary or nonstationary problems with different discretizations in time. Test calculations and comparisons with other methods are presented for two stationary examples, including an astrophysical application to the Miyamoto-Nagai disk potential of a typical galaxy. Comment: 20 pages, RevTeX, no proofreading; accepted in the Journal of Computational Physics.
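The key storage idea, keeping only the nonzero diagonals of the coefficient matrix rather than the full matrix, can be sketched as below. For brevity the solver shown is plain Jacobi iteration, a much simpler (and much slower-converging) stand-in for the Stone-type implicit procedure the paper extends; the diagonal-dictionary layout is an illustrative assumption.

```python
import numpy as np

def diagonal_matvec(diags, x):
    """y = A @ x for an n-by-n matrix A stored as {offset: values}.

    Offset k holds the diagonal A[i, i+k]; storing only the nonzero
    diagonals (e.g. twenty-five of them) avoids forming A explicitly."""
    n = len(x)
    y = np.zeros(n)
    for k, d in diags.items():
        if k >= 0:
            y[: n - k] += d * x[k:]
        else:
            y[-k:] += d * x[: n + k]
    return y

def jacobi_solve(diags, b, iters=500):
    """Solve A x = b by Jacobi iteration; converges for diagonally
    dominant systems (implicit procedures like Stone's converge faster)."""
    main = diags[0]                       # the k = 0 (main) diagonal
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x + (b - diagonal_matvec(diags, x)) / main
    return x
```

A tridiagonal system is the three-diagonal special case of the same layout; a 3D twenty-five-point molecule simply contributes more offsets to the dictionary.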

    Fully Dynamic Algorithm for Top-k Densest Subgraphs

    Given a large graph, the densest-subgraph problem asks to find a subgraph with maximum average degree. When considering the top-k version of this problem, a naïve solution is to iteratively find the densest subgraph and remove it in each iteration. However, such a solution is impractical due to its high processing cost. The problem is further complicated when dealing with dynamic graphs, since adding or removing an edge requires re-running the algorithm. In this paper, we study the top-k densest-subgraph problem in the sliding-window model and propose an efficient fully dynamic algorithm. The input of our algorithm consists of an edge stream, and the goal is to find the node-disjoint subgraphs that maximize the sum of their densities. In contrast to existing state-of-the-art solutions, which require iterating over the entire graph upon any update, our algorithm profits from the observation that updates only affect a limited region of the graph. Therefore, the top-k densest subgraphs are maintained by applying only local updates. We provide a theoretical analysis of the proposed algorithm and show empirically that it often generates denser subgraphs than state-of-the-art competitors. Experiments show an improvement in efficiency of up to five orders of magnitude compared to state-of-the-art solutions. Comment: 10 pages, 8 figures; accepted at CIKM 201
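The naïve baseline the abstract argues against can be made concrete with Charikar's greedy peeling (a classical 1/2-approximation for densest subgraph, using density m/|V|): extract the densest subgraph, remove it, repeat k times. This sketch is the baseline, not the paper's fully dynamic algorithm; the adjacency-dictionary interface is an illustrative assumption.

```python
def peel_densest(adj):
    """Greedy peeling: repeatedly delete a minimum-degree node and return
    the intermediate node set of maximum density m/|V|."""
    adj = {u: set(vs) for u, vs in adj.items()}     # local mutable copy
    m = sum(len(vs) for vs in adj.values()) // 2    # current edge count
    best_density, best_nodes = -1.0, set()
    while adj:
        density = m / len(adj)
        if density > best_density:
            best_density, best_nodes = density, set(adj)
        u = min(adj, key=lambda v: len(adj[v]))     # minimum-degree node
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
    return best_nodes, best_density

def topk_densest(adj, k):
    """Naive top-k: extract the densest subgraph, delete its nodes, repeat.
    Every update to a dynamic graph would force a full re-run of this."""
    adj = {u: set(vs) for u, vs in adj.items()}
    result = []
    for _ in range(k):
        if not adj:
            break
        nodes, density = peel_densest(adj)
        result.append((nodes, density))
        for u in nodes:                             # remove extracted nodes
            for v in adj[u]:
                adj[v].discard(u)
            del adj[u]
    return result
```

Each extraction costs time proportional to the remaining graph, which is exactly the per-update cost the paper's local-update approach avoids.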

    Data Mining Using Relational Database Management Systems

    Software packages providing a whole set of data mining and machine learning algorithms are attractive because they allow experimentation with many kinds of algorithms in an easy setup. However, these packages are often based on main-memory data structures, limiting the amount of data they can handle. In this paper we use a relational database as secondary storage in order to eliminate this limitation. Unlike existing approaches, which often focus on optimizing a single algorithm to work with a database backend, we propose a general approach that provides a database interface for several algorithms at once. We have taken a popular machine learning software package, Weka, and added a relational storage manager as a back end to the system. The extension is transparent to the algorithms implemented in Weka, since it is hidden behind Weka’s standard main-memory data structure interface. Furthermore, some general mining tasks are transferred into the database system to speed up execution. We tested the extended system, referred to as WekaDB, and our results show that it achieves much higher scalability than Weka, while providing the same output and maintaining good computation time.
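The two ideas above, streaming instances from relational secondary storage instead of main memory and pushing aggregate mining tasks into the database, can be sketched with SQLite. This is an illustrative sketch in the spirit of WekaDB, not its actual (Java/Weka) schema or API; table and column names are assumptions.

```python
import sqlite3

def open_store(path=":memory:"):
    """Create a tiny instance table; disk, not RAM, bounds the dataset."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS instances "
                "(id INTEGER PRIMARY KEY, f1 REAL, f2 REAL, label TEXT)")
    return con

def add_instance(con, f1, f2, label):
    con.execute("INSERT INTO instances (f1, f2, label) VALUES (?, ?, ?)",
                (f1, f2, label))

def iter_instances(con, batch=1000):
    """Yield instances one at a time behind an iterator interface, so an
    algorithm written against in-memory data never sees the backend."""
    cur = con.execute("SELECT f1, f2, label FROM instances ORDER BY id")
    while True:
        rows = cur.fetchmany(batch)     # only a small window is in memory
        if not rows:
            break
        yield from rows

def class_counts(con):
    """A mining task pushed into the database: class frequencies computed
    by SQL aggregation rather than in the client."""
    return dict(con.execute(
        "SELECT label, COUNT(*) FROM instances GROUP BY label"))
```

Because consumers only see the iterator, swapping the main-memory store for the database one is transparent to them, which mirrors the transparency claim in the abstract.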