
    Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

    In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD), or number of bit errors, between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the tradeoff between the false rejection rate (FRR) and the false acceptance rate. A method to improve the FRR is to use multiple biometric samples in either the enrollment or verification phase: the noise is suppressed, reducing the number of bit errors and hence the HD. In practice, the number of samples is chosen empirically, without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples used in the enrollment and verification phases. The detection error tradeoff (DET) curve, which combines the false acceptance and false rejection rates, is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
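    A minimal sketch of the decision rule and of the flavour of such a Gaussian estimate (not the paper's analytic expressions): it assumes independent bit errors with a hypothetical per-bit genuine error probability and approximates the binomial bit-error count by a Gaussian to estimate the FRR at a given threshold.

    # Sketch: Hamming-distance decision rule and a Gaussian FRR estimate.
    # Assumes i.i.d. bit errors with a hypothetical per-bit error probability;
    # illustration only, not the paper's analytic expressions.
    import math

    def hamming_distance(a, b):
        """Number of differing bits between two equal-length binary vectors."""
        return sum(x != y for x, y in zip(a, b))

    def accept(enrol_bits, verif_bits, threshold):
        """Accept as genuine iff the Hamming distance is within the ECC threshold."""
        return hamming_distance(enrol_bits, verif_bits) <= threshold

    def frr_gaussian(n_bits, p_err, threshold):
        """P(HD > threshold) for a genuine comparison, approximating the
        Binomial(n_bits, p_err) error count by a Gaussian with matching moments."""
        mean = n_bits * p_err
        std = math.sqrt(n_bits * p_err * (1.0 - p_err))
        z = (threshold - mean) / std
        return 0.5 * math.erfc(z / math.sqrt(2.0))  # upper-tail probability Q(z)

    # Example: a 255-bit template, an assumed 10% genuine bit-error rate,
    # and an ECC that corrects up to 40 bit errors.
    print(frr_gaussian(255, 0.10, 40))

    Using multiple enrollment or verification samples lowers the effective per-bit error probability, which is where the number of samples would enter such an estimate.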

    Modelling the pacemaker in Event-B: towards methodology for reuse

    The cardiac pacemaker is one of the system modelling problems posed to the Formal Methods community by the Grand Challenge for Dependable Systems Evolution [JOW:06]. The pacemaker is an intricate safety-critical system that supports and moderates the dysfunctional heart's intrinsic electrical control system. This paper focusses on (i) the problem (requirements) domain specification and its mapping to solution (implementation) domain models, (ii) the significant commonality of behaviour between its many operating modes, emphasising the potential for reuse, and (iii) the development and verification of models. We introduce the problem and model three of the operating modes in the problem domain using a state machine notation. We then map each of these models into a solution domain state machine notation, designed as shorthand for a refinement-based solution domain development in the Event-B formal language and its RODIN toolkit.
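    As a rough, hypothetical illustration of describing a single operating mode as a state machine (the mode, names, and timing below are simplifications, not the paper's problem- or solution-domain models, which are developed in Event-B and RODIN):

    # Hypothetical, highly simplified state machine for a single-chamber
    # inhibited pacing mode: pace unless an intrinsic beat is sensed within
    # the pacing interval. Illustrates the state-machine view only.
    WAITING, SENSED, PACED = "WAITING", "SENSED", "PACED"

    class InhibitedMode:
        def __init__(self, pacing_interval_ms):
            self.pacing_interval_ms = pacing_interval_ms
            self.state = WAITING
            self.elapsed_ms = 0

        def tick(self, ms, intrinsic_beat):
            """Advance time by ms; an intrinsic beat inhibits (resets) pacing."""
            if intrinsic_beat:
                self.state, self.elapsed_ms = SENSED, 0
            else:
                self.elapsed_ms += ms
                if self.elapsed_ms >= self.pacing_interval_ms:
                    self.state, self.elapsed_ms = PACED, 0  # deliver a pacing pulse
                else:
                    self.state = WAITING
            return self.state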

    Verifying the Mondex Case Study - The KeY Approach

    The Mondex case study is still the most substantial contribution to the Grand Challenge repository. It has been the target of a number of formal verification efforts, which concentrated on correctness proofs for refinement steps of the specification in various specification formalisms using different verification tools. Here, the results of a full functional verification of a Java Card implementation of the case study are reported. The functional behavior of the application, as well as the security properties to be proven, was formalized in JML and verified using the KeY tool, a tool for the deductive verification of Java Card code. The implementation developed followed, as closely as possible, the concrete layer of the case study's original Z specification. The result demonstrates that, with an appropriate specification language and verification tool, it is possible to bridge the gap between specification and implementation, ensuring a fully verified result. The complete material (source code, proofs, and binaries of the verification system) is available at http://www.key-project.org/case_studies/mondex.htm

    An integrated formal methods tool-chain and its application to verifying a file system model

    Tool interoperability as a means to achieve integration is among the main goals of the international Grand Challenge initiative. In the context of the Verifiable file system mini-challenge put forward by Rajeev Joshi and Gerard Holzmann, this paper focuses on the integration of different formal methods and tools in modelling and verifying an abstract file system inspired by the Intel(R) Flash File System Core. We combine high-level manual specification and proofs with current state-of-the-art mechanical verification tools into a tool-chain which involves Alloy, VDM++ and HOL. The use of (pointfree) relation modelling provides the glue which binds these tools together. Mondrian Project funded by the Portuguese NSF under contract PTDC/EIA-CCO/108302/200
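    A hedged sketch of the relational view mentioned above, modelling the abstract file store as a finite map from paths to contents, with hypothetical operation names (the paper's actual models are written in Alloy, VDM++ and HOL, not Python):

    # Hypothetical sketch: an abstract file-system model as a finite map
    # (relation) from paths to contents; operation names are illustrative.
    from typing import Dict

    class AbstractFileStore:
        def __init__(self):
            self.store: Dict[str, bytes] = {}  # the path -> contents relation

        def write(self, path: str, data: bytes) -> None:
            self.store[path] = data            # relational override

        def read(self, path: str) -> bytes:
            return self.store[path]            # partial: defined only on the domain

        def delete(self, path: str) -> None:
            self.store.pop(path, None)         # remove path from the domain

        def paths(self):
            return set(self.store)             # the domain of the relation

    # A round-trip property one might state and verify: write then read returns the data.
    fs = AbstractFileStore()
    fs.write("/a.txt", b"hello")
    assert fs.read("/a.txt") == b"hello"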

    On the Verified-by-Construction Approach


    Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model

    To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by implementation). Here, we propose a simple graph transformation system (GTS)-based run-time semantics for SCOOP that captures the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) for specifying semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244
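    As a generic, hypothetical illustration of one behavioural property mentioned above, the sketch below detects a deadlock as a cycle in a wait-for graph between handlers; it is not the GTS rules or the GROOVE state-space exploration used in the paper.

    # Hypothetical illustration: a deadlock as a cycle in a wait-for graph.
    def has_deadlock(wait_for):
        """wait_for maps each handler to the set of handlers it waits on.
        Returns True iff the wait-for graph contains a cycle."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {h: WHITE for h in wait_for}

        def visit(h):
            colour[h] = GREY
            for nxt in wait_for.get(h, ()):
                if colour.get(nxt, WHITE) == GREY:
                    return True              # back edge: a cycle, i.e. a deadlock
                if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                    return True
            colour[h] = BLACK
            return False

        return any(colour[h] == WHITE and visit(h) for h in wait_for)

    # Example: handler h1 waits on h2 and h2 waits back on h1 -> deadlock.
    print(has_deadlock({"h1": {"h2"}, "h2": {"h1"}}))  # True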