
    Anonymous subject identification and privacy information management in video surveillance

    The widespread deployment of surveillance cameras has raised serious privacy concerns, and many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. Of equal importance are the privacy and efficiency of the techniques that, first, identify those individuals for privacy protection and, second, provide access to the original surveillance video for security analysis. In this paper, we propose an anonymous subject identification and privacy data management system to be used in privacy-aware video surveillance. The anonymous subject identification system uses iris patterns to identify individuals for privacy protection. Anonymity of the iris-matching process is guaranteed through the use of a garbled-circuit (GC)-based iris matching protocol, and a novel GC complexity reduction scheme is proposed that simplifies the iris masking process in the protocol. A user-centric privacy information management system is also proposed that allows subjects to anonymously access their privacy information via their iris patterns. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the subject identification phase, and the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of our framework.
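    A minimal plaintext sketch of the masked iris-code comparison that a garbled-circuit protocol such as the one above would evaluate obliviously, so that neither party learns the other's iris code or mask. The code length, threshold, and helper names are illustrative assumptions, not details from the paper.

```python
# Plaintext reference for masked iris matching; in the paper's setting this
# comparison runs inside a garbled circuit so both inputs stay private. The
# GC complexity reduction targets the masking step of this computation.
import numpy as np

def masked_hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance over bits valid in both masks."""
    valid = mask_a & mask_b                    # bits usable in both readings
    disagreements = (code_a ^ code_b) & valid  # masked XOR of the iris codes
    n_valid = valid.sum()
    return 1.0 if n_valid == 0 else disagreements.sum() / n_valid

rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)   # 2048-bit iris code
mask_a = rng.integers(0, 2, 2048, dtype=np.uint8)
code_b = code_a.copy()
code_b[:100] ^= 1                                   # simulate sensor noise
mask_b = rng.integers(0, 2, 2048, dtype=np.uint8)

THRESHOLD = 0.32  # a typical iris-matching cutoff (illustrative)
print(masked_hamming_distance(code_a, mask_a, code_b, mask_b) < THRESHOLD)
```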

    A Formal Study of the Privacy Concerns in Biometric-Based Remote Authentication Schemes

    With their increasing popularity in cryptosystems, biometrics have attracted more and more attention from the information security community. However, how to handle the relevant privacy concerns remains troublesome. In this paper, we propose a novel security model to formalize the privacy concerns in biometric-based remote authentication schemes. Our security model covers a number of practical privacy concerns, such as identity privacy and transaction anonymity, which have not been formally considered in the literature. In addition, we propose a general biometric-based remote authentication scheme and prove its security in our security model.

    THRIVE: Threshold Homomorphic encryption based secure and privacy preserving bIometric VErification system

    In this paper, we propose a new biometric verification and template protection system which we call the THRIVE system. The system includes novel enrollment and authentication protocols based on a threshold homomorphic cryptosystem in which the private key is shared between the user and the verifier. In the THRIVE system, only encrypted binary biometric templates are stored in the database, and verification is performed via homomorphically randomized templates; thus, original templates are never revealed during the authentication stage. The THRIVE system is designed for the malicious model, where a cheating party may arbitrarily deviate from the protocol specification. Since a threshold homomorphic encryption scheme is used, a malicious database owner cannot decrypt the encrypted templates of the users in the database. The security of the THRIVE system is therefore enhanced by a two-factor authentication scheme involving the user's private key and the biometric data. We prove the security and privacy-preservation capability of the proposed system in the simulation-based model with no assumptions. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form but needs to prove her physical presence by using biometrics. The system can be used with any biometric modality and biometric feature extraction scheme whose output templates can be binarized. The overall connection time for the proposed THRIVE system is estimated at 336 ms on average for 256-bit biohash vectors on a desktop PC with quad-core 3.2 GHz CPUs and a 10 Mbit/s up/down link. Consequently, the proposed system can be used efficiently in real-life applications.
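    To make the encrypted-domain matching concrete, here is a minimal single-key sketch of Hamming-distance computation over Paillier ciphertexts using the `phe` library. THRIVE itself uses a threshold scheme (key shared between user and verifier) plus homomorphic randomization of templates; this simplified sketch only illustrates how a distance can be computed on encrypted binary templates, and the template values are invented.

```python
# Hamming distance over encrypted binary templates, Paillier-style.
# Single-key simplification of THRIVE's threshold setting, for illustration.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

template = [1, 0, 1, 1, 0, 0, 1, 0]                       # enrolled template
enc_template = [public_key.encrypt(b) for b in template]  # stored encrypted

probe = [1, 0, 0, 1, 0, 1, 1, 0]                          # fresh reading

# Bit-wise XOR via arithmetic: a XOR b = a + (1 - 2a) * b, which needs only
# ciphertext-scalar operations when b is encrypted and a is in plaintext.
enc_distance = sum(a + (1 - 2 * a) * enc_b
                   for a, enc_b in zip(probe, enc_template))

print(private_key.decrypt(enc_distance))                  # -> 2
```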

    Privacy-Preserving Facial Recognition Using Biometric-Capsules

    In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method which addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable and interoperable in its secure feature fusion design. In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication and identification systems. We compare the performance of unsecured, underlying biometric systems to that of the BC-embedded systems in order to directly demonstrate the minimal effect of the privacy-preserving BC scheme on underlying system performance. Notably, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems, which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems achieve accuracies of 95.13% and 99.13%, respectively. Furthermore, we also demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
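    As context for the reported accuracies, below is a minimal sketch of the standard threshold-based verification protocol used for LFW-style evaluation: two embeddings are declared a match when their cosine similarity clears a threshold tuned on calibration pairs. In a BC-embedded pipeline, each raw embedding would presumably first be transformed into its Biometric-Capsule before comparison; the data and helper names here are illustrative, not from the paper.

```python
# Threshold-based face verification over embeddings (LFW-style protocol).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_threshold(similarities, labels):
    """Pick the accuracy-maximizing decision threshold on calibration pairs."""
    best_t, best_acc = 0.0, 0.0
    for t in np.linspace(-1, 1, 201):
        acc = np.mean((similarities >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy stand-ins for pair similarities and same/different-person labels.
rng = np.random.default_rng(1)
sims = np.concatenate([rng.normal(0.7, 0.1, 500),    # genuine pairs
                       rng.normal(0.2, 0.1, 500)])   # impostor pairs
labels = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])

t, acc = best_threshold(sims, labels)
print(f"threshold={t:.2f}, accuracy={acc:.3f}")
```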

    Conflict Detection for Edits on Extended Feature Models using Symbolic Graph Transformation

    Feature models are used to specify the variability of user-configurable systems as appearing, e.g., in software product lines. Software product lines are supposed to be long-living and, therefore, have to continuously evolve over time to meet ever-changing requirements. Evolution imposes changes to feature models in terms of edit operations. Ensuring the consistency of concurrent edits requires appropriate conflict detection techniques. However, recent approaches fail to handle crucial subtleties of extended feature models, namely constraints mixing feature-tree patterns with first-order logic formulas over non-Boolean feature attributes with potentially infinite value domains. In this paper, we propose a novel conflict detection approach based on symbolic graph transformation to facilitate concurrent edits on extended feature models. We describe extended feature models formally with symbolic graphs, and edit operations with symbolic graph transformation rules combining graph patterns with first-order logic formulas. The approach is implemented by combining eMoflon with an SMT solver, and evaluated with respect to applicability.
    Comment: In Proceedings FMSPLE 2016, arXiv:1603.0857
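    As a hedged sketch of the SMT side of such an approach, the snippet below uses the Z3 solver to detect that two concurrent edits impose jointly unsatisfiable constraints over a non-Boolean feature attribute with an infinite domain. The actual approach couples solver checks like this with symbolic graph transformation in eMoflon; the feature and attribute names are invented for illustration.

```python
# Conflict detection between concurrent feature-model edits via SMT.
from z3 import And, Bool, Implies, Int, Solver, unsat

memory = Int("server.memory")   # non-Boolean attribute, infinite value domain
cache = Bool("Cache")           # Boolean feature

# Edit 1: enabling the Cache feature requires at least 4096 MB of memory.
edit1 = Implies(cache, memory >= 4096)
# Edit 2: selects Cache but caps memory at 2048 MB.
edit2 = And(cache, memory <= 2048)

solver = Solver()
solver.add(edit1, edit2)
if solver.check() == unsat:
    print("conflict: the concurrent edits are jointly unsatisfiable")
else:
    print("no conflict:", solver.model())
```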

    FAIRness and Usability for Open-access Omics Data Systems

    Omics data sharing is crucial to the biological research community, and the last decade or two has seen a huge rise in collaborative analysis systems, databases, and knowledge bases for omics and other systems biology data. We assessed the FAIRness of NASA's GeneLab Data Systems (GLDS) along with four similar systems in the research omics data domain, using 14 FAIRness metrics. The overall FAIRness scores ranged from 6 to 12 (out of 14), with an average of 10.1 and a standard deviation of 2.4. Pass ratings for the metrics ranged from 29-79%, Partial Pass from 0-21%, and Fail from 7-50%. The systems we evaluated performed best in the areas of data findability and accessibility, and worst in the area of data interoperability. Reusability of metadata, in particular, was frequently not well supported. We relate our experiences implementing semantic integration of omics data from some of the assessed systems for federated querying and retrieval functions, given their shortcomings in data interoperability. Finally, we propose two new principles that Big Data system developers, in particular, should consider for maximizing data accessibility.

    Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

    Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack: inverting a biometric model allows the attacker to produce a realistic biometric input with which to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For the iris, the membership inference attack against classification networks improves from 52% to 62% accuracy.
    Comment: This is a major revision of a paper titled "Inverting Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models" by the same authors that appears at IJCB 202
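    The abstract does not spell out the alignment loss, so the following is a speculative sketch of one plausible form: a candidate reconstruction is optimized so that a surrogate model's output aligns (by cosine similarity) with the leaked target output vector, touching only output vectors and never the target's weights. All networks and parameters below are illustrative assumptions, not the authors' code.

```python
# Speculative alignment-loss sketch for black-box model inversion.
import torch
import torch.nn.functional as F

def alignment_loss(surrogate_out: torch.Tensor,
                   target_out: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between surrogate and target output vectors."""
    return 1.0 - F.cosine_similarity(surrogate_out, target_out, dim=-1).mean()

# Toy surrogate "embedding network"; the candidate image is optimized
# directly here (in practice a decoder network might produce it).
surrogate = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(32 * 32, 64))
target_output = torch.randn(1, 64)                 # leaked output vector
candidate = torch.zeros(1, 1, 32, 32, requires_grad=True)

opt = torch.optim.Adam([candidate], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = alignment_loss(surrogate(candidate), target_output)
    loss.backward()
    opt.step()
print(f"final alignment loss: {loss.item():.3f}")
```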

    FORGE: An eLearning Framework for Remote Laboratory Experimentation on FIRE Testbed Infrastructure

    The Forging Online Education through FIRE (FORGE) initiative provides educators and learners in higher education with access to world-class FIRE testbed infrastructure. FORGE supports experimentally driven research in an eLearning environment by complementing traditional classroom and online courses with interactive remote laboratory experiments. The project has achieved its objectives by defining and implementing a framework called FORGEBox. This framework offers the methodology, environment, tools and resources to support the creation of HTML-based online educational material capable of easily accessing virtualized and physical FIRE testbed infrastructure. FORGEBox also captures valuable quantitative and qualitative information using questionnaires and learning analytics, which can help optimise and support student learning. To date, FORGE has produced courses covering a wide range of networking and communication domains. These are freely available from FORGEBox.eu and have resulted in over 24,000 experiments undertaken by more than 1,800 students across 10 countries worldwide. This work has shown that the use of remote high-performance testbed facilities for hands-on remote experimentation can have a valuable impact on the learning experience for both educators and learners. Additionally, certain challenges in developing FIRE-based courseware have been identified, which has led to a set of recommendations in order to support the use of FIRE facilities for teaching and learning purposes.

    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee. This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches. The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
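    For reference, below is a minimal sketch of the classic code-offset secure-sketch construction that this dissertation builds on (and, in places, moves beyond): at enrollment, a random codeword masks the reading and the key is derived from it; at reproduction, error-correction decoding absorbs the noise. The toy 5x repetition code and parameters are purely illustrative.

```python
# Code-offset fuzzy extractor sketch with a toy repetition code.
import hashlib
import secrets

REP = 5   # each codeword bit repeated 5 times: corrects 2 flips per block
K = 16    # number of underlying codeword bits

def encode(bits):   # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def decode(bits):   # majority vote per block
    return [int(sum(bits[i * REP:(i + 1) * REP]) > REP // 2) for i in range(K)]

def generate(reading):
    """Enrollment: derive (key, public helper) from the initial reading."""
    codeword_bits = [secrets.randbelow(2) for _ in range(K)]
    sketch = [w ^ c for w, c in zip(reading, encode(codeword_bits))]
    key = hashlib.sha256(bytes(codeword_bits)).digest()
    return key, sketch

def reproduce(noisy_reading, sketch):
    """Authentication: recover the key from a close, noisy reading."""
    shifted = [w ^ s for w, s in zip(noisy_reading, sketch)]  # noisy codeword
    return hashlib.sha256(bytes(decode(shifted))).digest()

reading = [secrets.randbelow(2) for _ in range(K * REP)]
key, sketch = generate(reading)
noisy = reading.copy()
noisy[0] ^= 1; noisy[7] ^= 1                      # two bit flips of noise
assert reproduce(noisy, sketch) == key
print("key reproduced despite noise")
```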