
    Biometric Cryptosystems: Authentication, Encryption and Signature for Biometric Identities

    Biometrics have been used for secure identification and authentication for more than two decades, since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be grouped under the topic of "Biometric Cryptosystems". Although the security of some of these systems is questionable when they are used alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focus on in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric-based encryption, signature and remote authentication.

    We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called "Weak-identity Privacy" and propose a new construction combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems, and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs, and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain, but without requiring a decryption key for the server's authentication decision. Thus, leakage of the secret key of any system component does not affect the security of the scheme, as opposed to current biometric systems involving cryptographic techniques. We also show that there is a tradeoff between the security level the scheme achieves and the requirement of making the authentication decision without using any secret key.

    In the second part of the thesis, we delve into biometric-based signature and encryption schemes. We start by designing a new biometric IBS system based on the most efficient pairing-based signature scheme currently in the literature. We prove the security of our new scheme in the framework of a stronger model than existing adversarial models for fuzzy IBS, which simulates the leakage of partial secret key components of the challenge identity. Building on the novel features of this scheme, we describe a new biometric IBE system called BIO-IBE. BIO-IBE differs from current fuzzy systems in its key generation method, which not only allows a larger set of encryption systems to function for biometric identities, but also provides better accuracy in identifying the users of the system. In this context, BIO-IBE is the first scheme that allows the use of multi-modal biometrics to avoid collision attacks. Finally, BIO-IBE outperforms the current schemes and, for a small universe of attributes, is secure in the standard model with better efficiency than its counterpart. Another contribution of this thesis is the design of biometric IBE systems without using pairings. In fact, current fuzzy IBE schemes are secure under (stronger) bilinear assumptions, and the decryption of each message requires a number of pairing computations almost equal to the number of attributes defining the user. Thus, fuzzy IBE makes error-tolerant encryption possible at the expense of efficiency and security. Hence, we design a completely new construction for biometric IBE based on error-correcting codes, generic conversion schemes and weakly secure anonymous IBE schemes that encrypt a message bit by bit. The resulting scheme is anonymous, highly secure and more efficient than pairing-based biometric IBE, especially in the decryption phase. The security of our generic construction reduces to the security of the anonymous IBE scheme, which is based on the Quadratic Residuosity assumption. The binding of biometric features to the user's identity is achieved similarly to BIO-IBE, thus preserving the advantages of its key generation procedure.
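    As a rough illustration of the error tolerance that secure sketches and the error-correcting-code construction above rely on, the following Python sketch shows the classic code-offset idea with a toy repetition code. All names are ours and the toy code is an assumption for illustration; the thesis' constructions use real error-correcting codes and anonymous IBE.

```python
import secrets

R = 3  # repetition factor: each key bit is repeated R times,
       # so one flipped bit per group can be corrected

def ecc_encode(bits):
    return [b for b in bits for _ in range(R)]

def ecc_decode(code):
    # Majority vote inside each group of R repeated bits.
    return [1 if 2 * sum(code[i:i + R]) > R else 0
            for i in range(0, len(code), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def make_sketch(reading, key_bits):
    """Bind key_bits to a biometric reading: helper = reading XOR ECC(key).
    The helper value can be stored publicly."""
    return xor(reading, ecc_encode(key_bits))

def recover(noisy_reading, helper):
    """Recover key_bits from a fresh, slightly noisy reading."""
    return ecc_decode(xor(noisy_reading, helper))

key = [secrets.randbelow(2) for _ in range(4)]
reading = [secrets.randbelow(2) for _ in range(4 * R)]
helper = make_sketch(reading, key)

noisy = list(reading)
noisy[5] ^= 1  # a single bit of sensor noise
assert recover(noisy, helper) == key
```

    The same key is recovered as long as the noise stays within the code's correction capacity, which is exactly the error-tolerance property the bit-by-bit IBE construction builds on.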

    The effects of user-AI co-creation on UI design tasks

    With the boost in GPU computing power and the development of neural networks over the recent decade, many AI techniques have been invented and show great potential for improving human tasks. The generative adversarial network (GAN), one such recent technique, is a powerful tool for image generation tasks. Moreover, many researchers are exploring the potential of user-AI collaboration, and seeking to understand it, by developing prototypes built on neural networks such as GANs. Unlike previous work that focused on simple sketching tasks, this work studied the user experience in a UI design task to understand how AI can improve or harm the user experience in practical, complex design tasks. The findings are as follows: the multiple-hint AI turned out to be more user-friendly, and it is important to study and understand how the AI's presentation should be designed for user-AI collaboration. Based on these findings and previous work, this research discusses which factors should be taken into consideration when designing user-AI collaboration tools.

    Foundation Models and Fair Use

    Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: if the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea, and more work is needed to develop policies that address the potential harms of foundation models.

    Trusted and Privacy-preserving Embedded Systems: Advances in Design, Analysis and Application of Lightweight Privacy-preserving Authentication and Physical Security Primitives

    Radio Frequency Identification (RFID) enables RFID readers to perform fully automatic wireless identification of objects labeled with RFID tags and is widely deployed in many applications, such as access control, electronic tickets and payment, as well as electronic passports. This prevalence of RFID technology introduces various risks, in particular concerning the privacy of its users and holders. Besides the privacy risk, classical threats to authentication and identification systems must be considered to prevent an adversary from impersonating or copying (cloning) a tag. This thesis summarizes the state of the art in secure and privacy-preserving authentication for RFID tags, with a particular focus on solutions based on Physically Unclonable Functions (PUFs), and presents advancements in the design, analysis and evaluation of secure and privacy-preserving authentication protocols for RFID systems and PUFs.

    Formalizing the security and privacy requirements of RFID systems is essential for the design of provably secure and privacy-preserving RFID protocols. However, existing RFID security and privacy models in the literature are often incomparable and in part do not reflect the capabilities of real-world adversaries. We investigate subtle issues, such as tag corruption aspects, that lead to the impossibility of achieving both mutual authentication and any reasonable notion of privacy in one of the most comprehensive security and privacy models, which is the basis of many subsequent works. Our results led to the refinement of this privacy model and were considered in subsequent works on privacy-preserving RFID systems.

    A promising approach to enhancing privacy in RFID systems without raising the computational requirements on the tags are anonymizers: special devices that take over the computational workload from the tags. While existing anonymizer-based protocols are subject to impersonation and denial-of-service attacks, existing RFID security and privacy models do not include anonymizers. We present the first security and privacy framework for anonymizer-enabled RFID systems and two privacy-preserving RFID authentication schemes using anonymizers. Both schemes achieve several appealing features that were not simultaneously achieved by any previous proposal. The first protocol is very efficient for all involved entities and achieves privacy under tag corruption; it is secure against impersonation attacks and forgeries even if the adversary can corrupt the anonymizers. The second scheme provides, for the first time, anonymity and untraceability of tags against readers, as well as secure tag authentication against collusions of malicious readers and anonymizers, using tags that cannot perform public-key cryptography (i.e., modular exponentiations).

    The RFID tags commonly used in practice are cost-efficient tokens without expensive hardware protection mechanisms. Physically Unclonable Functions (PUFs) promise to provide an effective security mechanism for RFID tags to protect against basic hardware attacks. However, existing PUF-based RFID authentication schemes are not scalable, allow only for a limited number of authentications, and are subject to replay, denial-of-service and emulation attacks. We present two scalable PUF-based authentication schemes that overcome these problems. The first protocol supports tag and reader authentication, is resistant to emulation attacks and is highly scalable. The second protocol uses PUF-based key storage and addresses an open question on the feasibility of destructive privacy, i.e., the privacy of tags that are destroyed during tag corruption.

    The security of PUFs relies on assumptions about physical properties and is still under investigation. PUF evaluation results in the literature are difficult to compare due to varying test conditions and different analysis methods. We present the first large-scale security analysis of ASIC implementations of the five most popular electronic PUF types, including Arbiter, Ring Oscillator, SRAM, Flip-Flop and Latch PUFs. We present a new PUF evaluation methodology that allows a more precise assessment of the unpredictability properties than previous approaches, and we quantify the most important properties of PUFs for their use in cryptographic schemes. PUFs have been proposed for various applications, including anti-counterfeiting and authentication schemes. However, only rudimentary PUF security models exist, limiting the confidence in the security claims of PUF-based security mechanisms. We present a formal security framework for PUF-based primitives, which has been used in subsequent works to capture the properties of image-based PUFs and in the design of anti-counterfeiting mechanisms and physical hash functions.
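    To illustrate why naive PUF-based authentication allows only a limited number of authentications, here is a minimal Python sketch of challenge-response-pair (CRP) authentication with a simulated, noise-free PUF. The names and the HMAC-based PUF stand-in are illustrative assumptions; the thesis' scalable protocols work differently.

```python
import hashlib
import hmac
import secrets

def simulated_puf(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for a physical PUF: real PUFs are noisy circuits,
    modeled here (noise-free) as a keyed hash for illustration."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.crps = {}  # tag_id -> list of (challenge, expected response)

    def enroll(self, tag_id, puf, n=100):
        # Enrollment: query the tag's PUF in a trusted environment and
        # store n challenge-response pairs for later authentications.
        self.crps[tag_id] = [
            (c, puf(c)) for c in (secrets.token_bytes(16) for _ in range(n))
        ]

    def authenticate(self, tag_id, respond):
        if not self.crps.get(tag_id):
            return False  # CRPs exhausted: the scalability limitation
        challenge, expected = self.crps[tag_id].pop()  # each CRP used once
        return hmac.compare_digest(respond(challenge), expected)

device_secret = secrets.token_bytes(32)
tag = lambda c: simulated_puf(device_secret, c)

server = Server()
server.enroll("tag-42", tag, n=3)
assert server.authenticate("tag-42", tag)  # succeeds while CRPs remain
```

    Each pair must be discarded after use to prevent replay, so the server's storage bounds the number of authentications, which is the limitation the scalable protocols above are designed to remove.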

    Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

    http://ceur-ws.org/Vol-627/allproceedings.pdf
    MALLOW-2010 is the third edition of a series initiated in 2007 in Durham and pursued in 2009 in Turin. The objective, as initially stated, is to "provide a venue where: the cost of participation was minimum; participants were able to attend various workshops, so fostering collaboration and cross-fertilization; there was a friendly atmosphere and plenty of time for networking, by maximizing the time participants spent together".

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Almost Tight Multi-User Security under Adaptive Corruptions & Leakages in the Standard Model

    In this paper, we consider tight multi-user security under adaptive corruptions, where the adversary can adaptively corrupt some users and obtain their secret keys. We propose generic constructions for a number of primitives, and the instantiations from the matrix decision Diffie-Hellman (MDDH) assumptions yield the following schemes: (1) the first digital signature (SIG) scheme achieving almost tight strong EUF-CMA security in the multi-user setting with adaptive corruptions in the standard model; (2) the first public-key encryption (PKE) scheme achieving almost tight IND-CCA security in the multi-user multi-challenge setting with adaptive corruptions in the standard model; (3) the first signcryption (SC) scheme achieving almost tight privacy and authenticity under CCA attacks in the multi-user multi-challenge setting with adaptive corruptions in the standard model. As byproducts, our SIG and SC naturally yield the first strongly secure message authentication code (MAC) and the first authenticated encryption (AE) schemes achieving almost tight multi-user security under adaptive corruptions in the standard model. We further optimize the constructions of SC, MAC and AE to achieve better efficiency.

    Furthermore, we consider key leakages besides corruptions, as a natural strengthening of tight multi-user security under adaptive corruptions. This security notion considers a more natural and more complete all-or-part-or-nothing setting, where secret keys of users are either fully exposed to the adversary (all), completely hidden from the adversary (nothing), or partially leaked to the adversary (part), and it protects the uncorrupted users even with bounded key leakages. All our schemes additionally support bounded key leakages and enjoy full compactness. This yields the first SIG, PKE, SC, MAC and AE schemes achieving almost tight multi-user security under both adaptive corruptions and leakages.
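    As a rough sketch of the targeted security notion, the following Python outline expresses the multi-user strong EUF-CMA experiment with adaptive corruptions. The scheme interface (keygen/sign/verify) is a hypothetical placeholder rather than the paper's construction, and signatures are assumed hashable for bookkeeping.

```python
# Hypothetical interface: scheme.keygen() -> (pk, sk),
# scheme.sign(sk, msg) -> sig, scheme.verify(pk, msg, sig) -> bool.
def mu_euf_cma_corr_experiment(scheme, adversary, n_users):
    """Run the multi-user strong-forgery game with adaptive corruptions."""
    keys = [scheme.keygen() for _ in range(n_users)]
    corrupted = set()  # indices of users whose sk the adversary learned
    returned = set()   # (user, msg, sig) triples output by the sign oracle

    def sign_oracle(i, msg):
        # The adversary may request signatures under any user's key.
        sig = scheme.sign(keys[i][1], msg)
        returned.add((i, msg, sig))
        return sig

    def corrupt_oracle(i):
        # Adaptive corruption: the adversary learns sk_i at any time.
        corrupted.add(i)
        return keys[i][1]

    public_keys = [pk for pk, _ in keys]
    i, msg, sig = adversary(public_keys, sign_oracle, corrupt_oracle)

    # Strong unforgeability: the adversary wins with any valid triple
    # under an *uncorrupted* user's key that the oracle never returned
    # (re-signing a queried message with a fresh signature also counts).
    return (i not in corrupted
            and (i, msg, sig) not in returned
            and scheme.verify(keys[i][0], msg, sig))
```

    A tight reduction means the scheme's security loss does not grow with the number of users or oracle queries in this experiment, which is what "almost tight multi-user security" refers to above.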

    Computer Aided Verification

    This open access two-volume set, LNCS 10980 and 10981, constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    Stepping Beyond the Newtonian Paradigm in Biology. Towards an Integrable Model of Life: Accelerating Discovery in the Biological Foundations of Science

    The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA's purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for supporting this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that a careful extension of the science of living systems can be applied to some of our most vexing modern problems more effectively than the prevailing scheme, derived from abstractions in physics. While these abstractions have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is, "by rote algorithmic means", since it is not known whether a living system is computable in this sense (Mossio et al., 2009).]

    Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain; the initial issue is descriptive, leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam: much is inherited from physics (via the bridging abstractions of chemistry), and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses). Interspersed are abstractions, concepts and underlying assumptions "native" to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time most fundamental process-units in biology and to recruit them into the descriptive domain. The first challenge, therefore, is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as relativity subsumed and reinterpreted Newtonian mechanics in physics. This step is required because we need a comprehensible formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis, and on what the basic terms or tools needed could be.

    The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a "new science". This White Paper sets out the case for continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing, centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information; nothing about them is substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level.

    The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows. Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are introductory. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today; the goal is to have a formal system equivalent to that which exists in the physics community, and here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both "hard" and "soft" sciences; the expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address "wicked problems" that span physics, chemistry, biology, societies and societal dynamics, whose solutions require integrated, measurable results at multiple levels and pose "grand challenges" to existing methods. Finally, Section 10 makes an appeal for action, advocating the necessity of further long-term support for the INBIOSA program. The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.