1,954 research outputs found

    A formal definition and a new security mechanism of physical unclonable functions

    The characteristic novelty of what is generally meant by a "physical unclonable function" (PUF) is precisely defined, in order to supply a firm basis for security evaluations and for the proposal of new security mechanisms. A PUF is defined as a hardware device which implements a physical function whose output value changes with its argument. A PUF can be clonable, but a secure PUF must be unclonable. This proposed meaning of a PUF is cleanly delineated from the closely related concepts of "conventional unclonable function", "physically obfuscated key", "random-number generator", "controlled PUF" and "strong PUF". The structure of a systematic security evaluation of a PUF, enabled by the proposed formal definition, is outlined. Practically all current and novel physical (but not conventional) unclonable functions are PUFs by our definition. The proposed definition thereby captures the existing intuition about what a PUF is, while remaining flexible enough to encompass further research. In a second part we quantitatively characterize two classes of PUF security mechanisms: the standard one, based on a minimum secret read-out time, and a novel one, based on challenge-dependent erasure of stored information. The new mechanism is shown to allow, in principle, the construction of a "quantum-PUF" that is absolutely secure while not requiring the storage of an exponentially large secret. The construction of a PUF that is mathematically and physically unclonable in principle does not contradict the laws of physics.
    Comment: 13 pages, 1 figure; Conference Proceedings MMB & DFT 2012, Kaiserslautern, Germany.
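    The challenge-response pattern behind these read-out mechanisms can be made concrete with a short sketch. This is an illustration, not the paper's construction: the physical device is modelled by a keyed function, and the names used here (puf_eval, the CRP store) are assumptions of the sketch.

```python
# Minimal sketch of standard PUF challenge-response authentication (illustrative,
# not the paper's mechanism). The real PUF is physical hardware; `puf_eval` stands
# in for it, keyed by a value that models manufacturing variation.
import hmac, hashlib, os

def puf_eval(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the physical function: the output changes with the challenge."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrollment (trusted phase): record challenge-response pairs (CRPs) for the device.
device_secret = os.urandom(32)   # models the device's physical structure, never read out
crp_store = {}
for _ in range(16):
    c = os.urandom(16)
    crp_store[c] = puf_eval(device_secret, c)

# Verification (field phase): issue a stored challenge that has never been used before.
challenge, expected = crp_store.popitem()
response = puf_eval(device_secret, challenge)   # in reality: measured on the device
assert hmac.compare_digest(response, expected)
```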

    Memorable And Secure: How Do You Choose Your PIN?

    Managing all your PINs is difficult. Banks acknowledge this by allowing and facilitating PIN changes. However, choosing secure PINs is a difficult task for humans, as they are incapable of consciously generating randomness. This leads to certain PINs being chosen more frequently than others, which in turn increases the danger of someone else guessing correctly. We investigate different methods of supporting PIN changes and report on an evaluation of these methods in a study with 152 participants. Our contribution is twofold: first, we introduce an alternative to system-generated random PINs that considers people’s preferred memorisation strategy; second, we provide evidence that presenting guidance on how to avoid insecure PINs does indeed nudge people towards more secure PIN choices when they are in the process of changing their PINs.
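    As a hedged illustration of the idea of steering users away from insecure PINs (not the intervention evaluated in the study), the sketch below draws a random 4-digit PIN and rejects a few patterns known to be disproportionately popular; the blacklist and heuristics are assumptions of the sketch.

```python
# Illustrative sketch: reject well-known weak 4-digit PINs when proposing a new one.
# The blacklist and heuristics are examples, not the study's actual guidance.
import secrets

WEAK = {"1234", "4321", "0000", "1111", "2222", "1212", "2580", "6969"}

def looks_weak(pin: str) -> bool:
    ascending = all(int(b) - int(a) == 1 for a, b in zip(pin, pin[1:]))  # e.g. "3456"
    repeated = len(set(pin)) == 1                                        # e.g. "7777"
    year_like = pin.startswith(("19", "20"))                             # birth years
    return pin in WEAK or ascending or repeated or year_like

def random_pin(digits: int = 4) -> str:
    """Draw uniformly random PINs until one passes the weakness checks."""
    while True:
        pin = "".join(secrets.choice("0123456789") for _ in range(digits))
        if not looks_weak(pin):
            return pin

print(random_pin())
```

    Rejection sampling keeps the result uniform over the PINs that survive the checks, which is the usual way to trade a small loss of key space for resistance to popularity-based guessing.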

    Gaming security by obscurity

    Shannon sought security against the attacker with unlimited computational powers: *if an information source conveys some information, then Shannon's attacker will surely extract that information*. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography. Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should rely only upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there is an algorithm that can break the system, then the attacker will surely find that algorithm*. The attacker is no longer viewed as an omnipotent computer, but he is still construed as an omnipotent programmer. So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods and adapting to them?
    Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos corrected.
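    The closing question admits one deliberately modest, concrete reading: a defender that observes an attacker's probing and adapts its behaviour accordingly. The toy sketch below is my illustration under that reading, not a model from the paper; the per-source exponential-delay policy is an assumption.

```python
# Toy sketch: a defender that adapts to observed probing by slowing the source down.
# One narrow reading of "learning the attacker's methods", not the paper's model.
from collections import defaultdict

failed_probes = defaultdict(int)   # failed attempts observed per source

def record_attempt(source: str, succeeded: bool) -> None:
    """Update the defender's picture of each source's behaviour."""
    failed_probes[source] = 0 if succeeded else failed_probes[source] + 1

def response_delay(source: str) -> float:
    """Respond ever more slowly to sources that keep probing (capped at one hour)."""
    return min(2.0 ** failed_probes[source], 3600.0)
```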

    Privacy-Aware Processing of Biometric Templates by Means of Secure Two-Party Computation

    The use of biometric data for person identification and access control is gaining more and more popularity. Handling biometric data, however, requires particular care, since biometric data is indissolubly tied to the identity of its owner, raising important security and privacy issues. This chapter focuses on the latter, presenting an innovative approach that, by relying on tools borrowed from Secure Two-Party Computation (STPC) theory, makes it possible to process biometric data in encrypted form, thus eliminating any risk that private biometric information is leaked during an identification process. The basic concepts behind STPC are reviewed, together with the cryptographic primitives needed to achieve privacy-aware processing of biometric data in an STPC context. The two main approaches proposed so far, namely homomorphic encryption and garbled circuits, are discussed, and the way such techniques can be used to develop a full biometric matching protocol is described. Some general guidelines for the design of a privacy-aware biometric system are given, so as to allow the reader to choose the most appropriate tools for the application at hand.
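    A minimal sketch of the homomorphic-encryption route (the garbled-circuit route is not shown): the squared Euclidean distance between a client's probe and a server-side template is evaluated on Paillier ciphertexts. It assumes the third-party python-paillier package (phe); the toy feature vectors, the threshold, and the final plaintext comparison are simplifications of the sketch, since a complete protocol would also blind the distance and perform the comparison obliviously.

```python
# Sketch: distance between an encrypted probe and a plaintext template, computed
# homomorphically so the server never sees the probe. Assumes `pip install phe`.
from phe import paillier

THRESHOLD = 10                                   # acceptance threshold (illustrative)

# Client: generates keys and encrypts the fresh biometric feature vector.
pub, priv = paillier.generate_paillier_keypair(n_length=2048)
probe = [7, 2, 9, 4]                             # toy feature values
enc_x = [pub.encrypt(v) for v in probe]          # Enc(x_i)
enc_x2 = [pub.encrypt(v * v) for v in probe]     # Enc(x_i^2)

# Server: holds the enrolled template in the clear and computes
#   d = sum(x_i^2) + sum(y_i^2) - 2 * sum(x_i * y_i)   under encryption.
template = [6, 1, 9, 5]
enc_d = sum(enc_x2) + sum(y * y for y in template)
for xi, yi in zip(enc_x, template):
    enc_d = enc_d + xi * (-2 * yi)               # ciphertext times plaintext scalar

# Client: decrypts the distance and applies the threshold. A full protocol would
# blind enc_d and run an oblivious comparison (e.g. a garbled circuit) instead.
print("match" if priv.decrypt(enc_d) <= THRESHOLD else "no match")
```

    Additive homomorphism is enough here because the template is in the clear on the server side; only multiplications by plaintext scalars and additions of ciphertexts are needed.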

    Actor-network procedures: Modeling multi-factor authentication, device pairing, social interactions

    As computation spreads from computers to networks of computers, and migrates into cyberspace, it ceases to be globally programmable, but it remains programmable indirectly: network computations cannot be controlled, but they can be steered by local constraints on network nodes. The tasks of "programming" global behaviors through local constraints belong to the area of security. The "program particles" that assure that a system of local interactions leads towards some desired global goals are called security protocols. As computation spreads beyond cyberspace, into physical and social spaces, new security tasks and problems arise. As networks are extended by physical sensors and controllers, including humans, and interlaced with social networks, the engineering concepts and techniques of computer security blend with the social processes of security. These new connectors for computational and social software require a new "discipline of programming" of global behaviors through local constraints. Since the new discipline seems to be emerging from a combination of established models of security protocols with older methods of procedural programming, we use the name procedures for these new connectors, which generalize protocols. In the present paper we propose actor-networks as a formal model of computation in heterogeneous networks of computers, humans and their devices; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks. Along the way, we survey the guiding ideas of Protocol Derivation Logic (also PDL) that evolved through our work in security over the last 10 years. Both formalisms are geared towards graphic reasoning and tool support. We illustrate their workings by analysing a popular form of two-factor authentication, and a multi-channel device pairing procedure devised for this occasion.
    Comment: 32 pages, 12 figures, 3 tables; journal submission; extended references, added discussion.
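    For readers who want a concrete anchor for the two-factor authentication case study, the sketch below implements a standard time-based one-time password (TOTP, RFC 6238) check of the kind commonly used as the second factor. It is not the paper's formalisation or its PDL analysis; the shared secret and parameters are illustrative, and only the Python standard library is used.

```python
# Sketch of a server-side second-factor check: a time-based one-time password
# (TOTP, RFC 6238) derived from a secret shared with the user's device.
# Not the paper's model; the secret and parameters are illustrative.
import hmac, hashlib, struct, time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """HOTP (RFC 4226) over a time-based counter, i.e. TOTP."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

SHARED_SECRET = b"illustrative-shared-secret"     # provisioned to the second device

def second_factor_ok(submitted_code: str) -> bool:
    """Assumes the first factor (e.g. a password) has already been checked."""
    return hmac.compare_digest(submitted_code, totp(SHARED_SECRET))
```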