
    Security Analysis of the Consumer Remote SIM Provisioning Protocol

    Remote SIM provisioning (RSP) for consumer devices is the protocol specified by the GSM Association for downloading SIM profiles into a secure element in a mobile device. The process is commonly known as eSIM, and it is expected to replace removable SIM cards. The security of the protocol is critical because the profile includes the credentials with which the mobile device will authenticate to the mobile network. In this paper, we present a formal security analysis of the consumer RSP protocol. We model the multi-party protocol in applied pi calculus, define formal security goals, and verify them in ProVerif. The analysis shows that the consumer RSP protocol protects against a network adversary when all the intended participants are honest. However, we also model the protocol in realistic partial compromise scenarios where the adversary controls a legitimate participant or communication channel. The security failures in the partial compromise scenarios reveal weaknesses in the protocol design. The most important observation is that the security of RSP depends unnecessarily on its encapsulation in a TLS tunnel. Also, the lack of pre-established identifiers means that a compromised download server anywhere in the world, or a compromised secure element, can be used for attacks against RSP between honest participants. Additionally, the lack of reliable methods for verifying user intent can lead to serious security failures. Based on the findings, we recommend practical improvements to RSP implementations, to future versions of the specification, and to mobile operator processes to increase the robustness of eSIM security.
    Comment: 33 pages, 8 figures. Associated ProVerif model files are located at https://github.com/peltona/rsp_mode
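
    One of the paper's recommendations is to bind a profile download to identifiers established before the session. A minimal sketch of that binding check in Python, assuming an invented order record (PendingOrder) and function name (download_profile); this illustrates the idea only and is not the GSMA specification's interface:

        # Hypothetical sketch: bind a profile download to a pre-established
        # eUICC identifier (EID). All names here are illustrative.
        from dataclasses import dataclass

        @dataclass
        class PendingOrder:
            matching_id: str   # activation-code token handed to the user
            expected_eid: str  # EID registered when the order was created

        def download_profile(order: PendingOrder, presented_eid: str,
                             matching_id: str) -> bytes:
            # Without a check like this, a compromised secure element
            # anywhere in the world could claim the profile.
            if matching_id != order.matching_id:
                raise PermissionError("unknown matching ID")
            if presented_eid != order.expected_eid:
                raise PermissionError("EID does not match the order")
            return b"<profile package bound to this EID>"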

    How to prove security of communication protocols? A discussion on the soundness of formal models w.r.t. computational ones.

    Security protocols are short programs that aim at securing communication over a public network. Their design is known to be error-prone, with flaws sometimes found years later. That is why they deserve a careful security analysis, with rigorous proofs. Two main lines of research have been (independently) developed to analyse the security of protocols. On the one hand, formal methods provide symbolic models and often automatic proofs. On the other hand, cryptographic models offer a more precise modeling, but proofs are more difficult to write and to check. An approach developed during the last decade consists in bridging the two approaches, showing that symbolic models are sound w.r.t. computational ones, yielding strong security guarantees using automatic tools. These results have been developed for several cryptographic primitives (e.g. symmetric and asymmetric encryption, signatures, hash functions) and security properties. While proving soundness of symbolic models is a very promising approach, several technical details are often not satisfactory. Focusing on symmetric encryption, we describe the difficulties and limitations of the available results.
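
    As a concrete illustration of the symbolic setting discussed above, the following toy Python sketch captures the Dolev-Yao abstraction of symmetric encryption: ciphertexts are opaque terms, and decryption succeeds only with exactly the right key (the perfect-cryptography assumption). A computational model would instead reason about bitstrings and probabilistic algorithms. The Enc/dec names are invented for illustration:

        # Symbolic (Dolev-Yao) view of symmetric encryption: terms, not bits.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Enc:
            key: str        # atomic key name, e.g. "k_ab"
            plaintext: str  # atomic message or nested term

        def dec(term: Enc, key: str) -> str:
            # Perfect cryptography: no partial leakage; decryption with
            # any other key simply fails.
            if term.key != key:
                raise ValueError("symbolic decryption fails: wrong key")
            return term.plaintext

        c = Enc("k_ab", "nonce_a")
        assert dec(c, "k_ab") == "nonce_a"

    Soundness results of the kind surveyed here show that, under suitable assumptions on the encryption scheme, attacks that are absent in this term algebra are also absent (with overwhelming probability) in the computational model.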

    A Method for Patching Interleaving-Replay Attacks in Faulty Security Protocols

    The verification of security protocols has attracted a lot of interest in the formal methods community, yielding two main verification approaches: i) state exploration, e.g. FDR [Gavin Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In TACAS'96: Proceedings of the Second International Workshop on Tools and Algorithms for Construction and Analysis of Systems, pages 147–166, London, UK, 1996. Springer-Verlag] and OFMC [D. Basin, S. Mödersheim, and L. Viganò. An on-the-fly model-checker for security protocol analysis. In D. Gollmann and E. Snekkenes, editors, ESORICS'03: 8th European Symposium on Research in Computer Security, number 2808 in Lecture Notes in Computer Science, pages 253–270, Gjøvik, Norway, 2003. Springer-Verlag]; and ii) theorem proving, e.g. the Isabelle inductive method [Lawrence C. Paulson. The inductive approach to verifying cryptographic protocols. Journal of Computer Security, 6(1-2):85–128, 1998] and Coral [G. Steel, A. Bundy, and M. Maidl. Attacking the Asokan-Ginzboorg protocol for key distribution in an ad-hoc Bluetooth network using Coral. In H. König, M. Heiner, and A. Wolisz, editors, IFIP TC6/WG 6.1: Proceedings of the 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2003, work in progress papers), volume 2767, pages 1–10, Berlin, Germany, 2003]. Complementing formal methods, Abadi and Needham's principles aim to guide the design of security protocols in order to make them simple and, hopefully, correct [M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. IEEE Transactions on Software Engineering, 22(1):6–15, 1996]. We are interested in a problem related to verification but far less explored: the correction of faulty security protocols. Experience has shown that the analysis of counterexamples or failed proof attempts often holds the key to completing proofs and to correcting a faulty model. In this paper, we introduce a method for patching faulty security protocols that are susceptible to an interleaving-replay attack. Our method uses Abadi and Needham's principles for the prudent engineering practice of cryptographic protocols to guide both the location of the fault in a protocol and the proposition of candidate patches. We have tested our method with encouraging results. The test set includes 21 faulty security protocols borrowed from the Clark-Jacob library [J. Clark and J. Jacob. A survey of authentication protocol literature: Version 1.0. Technical report, Department of Computer Science, University of York, November 1997. A complete specification of the Clark-Jacob library in CAPSL is available at http://www.cs.sri.com/millen/capsl/].
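
    As a concrete example of the kind of fault and patch involved, consider Lowe's well-known fix to the Needham-Schroeder public-key protocol from the Clark-Jacob library: adding the responder's name to the second message applies Abadi and Needham's explicitness principle and defeats the interleaving-replay (man-in-the-middle) attack. A toy Python sketch with an invented message encoding (tuples stand in for public-key ciphertexts), not the paper's implementation:

        # Lowe's fix to Needham-Schroeder public key, schematically.
        def msg2_original(na: str, nb: str) -> tuple:
            return (na, nb)              # {Na, Nb}_pk(A)

        def msg2_patched(na: str, nb: str, responder: str) -> tuple:
            return (na, nb, responder)   # {Na, Nb, B}_pk(A): explicit name

        def accept_original(msg: tuple, my_nonce: str) -> bool:
            # A has no name to check, so a reply produced by B in a
            # parallel session is indistinguishable from one sent by I.
            return msg[0] == my_nonce

        def accept_patched(msg: tuple, my_nonce: str,
                           intended_peer: str) -> bool:
            # The explicit responder name lets A detect the interleaving.
            return msg[0] == my_nonce and msg[2] == intended_peer

        # Lowe's attack: A runs the protocol with I, who relays B's reply.
        assert accept_original(msg2_original("Na", "Nb"), "Na")              # A fooled
        assert not accept_patched(msg2_patched("Na", "Nb", "B"), "Na", "I")  # A aborts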

    New Techniques to Improve Network Security

    With current technologies it is practically impossible to claim that a distributed application is safe from potential malicious attacks. Vulnerabilities may lie at several levels (cryptographic weaknesses, protocol design flaws, coding bugs both in the application and in the host operating system itself, to name a few) and can be extremely hard to find. Moreover, sometimes an attacker does not even need to find a software vulnerability, as authentication credentials might simply “leak” outside the network for several reasons. Luckily, the literature proposes several approaches that can contain these problems and enforce security, but the applicability of these techniques is often greatly limited due to the high level of expertise required, or simply because of the cost of the required specialized hardware. The aim of this thesis is to focus on two security enforcement techniques, namely formal methods and data analysis, and to present some improvements to the state of the art that reduce both the required expertise and the need for specialized hardware.

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to assess the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Telecomunicazioni of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
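
    To give a flavor of the data-plane verification work such surveys cover, the following toy Python sketch checks host reachability over a static forwarding table, in the spirit of network verification tools; the topology and rule format are invented for illustration:

        # Toy reachability check over per-switch forwarding rules.
        # forwarding[node] maps a destination host to the next hop.
        forwarding = {
            "s1": {"h2": "s2"},
            "s2": {"h2": "h2"},
        }

        def reachable(src: str, dst_host: str, max_hops: int = 8) -> bool:
            node = src
            for _ in range(max_hops):      # hop bound also catches loops
                if node == dst_host:
                    return True
                node = forwarding.get(node, {}).get(dst_host)
                if node is None:           # no matching rule: packet dropped
                    return False
            return False                   # loop within the hop bound

        assert reachable("s1", "h2")       # h2 is reachable from s1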