
    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad-hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key for their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover.
    Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references
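    The abstract mentions reliable entropy and resilient pseudo-random generators among ANCHOR's mechanisms. The sketch below only illustrates the general idea of a hash-based generator seeded from several independent entropy sources, so that the compromise of a single source does not by itself make the output predictable; the sources, names, and ratcheting scheme are illustrative assumptions, not ANCHOR's actual design.

```python
# Minimal sketch of a "resilient PRG" idea: mix several independent entropy
# sources so the output stays unpredictable even if some sources are weak or
# compromised. Names and source choices are illustrative assumptions only.
import hashlib
import hmac
import os
import time

def gather_entropy() -> bytes:
    """Concatenate material from several independent sources (hypothetical choice)."""
    sources = [
        os.urandom(32),                              # OS entropy pool
        time.perf_counter_ns().to_bytes(8, "big"),   # high-resolution timer reading
        os.getpid().to_bytes(4, "big"),              # weak, but independent, source
    ]
    return b"".join(sources)

class ResilientPRG:
    """Hash-based generator: all inputs are mixed through SHA-256, so a single
    compromised source does not determine the internal state."""

    def __init__(self) -> None:
        self.state = hashlib.sha256(gather_entropy()).digest()
        self.counter = 0

    def next_bytes(self, n: int = 32) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hmac.new(self.state, self.counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        # ratchet the state forward so earlier outputs cannot be reconstructed
        self.state = hashlib.sha256(self.state + b"ratchet").digest()
        return out[:n]

if __name__ == "__main__":
    prg = ResilientPRG()
    print(prg.next_bytes(16).hex())
```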

    Automated Cryptographic Analysis of the Pedersen Commitment Scheme

    Aiming for strong security assurance, there has recently been increasing interest in the formal verification of cryptographic constructions. This paper presents a mechanised formal verification of the popular Pedersen commitment protocol, proving its security properties of correctness, perfect hiding, and computational binding. To formally verify the protocol, we extended the theory of EasyCrypt, a framework which allows for reasoning in the computational model, to support the discrete logarithm and an abstraction of commitment protocols. Commitments are building blocks of many cryptographic constructions, for example, verifiable secret sharing, zero-knowledge proofs, and e-voting. Our work paves the way for the verification of those more complex constructions.
    Comment: 12 pages, conference MMM-ACNS 201
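    For readers unfamiliar with the scheme being verified, the toy sketch below shows the algebra behind a Pedersen commitment, c = g^m * h^r mod p. The tiny parameters and helper names are illustrative assumptions only and are unrelated to the paper's EasyCrypt development.

```python
# Toy Pedersen commitment, to make the correctness / hiding / binding
# properties concrete. Parameters are deliberately tiny and insecure; they
# only illustrate the algebra.
import secrets

p, q = 23, 11          # p = 2q + 1 with q prime (toy-sized, insecure)
g, h = 4, 9            # elements of the order-q subgroup; the committer must
                       # not know log_g(h) (assumed here)

def commit(m: int) -> tuple[int, int]:
    """Return (commitment, opening randomness) for a message m in Z_q."""
    r = secrets.randbelow(q)
    c = (pow(g, m, p) * pow(h, r, p)) % p
    return c, r

def verify(c: int, m: int, r: int) -> bool:
    """Correctness check: reopen the commitment with (m, r)."""
    return c == (pow(g, m, p) * pow(h, r, p)) % p

c, r = commit(7)
assert verify(c, 7, r)       # correctness
assert not verify(c, 8, r)   # opening to a different message requires log_g(h)
```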

    Revisiting Deniability in Quantum Key Exchange via Covert Communication and Entanglement Distillation

    We revisit the notion of deniability in quantum key exchange (QKE), a topic that remains largely unexplored. In the only work on this subject by Donald Beaver, it is argued that QKE is not necessarily deniable due to an eavesdropping attack that limits key equivocation. We provide more insight into the nature of this attack and how it extends to other constructions such as QKE obtained from uncloneable encryption. We then adopt the framework for quantum authenticated key exchange, developed by Mosca et al., and extend it to introduce the notion of coercer-deniable QKE, formalized in terms of the indistinguishability of real and fake coercer views. Next, we apply results from a recent work by Arrazola and Scarani on covert quantum communication to establish a connection between covert QKE and deniability. We propose DC-QKE, a simple deniable covert QKE protocol, and prove its deniability via a reduction to the security of covert QKE. Finally, we consider how entanglement distillation can be used to enable information-theoretically deniable protocols for QKE and tasks beyond key exchange.
    Comment: 16 pages, published in the proceedings of NordSec 201

    A Logic for Constraint-based Security Protocol Analysis

    We propose PS-LTL, a pure-past security linear temporal logic that allows the specification of a variety of authentication, secrecy and data freshness properties. Furthermore, we present a sound and complete decision procedure to establish the validity of security properties for symbolic execution traces, and show its integration with constraint-based analysis techniques.
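    To give a flavour of the pure-past operators such a logic builds on, here is a simplified propositional evaluator for past-time LTL over a finite trace. It is a sketch under the assumption of purely propositional events and does not model PS-LTL's symbolic messages or the paper's constraint-based decision procedure.

```python
# Simplified evaluator for pure-past LTL over a finite trace of event sets.
# Supported operators: atoms, not/and/or, Y (yesterday), S (since).
from typing import FrozenSet, List, Tuple, Union

Formula = Union[str, Tuple]   # an atom, or ("not"/"and"/"or"/"Y"/"S", ...)
Trace = List[FrozenSet[str]]  # one set of atomic events per step

def holds(phi: Formula, trace: Trace, i: int) -> bool:
    """Does phi hold at position i of the trace (pure-past semantics)?"""
    if isinstance(phi, str):                      # atomic proposition
        return phi in trace[i]
    op = phi[0]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "or":
        return holds(phi[1], trace, i) or holds(phi[2], trace, i)
    if op == "Y":                                 # phi held one step ago
        return i > 0 and holds(phi[1], trace, i - 1)
    if op == "S":                                 # phi1 held since phi2 held
        for j in range(i, -1, -1):
            if holds(phi[2], trace, j):
                return all(holds(phi[1], trace, k) for k in range(j + 1, i + 1))
        return False
    raise ValueError(f"unknown operator {op!r}")

def once(phi: Formula) -> Formula:
    """O(phi) == true S phi: phi held at some point in the past."""
    return ("S", ("or", phi, ("not", phi)), phi)

# Toy freshness-style check: a nonce may only be accepted after it was generated.
trace: Trace = [frozenset({"gen_nonce"}), frozenset(), frozenset({"accept_nonce"})]
prop = ("or", ("not", "accept_nonce"), once("gen_nonce"))
assert all(holds(prop, trace, i) for i in range(len(trace)))
```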

    Automatic analysis of distance bounding protocols

    Distance bounding protocols are used by nodes in wireless networks to calculate upper bounds on their distances to other nodes. However, dishonest nodes participating in protocol executions can render these calculations both illegitimate and inaccurate. It is important to analyze protocols for the possibility of such violations. Past efforts to analyze distance bounding protocols have only been manual. However, automated approaches are important since they are quite likely to find flaws that manual approaches cannot, as witnessed in the literature on the analysis of key establishment protocols. In this paper, we use the constraint solver tool to automatically analyze distance bounding protocols. We first formulate a new trace property called Secure Distance Bounding (SDB) that protocol executions must satisfy. We then classify the scenarios in which these protocols can operate, considering the (dis)honesty of nodes and the location of the attacker in the network. Finally, we extend the constraint solver so that it can be used to test protocols for violations of SDB in these scenarios, and illustrate our technique on some published protocols.
    Comment: 22 pages, appeared in Foundations of Computer Security (affiliated workshop of LICS 2009, Los Angeles, CA)
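    As background for what these protocols compute, the short sketch below shows the timing arithmetic behind a distance upper bound derived from a challenge-response round trip. The function and constants are illustrative assumptions and are not part of the paper's constraint-solver extension or its SDB property.

```python
# Back-of-the-envelope illustration of what a distance bounding protocol
# measures: the verifier times a challenge/response round trip and converts
# it into an upper bound on the prover's distance.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_upper_bound(rtt_seconds: float, processing_seconds: float) -> float:
    """Upper bound on the prover's distance from one timed exchange.

    rtt_seconds        -- measured challenge->response round-trip time
    processing_seconds -- claimed processing delay at the prover; a dishonest
                          prover that overstates this value (or replies early)
                          makes the bound inaccurate, which is the kind of
                          violation a secure distance bounding analysis targets.
    """
    flight_time = max(rtt_seconds - processing_seconds, 0.0)
    return SPEED_OF_LIGHT * flight_time / 2.0

# Example: a 100 ns round trip with negligible processing bounds the prover
# to within roughly 15 metres.
print(f"{distance_upper_bound(100e-9, 0.0):.1f} m")
```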

    New security notions and feasibility results for authentication of quantum data

    We give a new class of security definitions for authentication in the quantum setting. These definitions capture and strengthen existing definitions of security against quantum adversaries for both classical message authentication codes (MACs) and well as full quantum state authentication schemes. The main feature of our definitions is that they precisely characterize the effective behavior of any adversary when the authentication protocol accepts, including correlations with the key. Our definitions readily yield a host of desirable properties and interesting consequences; for example, our security definition for full quantum state authentication implies that the entire secret key can be re-used if the authentication protocol succeeds. Next, we present several protocols satisfying our security definitions. We show that the classical Wegman-Carter authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information. We then present conceptually simple constructions of full quantum state authentication. Finally, we prove a lifting theorem which shows that, as long as a protocol can securely authenticate the maximally entangled state, it can securely authenticate any state, even those that are entangled with the adversary. Thus, this shows that protocols satisfying a fairly weak form of authentication security automatically satisfy a stronger notion of security (in particular, the definition of Dupuis, et al (2012)).Comment: 50 pages, QCrypt 2016 - 6th International Conference on Quantum Cryptography, added a new lifting theorem that shows equivalence between a weak form of authentication security and a stronger notion that considers side informatio
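    As a purely classical point of reference for the Wegman-Carter construction mentioned above, here is a toy sketch that combines a 3-wise independent polynomial hash over a prime field with a one-time pad. It is an assumption-laden illustration of the classical scheme only and says nothing about the quantum security notions analysed in the paper.

```python
# Toy Wegman-Carter authentication: tag = (universal hash of message) + pad,
# with the hash drawn from a 3-wise independent family (a random degree-2
# polynomial over a prime field). Message encoding and sizes are illustrative.
import secrets

P = (1 << 127) - 1   # Mersenne prime 2^127 - 1, the hash field modulus

def keygen() -> tuple[int, int, int]:
    """Hash key: coefficients of a random degree-2 polynomial over GF(P)."""
    return (secrets.randbelow(P), secrets.randbelow(P), secrets.randbelow(P))

def tag(message: int, hash_key: tuple[int, int, int], one_time_pad: int) -> int:
    """Wegman-Carter tag: hash the message, then mask with a fresh pad.

    message      -- already encoded as an element of GF(P)
    one_time_pad -- uniformly random field element, used for one message only
    """
    a, b, c = hash_key
    h = (a * message * message + b * message + c) % P
    return (h + one_time_pad) % P

def verify(message: int, t: int, hash_key: tuple[int, int, int],
           one_time_pad: int) -> bool:
    return tag(message, hash_key, one_time_pad) == t

key = keygen()
pad = secrets.randbelow(P)
t = tag(42, key, pad)
assert verify(42, t, key, pad)
# A forged message passes only if the two hash values collide, which happens
# with probability about 1/P over the key choice.
assert not verify(43, t, key, pad)
```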