6 research outputs found

    Computational soundness for standard assumptions of formal cryptography

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 95-100).

    The Dolev-Yao model is a useful and well-known framework in which to analyze security protocols. However, it models the messages of the protocol at a very high level and makes extremely strong assumptions about the power of the adversary. The computational model of cryptography, on the other hand, takes a much lower-level view of messages and uses much weaker assumptions. Despite the large differences between these two models, we have been able to show that there exists a relationship between them. Previous results of ours demonstrate that certain kinds of computational cryptography can result in an equivalence of sorts between the formal and computational adversary. Specifically:
    * We gave an interpretation to the messages of the Dolev-Yao model in terms of computational cryptography,
    * We defined a computational security condition, called weak Dolev-Yao non-malleability, that translates the main assumptions of the Dolev-Yao model into the computational setting, and
    * We demonstrated that this condition is satisfied by a standard definition of computational encryption security called plaintext awareness.
    In this work, we consider this result and strengthen it in four ways:
    1. Firstly, we propose a stronger definition of Dolev-Yao non-malleability which ensures security against a more adaptive adversary.
    2. Secondly, the definition of plaintext awareness is considered suspect because it relies on a trusted third party called the random oracle. Thus, we show that our new notion of Dolev-Yao non-malleability is satisfied by a weaker and less troublesome definition for computational encryption called chosen-ciphertext security.
    3. Thirdly, we propose a new definition of plaintext awareness that does not use random oracles, and an implementation. This implementation is conceptually simple, and relies only on general assumptions. Specifically, it can be thought of as a 'self-referential' variation on a well-known encryption scheme.
    4. Lastly, we show how the computational soundness of the Dolev-Yao model can be maintained even as it is extended to include new operators. In particular, we show how the Diffie-Hellman key-agreement scheme and the computational Diffie-Hellman assumption can be added to the Dolev-Yao model in a computationally sound way.
    by Jonathan Herzog. Ph.D.
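
    The contrast between the symbolic and computational views, and the Diffie-Hellman extension in item 4, can be made concrete with a small sketch. The following Python fragment is only an illustration, not taken from the thesis: it shows the same key agreement once as Dolev-Yao style terms and once as modular exponentiation over a toy group whose parameters are far too small and structured for real security.

```python
# Illustrative only: a symbolic (Dolev-Yao style) view of Diffie-Hellman
# next to a computational one over a toy group.
import secrets

def symbolic_dh():
    # Messages are opaque terms; the adversary can only manipulate whole terms.
    msg_a = ("exp", "g", "a")                   # A sends the term g^a
    msg_b = ("exp", "g", "b")                   # B sends the term g^b
    shared = ("exp", "g", ("mul", "a", "b"))    # both parties derive g^(ab)
    return msg_a, msg_b, shared

P = 2**127 - 1   # a Mersenne prime; far too small/structured for real use
G = 3            # illustrative base element

def computational_dh():
    # Messages are numbers; security rests on the hardness of computing
    # g^(ab) from g^a and g^b (the computational Diffie-Hellman assumption).
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    msg_a = pow(G, a, P)          # A sends g^a mod p
    msg_b = pow(G, b, P)          # B sends g^b mod p
    key_a = pow(msg_b, a, P)      # A computes (g^b)^a
    key_b = pow(msg_a, b, P)      # B computes (g^a)^b
    assert key_a == key_b         # both hold g^(ab) mod p
    return key_a
```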

    Non-malleable public key encryption in BRSIM/UC

    We propose an extension to the BRSIM/UC library of Backes, Pfitzmann and Waidner [1] with non-malleable public key encryption. We also investigate the requirement of “full randomization” of public key encryption primitives in [1], and show that the additional randomization used to attain word uniqueness is not theoretically justified.
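
    As a rough illustration of what “full randomization” amounts to in this setting, the sketch below (my own simplification, not the construction in [1]) wraps an arbitrary public-key encryption function so that every ciphertext carries a fresh random tag and is therefore unique as a bitstring with overwhelming probability; the abstract's claim is that such an extra layer is not theoretically justified.

```python
# Illustrative sketch of "full randomization": tag every ciphertext with
# fresh randomness so ciphertext bitstrings are unique with overwhelming
# probability. `encrypt` is a placeholder for any randomized public-key
# encryption function (an assumption, not a specific library API).
import os
from typing import Callable, Tuple

def fully_randomize(encrypt: Callable[[bytes, bytes], bytes],
                    tag_len: int = 16):
    """Return an encryption function that appends a fresh random tag."""
    def wrapped(pk: bytes, message: bytes) -> Tuple[bytes, bytes]:
        tag = os.urandom(tag_len)          # fresh randomness per encryption
        return encrypt(pk, message), tag   # (ciphertext, uniqueness tag)
    return wrapped
```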

    On symbolic analysis of cryptographic protocols

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 91-94).

    The universally composable symbolic analysis (UCSA) framework layers Dolev-Yao style symbolic analysis on top of the universally composable (UC) security framework to construct computationally sound proofs of cryptographic protocol security. The original proposal of the UCSA framework by Canetti and Herzog (2004) focused on protocols that only use public key encryption to achieve 2-party mutual authentication or key exchange. This thesis expands the framework to include protocols that use digital signatures as well. In the process of expanding the framework, we identify a flaw in the framework's use of the UC ideal functionality F_PKE. We also identify issues that arise when combining F_PKE with the current formulation of the ideal signature functionality F_SIG. Motivated by these discoveries, we redefine the F_PKE and F_SIG functionalities appropriately.
    by Akshay Patil. M.Eng.
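
    For orientation, an ideal signature functionality in the UC style typically works by bookkeeping: the trusted functionality records every signature it issues and accepts nothing else at verification time. The Python sketch below is a heavily simplified illustration of that idea (session identifiers, the adversary interface, and corruptions are omitted); it is not the F_SIG formulation analyzed in the thesis.

```python
# Simplified illustration of an ideal signature functionality (F_SIG-style):
# the functionality remembers every (message, signature) pair it issued, so
# verification succeeds only for legitimately signed messages. Forgeries are
# impossible by construction rather than by a computational assumption.
import os

class IdealSignature:
    def __init__(self):
        self.records = set()            # recorded (message, signature) pairs

    def sign(self, message: bytes) -> bytes:
        signature = os.urandom(32)      # formal placeholder for a signature
        self.records.add((message, signature))
        return signature

    def verify(self, message: bytes, signature: bytes) -> bool:
        return (message, signature) in self.records

# Usage:
# f_sig = IdealSignature()
# sig = f_sig.sign(b"hello")
# assert f_sig.verify(b"hello", sig)
# assert not f_sig.verify(b"tampered", sig)
```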

    Automated Analysis of Security in Networking Systems


    Maintaining secrecy when information leakage is unavoidable

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 109-115).

    Sharing and maintaining long, random keys is one of the central problems in cryptography. This thesis provides techniques for ensuring the security of a cryptographic key when partial information about it has been, or must be, leaked to an adversary. We consider two basic approaches:
    1. Extracting a new, shorter, secret key from one that has been partially compromised. Specifically, we study the use of noisy data, such as biometrics and personal information, as cryptographic keys. Such data can vary drastically from one measurement to the next. We would like to store enough information to handle these variations, without having to rely on any secure storage; in particular, without storing the key itself in the clear. We solve the problem by casting it in terms of key extraction. We give a precise definition of what "security" should mean in this setting, and design practical, general solutions with rigorous analyses. Prior to this work, no solutions were known with satisfactory provable security guarantees.
    2. Ensuring that whatever is revealed is not actually useful. This is most relevant when the key itself is sensitive, for example when it is based on a person's iris scan or Social Security Number. This second approach requires the user to have some control over exactly what information is revealed, but this is often the case: for example, if the user must reveal enough information to allow another user to correct errors in a corrupted key. How can the user ensure that whatever information the adversary learns is not useful to her? We answer by developing a theoretical framework for separating leaked information from useful information. Our definition strengthens the notion of entropic security, considered before in a few different contexts.
    We apply the framework to get new results, creating (a) encryption schemes with very short keys, and (b) hash functions that leak no information about their input, yet, paradoxically, allow testing whether a candidate vector is close to the input. One of the technical contributions of this research is to provide new cryptographic uses of mathematical tools from complexity theory known as randomness extractors.
    by Adam Davison Smith. Ph.D.
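
    The first approach, extracting a stable key from noisy data such as a biometric reading, is often explained via the code-offset construction from the fuzzy-extractor literature. The Python sketch below is a minimal, assumption-laden illustration using a 3x repetition code; the thesis's actual constructions use stronger codes and a randomness extractor to derive a uniform key, neither of which is shown here.

```python
# Minimal code-offset secure sketch over bit vectors, using a 3x repetition
# code. Publish s = w XOR c for a random codeword c; later, a reading w'
# close to w together with s suffices to recover w exactly.
# Assumes len(w) is a multiple of REP. Illustrative only.
import secrets
from typing import List

REP = 3  # each data bit is protected by a 3-bit repetition codeword

def _encode(bits: List[int]) -> List[int]:
    return [b for b in bits for _ in range(REP)]

def _decode(bits: List[int]) -> List[int]:
    # Majority vote inside each 3-bit block corrects one flipped bit.
    return [1 if sum(bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(bits), REP)]

def sketch(w: List[int]) -> List[int]:
    """Return the public sketch s = w XOR c for a fresh random codeword c."""
    c = _encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy: List[int], s: List[int]) -> List[int]:
    """Decode (w' XOR s) back to the codeword c, then return w = s XOR c."""
    c = _encode(_decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [si ^ ci for si, ci in zip(s, c)]

# Usage: if w_noisy differs from w in at most one bit per 3-bit block,
# then recover(w_noisy, sketch(w)) reproduces w exactly.
```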

    Hierarchical and compositional verification of cryptographic protocols

    Two important approaches to the verification of security protocols are known under the general names of symbolic and computational, respectively. In the symbolic approach messages are terms of an algebra and the cryptographic primitives are ideally secure; in the computational approach messages are bitstrings and the cryptographic primitives are secure with overwhelming probability. This means, for example, that in the symbolic approach only those who know the decryption key can decrypt a ciphertext, while in the computational approach the probability of decrypting a ciphertext without knowing the decryption key is negligible. Usually, cryptographic protocols are the outcome of the interaction of several components: some are based on cryptographic primitives, others on other principles. In general, the result is a complex system that we would like to analyse in a modular way instead of studying it as a single system. A similar situation can be found in the context of distributed systems, where several probabilistic components interact with each other to implement a distributed algorithm. In this context, the analysis of the correctness of a complex system is very rigorous and is based on tools from information theory, such as the simulation method, which allows us to decompose large problems into smaller ones and to verify systems hierarchically and compositionally. The simulation method consists of establishing relations between the states of two automata, called simulation relations, and of verifying that such relations satisfy appropriate step conditions: each transition of the simulated system can be matched by the simulating system up to the given relation. Using a compositional approach we can study the properties of each small problem independently of the others and then derive the properties of the overall system. Furthermore, hierarchical verification allows us to build several intermediate refinements between specification and implementation. Often hierarchical and compositional verification is simpler and cleaner than direct one-step verification, since each refinement may focus on specific homogeneous aspects of the implementation.
    In this thesis we introduce a new simulation relation, which we call polynomially accurate simulation, or approximated simulation, that is compositional and allows us to adopt the hierarchical approach in our analyses. Polynomially accurate simulations extend the simulation relations of the distributed-systems setting, in both the strong and weak cases, taking into account the lengths of the computations and the computational properties of the cryptographic primitives. Besides polynomially accurate simulations, we provide other tools that can simplify the analysis of cryptographic protocols. The first is the concept of a conditional automaton, which makes it possible to safely remove events that occur with negligible probability: starting from a machine that can be attacked only with negligible probability, if we build an automaton conditional on the absence of these attacks, then there exists a simulation between the two. This allows us to work with simulation relations throughout, and in particular we can also prove compositionally that the elimination of negligible events from an automaton is safe. This property is justified by the conditional automaton theorem, which states that events are negligible if and only if the identity relation is an approximated simulation from the automaton to its conditional counterpart. Another tool is the execution correspondence theorem, which extends its distributed-systems counterpart and justifies the hierarchical approach: if we have several automata and a chain of simulations between them, then with overwhelming probability each execution of the first automaton is related to an execution of the last. In other words, the probability that the last automaton is unable to simulate an execution of the first is negligible. Finally, we use the polynomially accurate simulation framework to provide families of automata that implement commonly used cryptographic primitives and to prove that the symbolic approach is sound with respect to the computational approach.
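
    The simulation method described in this abstract rests on a step condition that polynomially accurate simulations relax by allowing probabilistic steps and a negligible error term. The toy Python checker below tests only the classical, non-probabilistic step condition for a candidate relation between two small labelled transition systems; it is an illustration of the general idea, not the thesis's definition.

```python
# Toy checker for the classical simulation step condition: R is a simulation
# from automaton A to automaton B if, whenever (s, t) is in R and s --a--> s',
# there exists t --a--> t' with (s', t') in R. Polynomially accurate
# simulations additionally track probabilities and a negligible error term,
# which this sketch deliberately ignores.
from typing import Dict, Set, Tuple

Transitions = Dict[str, Set[Tuple[str, str]]]  # state -> {(action, next_state)}

def is_simulation(rel: Set[Tuple[str, str]],
                  trans_a: Transitions, trans_b: Transitions) -> bool:
    for s, t in rel:
        for action, s_next in trans_a.get(s, set()):
            matched = any(action == b_action and (s_next, t_next) in rel
                          for b_action, t_next in trans_b.get(t, set()))
            if not matched:
                return False
    return True

# Example: B simulates A step for step via the relation R.
A = {"a0": {("send", "a1")}, "a1": {("recv", "a0")}}
B = {"b0": {("send", "b1")}, "b1": {("recv", "b0")}}
R = {("a0", "b0"), ("a1", "b1")}
assert is_simulation(R, A, B)
```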