16 research outputs found

    Evaluating the indistinguishability of the XTS mode in the proposed security model

    In this paper, we consider the indistinguishability of XTS in several security models, for both the full-final-block and partial-final-block cases. Firstly, some evaluations of the indistinguishability up-to-block are presented. Then, we present a new security model, based on an ε-collision-resistant function, in which the adversary cannot control the sector number. In this model, we give a bound on the distinguishing advantage an adversary can obtain when attacking XTS. The results obtained extend those of \cite{6}.
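    For orientation, the sketch below recalls the textbook per-block rule of standard XTS (IEEE P1619 / NIST SP 800-38E); the notation is ours, not the paper's, but it shows where the sector number enters as the tweak that the new model takes out of the adversary's control.

```latex
% Encryption of the j-th 128-bit block P_j of sector i under the XTS key pair (K_1, K_2):
T_j \;=\; E_{K_2}(i)\otimes \alpha^{j}
\qquad\text{(multiplication in }\mathrm{GF}(2^{128})\text{)},
\qquad
C_j \;=\; E_{K_1}(P_j \oplus T_j)\oplus T_j .
```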

    Space-efficient, byte-wise incremental and perfectly private encryption schemes

    The problem raised by incremental encryption is the overhead due to the larger storage space required by providing random blocks together with the ciphered versions of a given document. Moreover, permitting variable-length modifications of the ciphertext raises privacy-preservation issues. In this paper we present incremental encryption schemes which are space-efficient, byte-wise incremental, and which preserve perfect privacy in the sense that they hide the fact that an update operation has been performed on a ciphered document. For each scheme, update operations turn out to be very efficient, and we discuss the statistically adjustable trade-off between the computational cost and the storage space required by the produced ciphertexts.

    A Cryptographic Analysis of the TLS 1.3 Handshake Protocol

    We analyze the handshake protocol of the Transport Layer Security (TLS) protocol, version 1.3. We address both the full TLS 1.3 handshake (the one round-trip time mode, with signatures for authentication and (elliptic curve) Diffie–Hellman ephemeral ((EC)DHE) key exchange), and the abbreviated resumption/PSK mode which uses a pre-shared key for authentication (with optional (EC)DHE key exchange and zero round-trip time key establishment). Our analysis in the reductionist security framework uses a multi-stage key exchange security model, where each of the many session keys derived in a single TLS 1.3 handshake is tagged with various properties (such as unauthenticated versus unilaterally authenticated versus mutually authenticated, whether it is intended to provide forward security, how it is used in the protocol, and whether the key is protected against replay attacks). We show that these TLS 1.3 handshake protocol modes establish session keys with their desired security properties under standard cryptographic assumptions.
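    As a rough illustration of the 'many session keys' aspect analysed above, the following sketch reproduces the skeleton of the RFC 8446 key schedule with SHA-256. The PSK, (EC)DHE secret, and transcript values are placeholders; a real implementation derives them from the handshake itself.

```python
import hashlib, hmac

HASH, HASH_LEN = hashlib.sha256, 32

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, HASH).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    full = b"tls13 " + label.encode()
    info = length.to_bytes(2, "big") + bytes([len(full)]) + full + bytes([len(context)]) + context
    return hkdf_expand(secret, info, length)

def derive_secret(secret: bytes, label: str, transcript: bytes) -> bytes:
    return hkdf_expand_label(secret, label, HASH(transcript).digest(), HASH_LEN)

# Placeholder inputs: no PSK (all zeros), a dummy (EC)DHE secret, dummy transcripts.
psk, ecdhe = bytes(HASH_LEN), b"\x11" * 32
th_hs, th_ap = b"ClientHello..ServerHello", b"ClientHello..server Finished"

early_secret     = hkdf_extract(bytes(HASH_LEN), psk)
handshake_secret = hkdf_extract(derive_secret(early_secret, "derived", b""), ecdhe)
c_hs_traffic     = derive_secret(handshake_secret, "c hs traffic", th_hs)  # stage key
s_hs_traffic     = derive_secret(handshake_secret, "s hs traffic", th_hs)  # stage key
master_secret    = hkdf_extract(derive_secret(handshake_secret, "derived", b""), bytes(HASH_LEN))
c_ap_traffic     = derive_secret(master_secret, "c ap traffic", th_ap)     # stage key
```
    Each Derive-Secret call yields a separate stage key (handshake versus application traffic, client versus server side); these are exactly the keys that the multi-stage model tags with individual security properties.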

    Modeling Advanced Security Aspects of Key Exchange and Secure Channel Protocols

    Secure communication has become an essential ingredient of our daily life. Mostly unnoticed, cryptography is protecting our interactions today when we read emails or do banking over the Internet, withdraw cash at an ATM, or chat with friends on our smartphone. Security in such communication is enabled through two components. First, two parties that wish to communicate securely engage in a key exchange protocol in order to establish a shared secret key known only to them. The established key is then used in a follow-up secure channel protocol in order to protect the actual data communicated against eavesdropping or malicious modification on the way.

    In modern cryptography, security is formalized through abstract mathematical security models which describe the considered class of attacks a cryptographic system is supposed to withstand. Such models enable formal reasoning that no attacker can, in reasonable time, break the security of a system assuming the security of its underlying building blocks or that certain mathematical problems are hard to solve. Given that the assumptions made are valid, security proofs in that sense hence rule out a certain class of attackers with well-defined capabilities. In order for such results to be meaningful for the actually deployed cryptographic systems, it is of utmost importance that security models capture the system's behavior and threats faced in that 'real world' as accurately as possible, yet not be overly demanding in order to still allow for efficient constructions. If a security model fails to capture a realistic attack in practice, such an attack remains viable on a cryptographic system despite a proof of security in that model, at worst voiding the system's overall practical security.

    In this thesis, we reconsider the established security models for key exchange and secure channel protocols. To this end, we study novel and advanced security aspects that have been introduced in recent designs of some of the most important security protocols deployed, or that escaped a formal treatment so far. We introduce enhanced security models in order to capture these advanced aspects and apply them to analyze the security of major practical key exchange and secure channel protocols, either directly or through comparatively close generic protocol designs.

    Key exchange protocols have so far always been understood as establishing a single secret key, and then terminating their operation. This changed in recent practical designs, specifically of Google's QUIC ("Quick UDP Internet Connections") protocol and the upcoming version 1.3 of the Transport Layer Security (TLS) protocol, the latter being the de-facto standard for security protocols. Both protocols derive multiple keys in what we formalize in this thesis as a multi-stage key exchange (MSKE) protocol, with the derived keys potentially depending on each other and differing in cryptographic strength. Our MSKE security model allows us to capture such dependencies and differences between all keys established in a single framework. In this thesis, we apply our model to assess the security of both the QUIC and the TLS 1.3 key exchange design. For QUIC, we are able to confirm the intended overall security but at the same time highlight an undesirable dependency between the two keys QUIC derives. For TLS 1.3, we begin by analyzing the main key exchange mode as well as a reduced resumption mode. Our analysis attests that TLS 1.3 achieves strong security for all keys derived without undesired dependencies, in particular confirming several of this new TLS version's design goals. We then also compare the QUIC and TLS 1.3 designs with respect to a novel 'zero round-trip time' key exchange mode establishing an initial key with minimal latency, studying how differences in these designs affect the achievable key exchange security. As this thesis' last contribution in the realm of key exchange, we formalize the notion of key confirmation which ensures one party in a key exchange execution that the other party indeed holds the same key. Despite being frequently mentioned in practical protocol specifications, key confirmation was never comprehensively treated so far. In particular, our formalization exposes an inherent, slight difference in the confirmation guarantees both communication partners can obtain and enables us to analyze the key confirmation properties of TLS 1.3.

    Secure channels have so far been modeled as protecting a sequence of distinct messages using a single secret key. Our first contribution in the realm of channels originates from the observation that, in practice, secure channel protocols like TLS actually do not allow an application to transmit distinct, or atomic, messages. Instead, they provide applications with a streaming interface to transmit a stream of bits without any inherent demarcation of individual messages. Necessarily, the security guarantees of such an interface differ significantly from those considered in cryptographic models so far. In particular, messages may be fragmented in transport, and the recipient may obtain the sent stream in a different fragmentation, which has in the past led to confusion and practical attacks on major application protocol implementations. In this thesis, we formalize such stream-based channels and introduce corresponding security notions of confidentiality and integrity capturing the inherently increased complexity. We then present a generic construction of a stream-based channel based on authenticated encryption with associated data (AEAD) that achieves the strongest security notions in our model and serves as validation of the similar TLS channel design. We also study the security of such applications whose messages are inherently atomic and which need to safely transport these messages over a streaming, i.e., possibly fragmenting, channel. Formalizing the desired security properties in terms of confidentiality and integrity in such a setting, we investigate and confirm the security of the widely adopted approach to encode the application's messages into the continuous data stream.

    Finally, we study a novel paradigm employed in the TLS 1.3 channel design, namely to update the keys used to secure a channel during that channel's lifetime in order to strengthen its security. We propose and formalize the notion of multi-key channels deploying such sequences of keys and capture their advanced security properties in a hierarchical framework of confidentiality and integrity notions. We show that our hierarchy of notions naturally connects to the established notions for single-key channels and instantiate its strongest security notions with a generic AEAD-based construction. Being comparatively close to the TLS 1.3 channel protocol, our construction furthermore enables a comparative design discussion.
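    To make the multi-key channel paradigm above concrete, here is a minimal toy sender side (our own sketch under simplifying assumptions, not the thesis's construction): records are protected with an AEAD, the key can be ratcheted forward with a one-way update, and each ciphertext is bound to its key phase via the associated data.

```python
import hashlib, hmac
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def next_key(key: bytes) -> bytes:
    # One-way key update: the new key is derived from the old one, so learning
    # the current key does not reveal keys of earlier phases.
    return hmac.new(key, b"key update", hashlib.sha256).digest()[:16]

class MultiKeyChannelSender:
    """Toy sender of an AEAD channel whose key is updated during its lifetime."""

    def __init__(self, key: bytes):
        self.key, self.phase, self.seq = key, 0, 0

    def update_key(self) -> None:
        self.key, self.phase, self.seq = next_key(self.key), self.phase + 1, 0

    def send(self, plaintext: bytes) -> bytes:
        nonce = self.seq.to_bytes(12, "big")
        phase_ad = self.phase.to_bytes(4, "big")   # bind the record to its key phase
        ct = AESGCM(self.key).encrypt(nonce, plaintext, phase_ad)
        self.seq += 1
        return phase_ad + nonce + ct

ch = MultiKeyChannelSender(AESGCM.generate_key(bit_length=128))
c0 = ch.send(b"record under phase-0 key")
ch.update_key()                                    # e.g. triggered by a key update message
c1 = ch.send(b"record under phase-1 key")
```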

    On the Tight Security of TLS 1.3: Theoretically-Sound Cryptographic Parameters for Real-World Deployments

    We consider the theoretically-sound selection of cryptographic parameters, such as the size of algebraic groups or RSA keys, for TLS 1.3 in practice. While prior works gave security proofs for TLS 1.3, their security loss is quadratic in the total number of sessions across all users, which due to the pervasive use of TLS is huge. Therefore, in order to deploy TLS 1.3 in a theoretically-sound way, it would be necessary to compensate for this loss with unreasonably large parameters that would be infeasible for practical use at large scale. Hence, while these previous works show that in principle the design of TLS 1.3 is secure in an asymptotic sense, they do not yet provide any useful concrete security guarantees for real-world parameters used in practice. In this work, we provide a new security proof for the cryptographic core of TLS 1.3 in the random oracle model, which reduces the security of TLS 1.3 tightly (that is, with constant security loss) to the (multi-user) security of its building blocks. For some building blocks, such as the symmetric record layer encryption scheme, we can then rely on prior work to establish tight security. For others, such as the RSA-PSS digital signature scheme currently used in TLS 1.3, we obtain at least a linear loss in the number of users, independent of the number of sessions, which is much easier to compensate for with reasonable parameters. Our work also shows that by replacing the RSA-PSS scheme with a tightly-secure scheme (e.g., in a future TLS version), one can obtain the first fully tightly-secure TLS protocol. Our results enable a theoretically-sound selection of parameters for TLS 1.3, even in large-scale settings with many users and sessions per user.
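    To see why a quadratic loss matters, consider an illustrative back-of-the-envelope calculation (the numbers are ours, not taken from the paper). With s sessions in total, a non-tight proof only guarantees

```latex
\mathrm{Adv}_{\mathrm{TLS}} \;\le\; s^{2}\cdot \mathrm{Adv}_{\mathrm{block}},
\qquad s = 2^{35} \;\Longrightarrow\; s^{2} = 2^{70},
```
    so bounding the TLS advantage by 2^-60 already forces the building block's advantage below 2^-130, i.e. roughly 70 extra bits of security from the underlying groups or keys. With a tight proof (constant loss), a building-block advantage of about 2^-60 suffices, so standard parameter sizes remain sound.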

    User-Controlled Computations in Untrusted Computing Environments

    Computing infrastructures are challenging and expensive to maintain. This led to the growth of cloud computing, with users renting computing resources from centralized cloud providers. There is also recent promise in providing decentralized computing resources from many participating users across the world. The 'compute on your own server' model is hence no longer prominent. But traditional computer architectures, which were designed to give complete power to the owner of the computing infrastructure, continue to be used in deploying these new paradigms. This forces users to completely trust the infrastructure provider with all their data. The cryptography and security communities research two different ways to tackle this problem. The first line of research involves developing powerful cryptographic constructs with formal security guarantees. The primitive of functional encryption (FE) formalizes the solutions where the clients do not interact with the server during the computation. FE enables a user to provide computation-specific secret keys which the server can use to perform the user-specified computations (and only those) on her encrypted data. The second line of research involves designing new hardware architectures which remove the infrastructure owner from the trust base. The solutions here tend to have better performance, but their security guarantees are not well understood. This thesis provides contributions along both lines of research. In particular, 1) We develop a (single-key) functional encryption construction where the size of secret keys does not grow with the size of the descriptions of the computations, while also providing a tighter security reduction to the underlying computational assumption. This construction supports the computation class of branching programs. Previous works for this computation class achieved either short keys or tighter security reductions, but not both. 2) We formally model the primitive of trusted hardware inspired by Intel's Software Guard eXtensions (SGX). We then construct an FE scheme in a strong security model using this trusted hardware primitive. We implement this construction in our system Iron and evaluate its performance. Previously, the constructions in this model relied on heavy cryptographic tools and were not practical. 3) We design an encrypted database system, StealthDB, that provides complete SQL support. StealthDB is built on top of Intel SGX and designed with the usability and security limitations of SGX in mind. The StealthDB implementation on top of Postgres achieves practical performance (30% overhead over plaintext evaluation) with a strong leakage profile against adversaries who get snapshot access to the memory of the system. It achieves a more gradual degradation in security against persistent adversaries than prior designs that aimed at practical performance and complete SQL support. We finally survey the research on providing security against quantum adversaries for the building blocks of SGX.
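    The hardware-assisted FE flow sketched in point 2) can be pictured with a toy model; this is our own illustration of the general idea, not the Iron protocol, and it uses symmetric primitives only to stay self-contained: a functional key for a computation f is simply an authorisation tag on f's description, and an enclave stub evaluates f on the decrypted data only if that tag verifies.

```python
import hashlib, hmac, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MASTER_MAC_KEY = b"m" * 32   # provisioned to the authority and the enclave
DATA_KEY = b"k" * 16         # used by the client to encrypt her data

def keygen(func_src: str) -> bytes:
    """Authority: issue a functional key (here, a MAC) for one specific function."""
    return hmac.new(MASTER_MAC_KEY, func_src.encode(), hashlib.sha256).digest()

def enclave_eval(func_src: str, func_key: bytes, nonce: bytes, ct: bytes):
    """Enclave stub: evaluate func_src on the plaintext only if it was authorised."""
    if not hmac.compare_digest(keygen(func_src), func_key):
        raise PermissionError("no functional key issued for this computation")
    data = json.loads(AESGCM(DATA_KEY).decrypt(nonce, ct, None))
    env = {}
    exec(func_src, env)       # toy only; a real system runs f in a fixed, vetted runtime
    return env["f"](data)

# The client encrypts her record; the authority authorises only the mean computation.
nonce = b"\x00" * 12
ct = AESGCM(DATA_KEY).encrypt(nonce, json.dumps([3, 5, 7]).encode(), None)
f_src = "def f(xs): return sum(xs) / len(xs)"
print(enclave_eval(f_src, keygen(f_src), nonce, ct))   # -> 5.0
```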

    On Quantum Simulation-Soundness

    Non-interactive zero-knowledge (NIZK) proof systems are a cornerstone of modern cryptography, but their security has received little attention in the quantum setting. Motivated by improving our understanding of this fundamental primitive against quantum adversaries, we propose a new definition of security against quantum adversaries. Specifically, we define the notion of quantum simulation soundness (SS-NIZK), which allows the adversary to access the simulator in superposition. We show a separation between post-quantum and quantum security of SS-NIZK, and prove that both Sahai's construction for SS-NIZK (in the CRS model) and the Fiat-Shamir transformation (in the QROM) can be made quantumly simulation-sound. As an immediate application of our new notion, we prove the security of the Naor-Yung paradigm in the quantum setting, with respect to a strong quantum IND-CCA security notion. This provides the quantum analogue of the classical dual-key approach to proving the security of encryption schemes. Along the way, we introduce a new notion of quantum-query advantage functions, which may be used as a general framework to show classical/quantum separations for other cryptographic primitives, and which may be of independent interest.
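    For context, the classical 'dual key' (Naor-Yung) construction referenced above encrypts the same message under two independent public keys and attaches a simulation-sound NIZK proof of plaintext equality:

```latex
c \;=\; \bigl(\, c_1 = \mathrm{Enc}_{pk_1}(m; r_1),\;\;
               c_2 = \mathrm{Enc}_{pk_2}(m; r_2),\;\;
               \pi \,\bigr),
```
    where π proves that c_1 and c_2 encrypt the same plaintext. The CCA security argument leans on the simulation soundness of the proof system, which is why a quantum notion of simulation soundness is the natural prerequisite for the quantum analogue.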

    Securing clouds using cryptography and traffic classification

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Over the last decade, cloud computing has gained popularity and wide acceptance, especially within the health sector, where it offers several advantages such as low costs, flexible processes, and access from anywhere. Although cloud computing is widely used in the health sector, numerous issues remain unresolved. Several studies have attempted to review the state of the art in eHealth cloud privacy and security; however, some of these studies are outdated or do not cover certain vital features of cloud security and privacy, such as access control, revocation, and data recovery plans. This study targets some of these problems and proposes protocols, algorithms and approaches to enhance the security and privacy of cloud computing, with particular reference to eHealth clouds.

    Chapter 2 presents an overview and evaluation of the state of the art in eHealth security and privacy. Chapter 3 introduces different research methods and describes the research design methodology and processes used to carry out the research objectives. Of particular importance are authenticated key exchange and block cipher modes.

    In Chapter 4, a three-party password-based authenticated key exchange (TPAKE) protocol is presented and its security analysed. The proposed TPAKE protocol shares no plaintext data; all data shared between the parties are either hashed or encrypted. Using the random oracle model (ROM), the security of the proposed TPAKE protocol is formally proven based on the computational Diffie-Hellman (CDH) assumption. Furthermore, the analysis included in this chapter shows that the proposed protocol can ensure perfect forward secrecy and resist many kinds of common attacks, such as man-in-the-middle attacks, online and offline dictionary attacks, replay attacks and known-key attacks.

    Chapter 5 proposes a parallel block cipher (PBC) mode in which cipher blocks are processed in parallel. The results of speed performance tests for this PBC mode in various settings are presented and compared with the standard CBC mode. Compared to the CBC mode, the PBC mode is shown to give execution time savings of 60%. Furthermore, in addition to encryption based on AES-128, the hash value of the data file can be utilised to provide an integrity check. As a result, the PBC mode has better speed performance while retaining the confidentiality and security provided by the CBC mode.

    Chapter 6 applies TPAKE and PBC to eHealth clouds. Related work on security, privacy preservation and disaster recovery is reviewed. Next, two approaches focusing on security preservation and privacy preservation, and a disaster recovery plan, are proposed. The security preservation approach is a robust means of ensuring the security and integrity of electronic health records and is based on the PBC mode, while the privacy preservation approach is an efficient authentication method which protects the privacy of personal health records and is based on the TPAKE protocol. A discussion about how these integrated approaches and the disaster recovery plan can ensure the reliability and security of cloud projects follows.

    Distributed denial of service (DDoS) attacks are the second most common cybercrime attacks after information theft. The timely detection and prevention of such attacks in cloud projects are therefore vital, especially for eHealth clouds. Chapter 7 presents a new classification system for detecting and preventing DDoS TCP flood attacks (CS_DDoS) for public clouds, particularly in an eHealth cloud environment. The proposed CS_DDoS system offers a solution for securing stored records by classifying incoming packets and making a decision based on these classification results. During the detection phase, CS_DDoS identifies and determines whether a packet is normal or originates from an attacker. During the prevention phase, packets classified as malicious are denied access to the cloud service, and the source IP is blacklisted. The performance of the CS_DDoS system is compared using four different classifiers: a least-squares support vector machine (LS-SVM), naïve Bayes, K-nearest-neighbour, and multilayer perceptron. The results show that CS_DDoS yields the best performance when the LS-SVM classifier is used. This combination can detect DDoS TCP flood attacks with an accuracy of approximately 97% and a Kappa coefficient of 0.89 when under attack from a single source, and 94% accuracy and a Kappa coefficient of 0.9 when under attack from multiple attackers. These results are then discussed in terms of accuracy and time complexity, and are validated using a k-fold cross-validation model.

    Finally, a method to mitigate DoS attacks in the cloud and reduce excessive energy consumption through managing and limiting certain flows of packets is proposed. Instead of a system shutdown, the proposed method ensures the availability of service. The proposed method manages the incoming packets more effectively by dropping packets from the most frequently requesting sources. This method can process 98.4% of the accepted packets during an attack. Practicality and effectiveness are essential requirements of methods for preserving the privacy and security of data in clouds. The proposed methods successfully secure cloud projects and ensure the availability of services in an efficient way.
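    As a rough sketch of the classifier comparison described for Chapter 7 (our own toy setup, not the CS_DDoS system itself), the snippet below evaluates several scikit-learn classifiers with k-fold cross-validation on synthetic per-source traffic features; scikit-learn has no LS-SVM, so a standard SVC stands in for it, and the feature values are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-source features: packets/s, SYN ratio, mean packet size (bytes).
normal = rng.normal([100, 0.1, 500], [30, 0.05, 150], size=(500, 3))
attack = rng.normal([900, 0.9, 60],  [200, 0.05, 20],  size=(500, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)              # 0 = normal, 1 = DDoS source

classifiers = {
    "SVC (stand-in for LS-SVM)": SVC(),
    "naive Bayes": GaussianNB(),
    "k-nearest neighbours": KNeighborsClassifier(),
    "multilayer perceptron": MLPClassifier(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name:28s} accuracy = {scores.mean():.3f}")
```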

    Analysis Design & Applications of Cryptographic Building Blocks

    This thesis deals with the basic design and rigorous analysis of cryptographic schemes and primitives, especially of authenticated encryption schemes, hash functions, and password-hashing schemes. In the last decade, security issues such as the PS3 jailbreak demonstrate that common security notions are rather restrictive, and it seems that they do not model the real world adequately. As a result, in the first part of this work, we introduce a less restrictive security model that is closer to reality. In this model it turned out that existing (on-line) authenticated encryption schemes can no longer be considered secure, i.e., they can guarantee neither data privacy nor data integrity. Therefore, we present two novel authenticated encryption schemes, namely COFFE and McOE, which are not only secure in the standard model but also reasonably secure in our generalized security model, i.e., both preserve full data integrity. In addition, McOE preserves a reasonable level of data privacy. The second part of this thesis starts with proposing the hash function Twister-Pi, a revised version of the accepted SHA-3 candidate Twister. We not only fixed all known security issues of Twister, but also increased the overall soundness of our hash-function design. Furthermore, we present some fundamental groundwork in the area of password-hashing schemes. This research was mainly inspired by the media omnipresence of password-leakage incidents. We show that the password-hashing scheme scrypt is vulnerable to cache-timing attacks due to the existence of a password-dependent memory-access pattern. Finally, we introduce Catena, the first password-hashing scheme that is both memory-consuming and resistant to cache-timing attacks.
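    The cache-timing issue mentioned above can be seen in a toy comparison (our own sketch, not scrypt or Catena code): in the first loop the index of the memory cell read next depends on secret, password-derived state and can therefore leak through the cache, while the second loop follows a fixed, public access schedule.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def data_dependent(password: bytes, salt: bytes, n: int = 1 << 10) -> bytes:
    """Toy scrypt-style loop: the next index is derived from secret state."""
    v = [H(password, salt, i.to_bytes(4, "big")) for i in range(n)]
    x = H(password, salt)
    for _ in range(n):
        j = int.from_bytes(x[:4], "big") % n   # password-dependent memory access
        x = H(x, v[j])
    return x

def data_independent(password: bytes, salt: bytes, n: int = 1 << 10) -> bytes:
    """Toy cache-timing-resistant loop: the access order is fixed and public."""
    v = [H(password, salt, i.to_bytes(4, "big")) for i in range(n)]
    x = H(password, salt)
    for j in range(n):                         # password-independent memory access
        x = H(x, v[j])
    return x
```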

    Protecting applications using trusted execution environments

    While cloud computing has been broadly adopted, companies that deal with sensitive data are still reluctant to do so due to privacy concerns or legal restrictions. Vulnerabilities in complex cloud infrastructures, resource sharing among tenants, and malicious insiders pose a real threat to the confidentiality and integrity of sensitive customer data. In recent years, trusted execution environments (TEEs), hardware-enforced isolated regions that can protect code and data from the rest of the system, have become available as part of commodity CPUs. However, designing applications for execution within TEEs requires careful consideration of the elevated threats that come with running in a fully untrusted environment. Interaction with the environment should be minimised, but some cooperation with the untrusted host is required, e.g., for disk and network I/O, via a host interface. Implementing this interface while maintaining the security of sensitive application code and data is a fundamental challenge. This thesis addresses this challenge and discusses how TEEs can be leveraged to secure existing applications efficiently and effectively in untrusted environments. We explore this in the context of three systems that deal with the protection of TEE applications and their host interfaces: SGX-LKL is a library operating system that can run full, unmodified applications within TEEs with a minimal general-purpose host interface. By providing broad system support inside the TEE, the reliance on the untrusted host can be reduced to a minimal set of low-level operations that cannot be performed inside the enclave. SGX-LKL transparently protects the host interface for both disk and network I/O. Glamdring is a framework for the semi-automated partitioning of TEE applications into an untrusted and a trusted compartment. Based on source-level annotations, it uses either dynamic or static code analysis to identify sensitive parts of an application. Taking into account the objectives of a small TCB size and low host interface complexity, it defines an application-specific host interface and generates partitioned application code. EnclaveDB is a secure database using Intel SGX, based on a partitioned in-memory database engine. The core of EnclaveDB is its logging and recovery protocol for transaction durability. For this, it relies on the database log managed and persisted by the untrusted database server. EnclaveDB protects against advanced host interface attacks and ensures the confidentiality, integrity, and freshness of sensitive data.
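    As a toy illustration of a minimal, protected host interface for disk I/O (our own sketch, not SGX-LKL code): the in-enclave layer seals every block with an AEAD before it crosses the boundary, so the untrusted host only ever stores authenticated ciphertext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class UntrustedHost:
    """Stands in for the host OS: it only ever sees sealed (encrypted) blocks."""
    def __init__(self):
        self.disk = {}
    def write_block(self, idx: int, blob: bytes) -> None:   # host interface call
        self.disk[idx] = blob
    def read_block(self, idx: int) -> bytes:                 # host interface call
        return self.disk[idx]

class EnclaveBlockStore:
    """Toy in-enclave layer: seals each block before handing it to the host."""
    def __init__(self, host: UntrustedHost, sealing_key: bytes):
        self.host, self.aead = host, AESGCM(sealing_key)
    def write(self, idx: int, data: bytes) -> None:
        nonce = os.urandom(12)
        ad = idx.to_bytes(8, "big")            # bind the ciphertext to its block index
        self.host.write_block(idx, nonce + self.aead.encrypt(nonce, data, ad))
    def read(self, idx: int) -> bytes:
        blob = self.host.read_block(idx)
        return self.aead.decrypt(blob[:12], blob[12:], idx.to_bytes(8, "big"))

store = EnclaveBlockStore(UntrustedHost(), AESGCM.generate_key(bit_length=128))
store.write(0, b"sensitive customer record")
assert store.read(0) == b"sensitive customer record"
```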