10 research outputs found

    Problems of SSL/TLS Usage

    One of the means of creating a secure communication session is the SSL/TLS cryptographic protocol; however, it does not guarantee full protection and has its own vulnerabilities and shortcomings, which must be analyzed and eliminated in the future. In particular, this paper analyzes the basic terminology, analyzes and generalizes the protocol's vulnerabilities, and presents the aspects that make the "man in the middle" attack and its variations possible, the problems of certificate substitution and self-signed certificates, authentication defects, application library vulnerabilities, the key exchange problem (including Bleichenbacher's attack), public key infrastructure problems, the problem of interoperability in Ukraine, and the most recent vulnerabilities of this protocol (SWEET32, DROWN, ROBOT). The result of the research is a compiled list of unsolved problems and recommendations for increasing the protocol's level of cryptographic resistance.
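    The certificate-substitution and self-signed-certificate problems surveyed above come down to whether a client actually validates the server's certificate chain and hostname. A minimal Python sketch of the two configurations (illustrative only; the dangerous settings below are exactly what enables a man-in-the-middle with a substituted certificate):

```python
import ssl

# Safe default: verifies the certificate chain against the system
# trust store and checks the hostname (blocks trivial MITM).
strict = ssl.create_default_context()

# Dangerous configuration often found in application code: accepting
# any certificate, including self-signed or substituted ones.
permissive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
permissive.check_hostname = False       # must be disabled first
permissive.verify_mode = ssl.CERT_NONE  # then validation can be turned off

print(strict.verify_mode == ssl.CERT_REQUIRED, permissive.verify_mode == ssl.CERT_NONE)
```

    A client built on `permissive` will complete a handshake with any attacker-supplied certificate, which is why audits of application libraries repeatedly flag this pattern.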

    A First Look at QUIC in the Wild

    For the first time since the establishment of TCP and UDP, the Internet transport layer is subject to a major change through the introduction of QUIC. Initiated by Google in 2012, QUIC provides a reliable, connection-oriented, low-latency, and fully encrypted transport. In this paper, we provide the first broad assessment of QUIC usage in the wild. We have monitored the entire IPv4 address space since August 2016, and about 46% of the DNS namespace, to detect QUIC-capable infrastructures. Our scans show that the number of QUIC-capable IPs has more than tripled since then, to over 617.59 K. We find around 161 K domains hosted on QUIC-enabled infrastructure, but only 15 K of them present valid certificates over QUIC. Second, we analyze one year of traffic traces provided by MAWI, one day of traffic from a major European tier-1 ISP, and traffic from a large IXP to understand the dominance of QUIC in the Internet traffic mix. We find QUIC to account for 2.6% to 9.1% of current Internet traffic, depending on the vantage point. This share is dominated by Google, which pushes up to 42.1% of its traffic via QUIC.
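    Besides active scanning, a common passive signal of QUIC support is the HTTP `Alt-Svc` response header, through which servers advertise QUIC/HTTP-3 endpoints. A deliberately naive illustrative parser (the sample header value is made up, and quoted commas inside parameters are not handled):

```python
def parse_alt_svc(header: str) -> dict:
    """Parse an Alt-Svc header value into {protocol-id: authority} (naive)."""
    services = {}
    for entry in header.split(","):
        # Each entry looks like: proto="host:port"; ma=seconds
        first = entry.split(";")[0].strip()
        if "=" not in first:
            continue
        proto, authority = first.split("=", 1)
        services[proto.strip()] = authority.strip().strip('"')
    return services

sample = 'h3=":443"; ma=2592000, h3-29=":443"; ma=2592000'
print(parse_alt_svc(sample))  # {'h3': ':443', 'h3-29': ':443'}
```

    Seeing `h3` or legacy `quic` entries in responses from a domain is a hint, though, as the scan results above show, only a fraction of such domains actually serve valid certificates over QUIC.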

    Analysis of QUIC Session Establishment and its Implementations

    In recent years, the major web companies have been working to improve the user experience and to secure the communications between their users and the services they provide. QUIC is one such initiative, and it is currently being designed by the IETF. In a nutshell, QUIC originally intended to merge features from TCP/SCTP, TLS 1.3, and HTTP/2 into one big protocol. The current specification proposes a more modular definition, where each feature (transport, cryptography, application, packet retransmission) is defined in a separate internet draft. We studied the QUIC internet drafts related to the transport and cryptographic layers, from version 18 to version 23, and focused on connection establishment with existing implementations. We propose a first implementation of QUIC connection establishment using Scapy, which allowed us to form a critical opinion of the current specification, with a special focus on the difficulties it induces in implementations. With our simple stack, we also tested the behaviour of the existing implementations with regard to security-related constraints (explicit or implicit) from the internet drafts. This gives us an interesting view of the state of QUIC implementations.
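    The version-independent part of the QUIC wire format (the "invariants": one flag byte, a 4-byte version, then length-prefixed connection IDs) is small enough to sketch a parser for, in the spirit of the Scapy-based stack described above. This is an independent illustration under the RFC 8999 invariants, not the authors' code:

```python
import struct

def parse_long_header(data: bytes):
    """Parse the version-independent QUIC long-header fields (RFC 8999 invariants)."""
    if not data or not (data[0] & 0x80):      # high bit set => long header
        raise ValueError("not a long-header packet")
    version = struct.unpack(">I", data[1:5])[0]
    dcid_len = data[5]
    dcid = data[6:6 + dcid_len]
    off = 6 + dcid_len
    scid_len = data[off]
    scid = data[off + 1:off + 1 + scid_len]
    return version, dcid, scid

# Hand-built example: version 1, a 4-byte destination CID, empty source CID.
pkt = bytes([0xC0]) + b"\x00\x00\x00\x01" + bytes([4]) + b"\xaa\xbb\xcc\xdd" + bytes([0])
print(parse_long_header(pkt))
```

    Everything beyond these fields is version-specific, which is precisely why the drafts split transport and cryptographic layers into separate documents.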

    Security in a Distributed Key Management Approach


    Postcards from the post-HTTP world: Amplification of HTTPS vulnerabilities in the web ecosystem

    HTTPS aims at securing communication over the Web by providing a cryptographic protection layer that ensures the confidentiality and integrity of communication and enables client/server authentication. However, HTTPS is based on the SSL/TLS protocol suites, which have been shown to be vulnerable to various attacks over the years. This has required fixes and mitigations both in servers and in browsers, producing a complicated mixture of protocol versions and implementations in the wild, which makes it unclear which attacks are still effective on the modern Web and what their impact is on web application security. In this paper, we present the first systematic quantitative evaluation of web application insecurity due to cryptographic vulnerabilities. We specify attack conditions against TLS using attack trees, and we crawl the Alexa Top 10k to assess the impact of these issues on page integrity, authentication credentials, and web tracking. Our results show that the security of a considerable number of websites is severely harmed by cryptographic weaknesses that, in many cases, are due to external or related-domain hosts. This empirically, yet systematically, demonstrates how a relatively limited number of exploitable HTTPS vulnerabilities are amplified by the complexity of the web ecosystem.
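    The attack-tree methodology mentioned above can be illustrated with a tiny evaluator: an attack condition at an OR node holds if any child holds, and at an AND node only if all children do. The tree and condition names below are invented for illustration, not the paper's actual attack trees:

```python
def evaluate(tree, facts):
    """Evaluate an attack tree: ("or"/"and", children) nodes or leaf condition names."""
    if isinstance(tree, str):            # leaf: a concrete attack precondition
        return tree in facts
    op, children = tree
    results = [evaluate(child, facts) for child in children]
    return any(results) if op == "or" else all(results)

# Hypothetical tree: confidentiality breaks if the server exposes an
# RSA decryption oracle (ROBOT-style), OR supports both export ciphers
# and weak 512-bit DHE.
tree = ("or", ["rsa_oracle", ("and", ["export_ciphers", "dhe_512"])])
print(evaluate(tree, {"export_ciphers", "dhe_512"}))  # True: the AND branch holds
```

    Crawling then amounts to collecting the leaf facts per host (including related-domain hosts) and evaluating the trees, which is what makes the assessment systematic rather than ad hoc.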

    Verified Models and Reference Implementations for the TLS 1.3 Standard Candidate

    TLS 1.3 is the next version of the Transport Layer Security (TLS) protocol. Its clean-slate design is a reaction both to the increasing demand for low-latency HTTPS connections and to a series of recent high-profile attacks on TLS. The hope is that a fresh protocol with modern cryptography will prevent legacy problems; the danger is that it will expose new kinds of attacks, or reintroduce old flaws that were fixed in previous versions of TLS. After 18 drafts, the protocol is nearing completion, and the working group has appealed to researchers to analyze the protocol before publication. This paper responds by presenting a comprehensive analysis of the TLS 1.3 Draft-18 protocol. We seek to answer three questions that have not been fully addressed in previous work on TLS 1.3: (1) Does TLS 1.3 prevent well-known attacks on TLS 1.2, such as Logjam or the Triple Handshake, even if it is run in parallel with TLS 1.2? (2) Can we mechanically verify the computational security of TLS 1.3 under standard (strong) assumptions on its cryptographic primitives? (3) How can we extend the guarantees of the TLS 1.3 protocol to the details of its implementations? To answer these questions, we propose a methodology for developing verified symbolic and computational models of TLS 1.3 hand-in-hand with a high-assurance reference implementation of the protocol. We present symbolic ProVerif models for various intermediate versions of TLS 1.3 and evaluate them against a rich class of attacks to reconstruct both known and previously unpublished vulnerabilities that influenced the current design of the protocol. We present a computational CryptoVerif model for TLS 1.3 Draft-18 and prove its security. We present RefTLS, an interoperable implementation of TLS 1.0-1.3 and automatically analyze its protocol core by extracting a ProVerif model from its typed JavaScript code.
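    The TLS 1.3 key schedule analyzed in work like this is built on HKDF (RFC 5869), which itself is just two HMAC constructions. A self-contained sketch of HKDF-Extract and HKDF-Expand over SHA-256, exercised with the inputs of RFC 5869's first test vector:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract: PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand: iterate T(i) = HMAC(PRK, T(i-1) | info | i) and truncate."""
    okm, t = b"", b""
    for i in range(1, -(-length // 32) + 1):   # ceil(length / HashLen) blocks
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

# Inputs from RFC 5869, test case 1 (SHA-256).
prk = hkdf_extract(bytes(range(13)), b"\x0b" * 22)
okm = hkdf_expand(prk, bytes(range(0xF0, 0xFA)), 42)
print(okm.hex())
```

    TLS 1.3 layers its labeled `Derive-Secret` calls on top of exactly these two primitives, which is one reason its key schedule is amenable to mechanized verification.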

    Evaluation of the Use of the QUIC Protocol on the Internet

    This document evaluates the use of the QUIC protocol, taking some of the most widely used servers on the Internet as a reference. To present the information clearly, a state-of-the-art review is first carried out, analyzing and briefly explaining the operation of the most important protocols that have handled web traffic in recent years (TCP, TLS, HTTP, etc.). This review shows that these protocols impose various performance limitations when used to serve web traffic. It is in this context that the QUIC protocol emerges, to overcome these limitations and to improve parallel content transfer and communication latency. QUIC is a project originated by Google and already in use by that company; however, another version is being standardized by the IETF. Although the two are very similar in operation, they have some differences, which are explained in this work. In addition, the design of the protocol and its various mechanisms for flow control, congestion control, security, and so on are described. Once these concepts are understood, the reader has a good idea of how the protocol works and can understand the problems QUIC solves and the improvements it brings to Internet traffic. Finally, the use of this protocol on the most widely used web servers is evaluated in practice, and different measurements and checks are carried out with the Wireshark traffic analyzer. The use of QUIC in the dominant web browsers is also briefly described.
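    The latency argument summarized above can be made concrete by counting round trips before the first response byte. This is a simplified model (it ignores TCP Fast Open, TLS session resumption over TCP, and lost packets); the 50 ms round-trip time is purely illustrative:

```python
# Round trips to the first HTTP response byte, simplified:
# TCP handshake (1 RTT) + TLS 1.3 handshake (1 RTT) + request/response (1 RTT)
tcp_tls13 = 1 + 1 + 1
# QUIC merges the transport and crypto handshakes (1 RTT) + request/response (1 RTT)
quic = 1 + 1
# QUIC 0-RTT resumption: the request rides in the very first flight
quic_0rtt = 1

rtt_ms = 50  # illustrative round-trip time
print({name: n * rtt_ms for name, n in
       [("tcp+tls1.3", tcp_tls13), ("quic", quic), ("quic-0rtt", quic_0rtt)]})
```

    Under this model QUIC saves one full round trip per fresh connection and two on resumption, which is visible in practice when inspecting handshakes in Wireshark as the document describes.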

    Verified Models and Reference Implementations for the TLS 1.3 Standard Candidate (French version)

    TLS 1.3 is the next version of the Transport Layer Security (TLS) protocol. Its clean-slate design is a reaction both to the increasing demand for low-latency HTTPS connections and to a series of recent high-profile attacks on TLS. The hope is that a fresh protocol with modern cryptography will prevent legacy problems; the danger is that it will expose new kinds of attacks, or reintroduce old flaws that were fixed in previous versions of TLS. After 18 drafts, the protocol is nearing completion, and the working group has appealed to researchers to analyze the protocol before publication. This paper responds by presenting a comprehensive analysis of the TLS 1.3 Draft-18 protocol. We seek to answer three questions that have not been fully addressed in previous work on TLS 1.3: (1) Does TLS 1.3 prevent well-known attacks on TLS 1.2, such as Logjam or the Triple Handshake, even if it is run in parallel with TLS 1.2? (2) Can we mechanically verify the computational security of TLS 1.3 under standard (strong) assumptions on its cryptographic primitives? (3) How can we extend the guarantees of the TLS 1.3 protocol to the details of its implementations? To answer these questions, we propose a methodology for developing verified symbolic and computational models of TLS 1.3 hand-in-hand with a high-assurance reference implementation of the protocol. We present symbolic ProVerif models for various intermediate versions of TLS 1.3 and evaluate them against a rich class of attacks to reconstruct both known and previously unpublished vulnerabilities that influenced the current design of the protocol. We present a computational CryptoVerif model for TLS 1.3 Draft-18 and prove its security. We present RefTLS, an interoperable implementation of TLS 1.0-1.3 and automatically analyze its protocol core by extracting a ProVerif model from its typed JavaScript code.

    Using Large-Scale Empirical Methods to Understand Fragile Cryptographic Ecosystems

    Cryptography is a key component of the security of the Internet. Unfortunately, the process of using cryptography to secure the Internet is fraught with failure. Cryptography is often fragile, as a single mistake can have devastating consequences for security, and this fragility is further complicated by the diverse and distributed nature of the Internet. This dissertation shows how to use empirical methods in the form of Internet-wide scanning to study how cryptography is deployed on the Internet, and shows that this methodology can discover vulnerabilities and gain insights into fragile cryptographic ecosystems that are not possible without an empirical approach. I introduce improvements to ZMap, the fast Internet-wide scanner, that allow it to fully utilize a 10 GigE connection, and then use Internet-wide scanning to measure cryptography on the Internet. First, I study how Diffie-Hellman is deployed, and show that implementations are fragile and not resilient to small subgroup attacks. Next, I measure the prevalence of "export-grade" cryptography. Although regulations limiting the strength of cryptography that could be exported from the United States were lifted in 1999, Internet-wide scanning shows that support for various forms of export cryptography remains widespread. I show how purposefully weakening TLS to comply with these export regulations led to the FREAK, Logjam, and DROWN vulnerabilities, each of which exploits obsolete export-grade cryptography to attack modern clients. I conclude by discussing how empirical cryptography has improved protocol design, and I present further opportunities for empirical research in cryptography.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149809/1/davadria_1.pd
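    The small-subgroup fragility found in deployed Diffie-Hellman comes from skipping the standard public-key validation check: for a prime-order-q subgroup of Z_p*, require 1 < y < p-1 and y^q ≡ 1 (mod p). A sketch with toy parameters (p=23, q=11, g=2, chosen only for illustration; real groups use 2048-bit or larger primes):

```python
def valid_dh_public(y: int, p: int, q: int) -> bool:
    """Reject small-subgroup elements: require 1 < y < p-1 and y^q == 1 mod p."""
    return 1 < y < p - 1 and pow(y, q, p) == 1

p, q, g = 23, 11, 2        # toy group: g = 2 generates an order-11 subgroup mod 23
honest = pow(g, 7, p)      # a legitimate public value, inside the order-q subgroup
attack = p - 1             # order-2 element an attacker sends to leak key bits

print(valid_dh_public(honest, p, q), valid_dh_public(attack, p, q))
```

    Implementations that omit this check compute a shared secret confined to a tiny subgroup when handed a value like `p - 1`, which is exactly the attack surface the scans in this dissertation measured.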

    Modeling Advanced Security Aspects of Key Exchange and Secure Channel Protocols

    Secure communication has become an essential ingredient of our daily life. Mostly unnoticed, cryptography is protecting our interactions today when we read emails or do banking over the Internet, withdraw cash at an ATM, or chat with friends on our smartphone. Security in such communication is enabled through two components. First, two parties that wish to communicate securely engage in a key exchange protocol in order to establish a shared secret key known only to them. The established key is then used in a follow-up secure channel protocol in order to protect the actual data communicated against eavesdropping or malicious modification on the way. In modern cryptography, security is formalized through abstract mathematical security models which describe the considered class of attacks a cryptographic system is supposed to withstand. Such models enable formal reasoning that no attacker can, in reasonable time, break the security of a system assuming the security of its underlying building blocks or that certain mathematical problems are hard to solve. Given that the assumptions made are valid, security proofs in that sense hence rule out a certain class of attackers with well-defined capabilities. In order for such results to be meaningful for the actually deployed cryptographic systems, it is of utmost importance that security models capture the system's behavior and threats faced in that 'real world' as accurately as possible, yet not be overly demanding in order to still allow for efficient constructions. If a security model fails to capture a realistic attack in practice, such an attack remains viable on a cryptographic system despite a proof of security in that model, at worst voiding the system's overall practical security. In this thesis, we reconsider the established security models for key exchange and secure channel protocols. 
To this end, we study novel and advanced security aspects that have been introduced in recent designs of some of the most important security protocols deployed, or that escaped a formal treatment so far. We introduce enhanced security models in order to capture these advanced aspects and apply them to analyze the security of major practical key exchange and secure channel protocols, either directly or through comparatively close generic protocol designs. Key exchange protocols have so far always been understood as establishing a single secret key, and then terminating their operation. This changed in recent practical designs, specifically of Google's QUIC ("Quick UDP Internet Connections") protocol and the upcoming version 1.3 of the Transport Layer Security (TLS) protocol, the latter being the de-facto standard for security protocols. Both protocols derive multiple keys in what we formalize in this thesis as a multi-stage key exchange (MSKE) protocol, with the derived keys potentially depending on each other and differing in cryptographic strength. Our MSKE security model allows us to capture such dependencies and differences between all keys established in a single framework. In this thesis, we apply our model to assess the security of both the QUIC and the TLS 1.3 key exchange design. For QUIC, we are able to confirm the intended overall security but at the same time highlight an undesirable dependency between the two keys QUIC derives. For TLS 1.3, we begin by analyzing the main key exchange mode as well as a reduced resumption mode. Our analysis attests that TLS 1.3 achieves strong security for all keys derived without undesired dependencies, in particular confirming several of this new TLS version's design goals. 
We then also compare the QUIC and TLS 1.3 designs with respect to a novel 'zero round-trip time' key exchange mode establishing an initial key with minimal latency, studying how differences in these designs affect the achievable key exchange security. As this thesis' last contribution in the realm of key exchange, we formalize the notion of key confirmation which ensures one party in a key exchange execution that the other party indeed holds the same key. Despite being frequently mentioned in practical protocol specifications, key confirmation was never comprehensively treated so far. In particular, our formalization exposes an inherent, slight difference in the confirmation guarantees both communication partners can obtain and enables us to analyze the key confirmation properties of TLS 1.3. Secure channels have so far been modeled as protecting a sequence of distinct messages using a single secret key. Our first contribution in the realm of channels originates from the observation that, in practice, secure channel protocols like TLS actually do not allow an application to transmit distinct, or atomic, messages. Instead, they provide applications with a streaming interface to transmit a stream of bits without any inherent demarcation of individual messages. Necessarily, the security guarantees of such an interface differ significantly from those considered in cryptographic models so far. In particular, messages may be fragmented in transport, and the recipient may obtain the sent stream in a different fragmentation, which has in the past led to confusion and practical attacks on major application protocol implementations. In this thesis, we formalize such stream-based channels and introduce corresponding security notions of confidentiality and integrity capturing the inherently increased complexity. 
We then present a generic construction of a stream-based channel based on authenticated encryption with associated data (AEAD) that achieves the strongest security notions in our model and serves as validation of the similar TLS channel design. We also study the security of such applications whose messages are inherently atomic and which need to safely transport these messages over a streaming, i.e., possibly fragmenting, channel. Formalizing the desired security properties in terms of confidentiality and integrity in such a setting, we investigate and confirm the security of the widely adopted approach to encode the application's messages into the continuous data stream. Finally, we study a novel paradigm employed in the TLS 1.3 channel design, namely to update the keys used to secure a channel during that channel's lifetime in order to strengthen its security. We propose and formalize the notion of multi-key channels deploying such sequences of keys and capture their advanced security properties in a hierarchical framework of confidentiality and integrity notions. We show that our hierarchy of notions naturally connects to the established notions for single-key channels and instantiate its strongest security notions with a generic AEAD-based construction. Being comparatively close to the TLS 1.3 channel protocol, our construction furthermore enables a comparative design discussion
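    The key-update paradigm formalized for multi-key channels works like TLS 1.3's: each traffic secret is derived one-way from its predecessor, so compromising the current key does not reveal earlier ones. A sketch of such a ratchet (the label string is illustrative, not the exact TLS 1.3 HkdfLabel encoding):

```python
import hashlib
import hmac

def next_secret(secret: bytes) -> bytes:
    """One-way key update: new secret = HMAC-SHA256(old secret, label)."""
    return hmac.new(secret, b"traffic upd", hashlib.sha256).digest()

s0 = bytes(32)             # illustrative initial traffic secret
s1 = next_secret(s0)
s2 = next_secret(s1)
print(s0 != s1 and s1 != s2)   # each key-update epoch uses a fresh key
```

    Because HMAC is one-way, recovering `s2` tells an attacker nothing about `s0` or `s1`, which is the forward-security property the hierarchical confidentiality and integrity notions above are designed to capture.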