35 research outputs found

    HTTP/2: Analysis and measurements

    The upgrade of HTTP, the protocol that powers the web, was published as an RFC in May 2015. HTTP/2 aims to improve the user experience by solving well-known problems of HTTP/1.1 and by introducing new features. The main goal of this project is to study the HTTP/2 protocol, its support in software, its deployment and implementation on the Internet, and how the network reacts to an upgrade of the existing protocol. To shed light on these questions we build two experiments. We build a crawler to monitor HTTP/2 adoption across the Internet using the Alexa top 1 million websites as a sample. We find that 22,653 servers announce support for HTTP/2, but only 10,162 websites are actually served over it. Support for HTTP/2 Upgrade is minimal: just 16 servers support it, and only 10 of them load the content of their websites over HTTP/2 on plain TCP. Motivated by those numbers, we investigate how the new protocol behaves with the middleboxes along the network path. We build a platform to evaluate it across 67 different ports for TLS connections, HTTP/2 Upgrade and plain TCP. Considering both fixed-line and mobile networks, we use a crowdsourcing platform to recruit users. Middleboxes affect HTTP/2, especially on port 80 for plain TCP connections. HTTP/2 Upgrade requests are affected by proxies, failing to upgrade to the new protocol. Over TLS on port 443, on the other hand, all connections are successful.
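As an illustration of how such a crawler can probe HTTP/2 support, the following minimal Python sketch checks whether a server negotiates HTTP/2 via TLS ALPN. The host list and timeout are placeholders; this is not the crawler used in the study.

```python
# Minimal sketch (not the authors' crawler): probe HTTP/2 support via TLS ALPN.
import socket
import ssl

def supports_h2(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if the server negotiates 'h2' through ALPN during the TLS handshake."""
    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=timeout) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() == "h2"

if __name__ == "__main__":
    for site in ["www.example.com"]:  # hypothetical sample; the study used the Alexa top 1M
        try:
            print(site, "h2" if supports_h2(site) else "no h2")
        except (OSError, ssl.SSLError) as exc:
            print(site, "error:", exc)
```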

    Security Analysis of HTTP/2 Protocol

    Internet traffic today consists largely of Hyper Text Transfer Protocol (HTTP) traffic. The first version of the HTTP protocol was standardized in 1991, followed by a major upgrade in May 2015. HTTP/2 is the next generation of the HTTP protocol; it promises to resolve the shortcomings of HTTP/1.1 and provides features that greatly improve its performance. There has been a 1,000% increase in the cybercrime rate over the past two years. Since HTTP/2 is a relatively new protocol with a very high acceptance rate (around 68% of all HTTPS traffic), there is an urgent need to analyze this protocol from a security vulnerability perspective. In this thesis, I systematically analyze the security concerns in the HTTP/2 protocol, starting from the specifications, testing all variations of frames (the basic entity in HTTP/2) and every newly introduced feature. I also propose the Context Aware fuzz Testing for Binary communication protocols methodology. Using this testing methodology, I was able to discover a serious security vulnerability through which an attacker can carry out a denial-of-service attack on Apache.
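To give a flavour of frame-level fuzzing, the sketch below sends the HTTP/2 connection preface and a well-formed SETTINGS frame over cleartext h2c, followed by a frame whose header fields are mutated. The local test server address and the mutation strategy are illustrative assumptions; this is not the thesis' Context Aware fuzz Testing tool.

```python
# Illustrative sketch only: mutate the 9-byte HTTP/2 frame header and send it after
# the connection preface over cleartext h2c (prior knowledge) to a local test server.
import random
import socket
import struct

PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def frame(length: int, ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    # HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id.
    header = struct.pack(">I", length)[1:] + struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def fuzz_once(host: str = "127.0.0.1", port: int = 8080) -> bytes:
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(PREFACE)
        s.sendall(frame(0, 0x4, 0, 0, b""))                      # empty, well-formed SETTINGS
        payload = bytes(random.randrange(256) for _ in range(8))
        bogus_len = random.choice([0, len(payload), 0xFFFFFF])   # length field may lie
        bogus_type = random.randrange(256)                       # unknown or invalid frame types
        s.sendall(frame(bogus_len, bogus_type, random.randrange(256), random.randrange(2**31), payload))
        try:
            return s.recv(4096)                                  # GOAWAY, RST_STREAM, or nothing
        except socket.timeout:
            return b""

if __name__ == "__main__":
    print(fuzz_once())
```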

    Analysis of QUIC Session Establishment and its Implementations

    In recent years, the major web companies have been working to improve the user experience and to secure the communications between their users and the services they provide. QUIC is such an initiative, and it is currently being designed by the IETF. In a nutshell, QUIC originally intended to merge features from TCP/SCTP, TLS 1.3 and HTTP/2 into one big protocol. The current specification proposes a more modular definition, where each feature (transport, cryptography, application, packet retransmission) is defined in a separate internet draft. We studied the QUIC internet drafts related to the transport and cryptographic layers, from version 18 to version 23, and focused on connection establishment with existing implementations. We propose a first implementation of QUIC connection establishment using Scapy, which allowed us to form a critical opinion of the current specification, with a special focus on the difficulties it induces in implementations. With our simple stack, we also tested the behaviour of the existing implementations with regard to security-related constraints (explicit or implicit) from the internet drafts. This gives us an interesting view of the state of QUIC implementations.
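As a flavour of the low-level work involved, the sketch below parses the version-independent fields of a QUIC long header (first byte, version, connection IDs) as defined by the QUIC invariants. It is a simplified illustration with a hand-crafted sample packet, not the Scapy-based stack described in the paper.

```python
# Simplified sketch: parse the invariant fields of a QUIC long-header packet
# (first byte, 32-bit version, DCID, SCID). Not the Scapy stack from the paper.
from dataclasses import dataclass

@dataclass
class LongHeader:
    first_byte: int
    version: int
    dcid: bytes
    scid: bytes

def parse_long_header(data: bytes) -> LongHeader:
    if len(data) < 7 or not data[0] & 0x80:
        raise ValueError("not a QUIC long-header packet")
    version = int.from_bytes(data[1:5], "big")
    offset = 5
    dcid_len = data[offset]; offset += 1
    dcid = data[offset:offset + dcid_len]; offset += dcid_len
    scid_len = data[offset]; offset += 1
    scid = data[offset:offset + scid_len]
    return LongHeader(data[0], version, dcid, scid)

# Example: a hand-crafted header with version 0x00000001, a 4-byte DCID and an empty SCID.
sample = bytes([0xC0]) + (1).to_bytes(4, "big") + bytes([4]) + b"\x01\x02\x03\x04" + bytes([0])
print(parse_long_header(sample))
```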

    Informing protocol design through crowdsourcing measurements

    Middleboxes, such as proxies, firewalls and NATs, play an important role in the modern Internet ecosystem. On one hand, they perform advanced functions, e.g. traffic shaping, security or enhancing application performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation, as they limit deviation from deployed protocols. It is therefore essential, when designing a new protocol, to first understand its interaction with the elements of the path. The emerging area of crowdsourcing solutions can help to shed light on this issue. Such an approach allows us to reach large and diverse sets of users, as well as different types of devices and networks, to perform Internet measurements. In this thesis, we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus beyond the human element and using crowdsourced large-scale measurement platforms. We consider specific use cases, namely pervasive encryption in the modern Internet, TCP Fast Open and ECN++. We use these cases to advance the global understanding of whether wide adoption of encryption is possible in today's Internet and whether encryption is necessary to guarantee the proper functioning of HTTP/2. We target ECN and particularly ECN++, given its succession of deployment problems, and measure ECN deployment over mobile as well as fixed networks. In the process, we discovered some bad news for the base ECN protocol: more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes the results of other, more in-depth tests to check whether servers that claim to support ECN actually respond correctly to explicit congestion feedback, including some surprising congestion behaviour unrelated to ECN. This thesis also explores the possible causes that ossify the modern Internet and hinder innovation. Network Address Translators (NATs) are commonplace in today's Internet. It is fair to say that most residential and mobile users are connected to the Internet through one or more NATs. As with any other technology, NAT presents upsides and downsides. Probably the most acknowledged downside of NAT is that it makes it harder for some applications, such as peer-to-peer applications and gaming, to function properly. This is partially due to the nature of the NAT technology, but also due to the diversity of behaviours of the different NAT implementations deployed in the Internet. Understanding the properties of the currently deployed NAT base provides useful input for application and protocol developers regarding what to expect when deploying new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based measurement methodology. We also perform large-scale active measurement campaigns to detect CGNs in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine, from within residential networks, the type of upstream network address translation, namely NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier Grade NAT). We deploy Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and also in the RIPE Atlas testbed.
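The kind of active ECN test mentioned above can be approximated in a few lines of Scapy: send a SYN with the ECE and CWR flags set and check whether the SYN-ACK carries ECE, the classic ECN negotiation signal. This is a simplified sketch assuming root privileges and a reachable placeholder target, not the measurement code used in the thesis.

```python
# Rough sketch of an ECN negotiation probe (requires root; not the thesis' tooling).
from scapy.all import IP, TCP, sr1

def ecn_negotiated(target: str, dport: int = 80) -> bool:
    # A SYN with ECE+CWR set signals "ECN-capable"; an ECN-capable server replies SYN-ACK with ECE.
    syn = IP(dst=target) / TCP(dport=dport, sport=40000, flags="SEC", seq=1000)
    reply = sr1(syn, timeout=3, verbose=False)
    if reply is None or not reply.haslayer(TCP):
        return False
    flags = reply[TCP].flags
    return bool(flags & 0x12 == 0x12) and bool(flags & 0x40)  # SYN+ACK present and ECE set

if __name__ == "__main__":
    print(ecn_negotiated("example.org"))  # hypothetical target
```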
A part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop a measurement tool, called CGNWatcher, that executes a number of active tests to fully characterize CGN deployments in MNOs. The CGNWatcher tool systematically tests more than 30 behavioural requirements of NATs defined by the Internet Engineering Task Force (IETF), as well as multiple CGN behavioural metrics. We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the real CGN deployments of the MNOs serving the MONROE nodes. We perform a large measurement campaign using the tools described above, recruiting over 6,000 users from 65 different countries and over 280 ISPs. We validate our results with the ISPs at the IP level, comparing them against the ground truth we collected. To the best of our knowledge, this represents the largest active measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks to date. As part of the thesis, we also characterize roaming across Europe. The goal of this experiment is to understand whether an MNO changes its CGN configuration while roaming. To this end, we run a series of measurements that enable us to identify the roaming setup, infer the network configuration for the 16 MNOs that we measure, and quantify the end-user performance for the roaming configurations we detect. We build a unique roaming measurement platform deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content in the original country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive. The results included in this thesis provide useful input for application and protocol designers, ISPs and researchers that aim to make their applications and protocols work across the modern Internet.
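One core signal used to detect carrier-grade NAT is whether hops beyond the home gateway fall into RFC 6598 shared address space (100.64.0.0/10) or RFC 1918 private space. The sketch below illustrates that check over a list of traceroute hop addresses; the hop list and the assumption that the first hop is the home gateway are illustrative, and this is not the Revelio or CGNWatcher code.

```python
# Illustrative check (not Revelio/CGNWatcher): flag traceroute hops beyond the home
# gateway that sit in RFC 1918 private or RFC 6598 shared (CGN) address space.
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
RFC6598 = ipaddress.ip_network("100.64.0.0/10")

def classify_hop(hop: str) -> str:
    addr = ipaddress.ip_address(hop)
    if addr in RFC6598:
        return "shared (likely CGN)"
    if any(addr in net for net in RFC1918):
        return "private"
    return "public"

def suggests_cgn(hops: list[str]) -> bool:
    # Assume hops[0] is the home gateway; non-public space further upstream hints at ISP-level NAT.
    return any(classify_hop(h) != "public" for h in hops[1:])

if __name__ == "__main__":
    example_hops = ["192.168.1.1", "100.72.13.5", "62.42.230.1"]  # hypothetical traceroute output
    for h in example_hops:
        print(h, classify_hop(h))
    print("CGN suspected:", suggests_cgn(example_hops))
```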

    Encrypted Web Traffic Classification Using Deep Learning

    Traffic classification is essential in network management for operations ranging from capacity planning, performance monitoring, volumetry, and resource provisioning, to anomaly detection and security. Recently, it has become increasingly challenging with the widespread adoption of encryption in the Internet, e.g., as a de facto standard in the HTTP/2 and QUIC protocols. In the current state of encrypted traffic classification using Deep Learning (DL), we identify fundamental issues in the way it is typically approached. For instance, although complex DL models with millions of parameters are being used, these models implement a relatively simple logic based on certain header fields of the TLS handshake, limiting model robustness to future versions of encrypted protocols. Furthermore, encrypted traffic is often treated as any other raw input for DL, while crucial domain-specific considerations exist that are commonly ignored. In this thesis, we design a novel feature engineering approach that generalizes well for encrypted web protocols, and develop a neural network architecture based on stacked Long Short-Term Memory (LSTM) layers and Convolutional Neural Networks (CNN) that works very well with our feature design. We evaluate our approach on a real-world traffic dataset from a major ISP and Mobile Network Operator. We achieve an accuracy of 95% in service-level classification with less raw traffic and a smaller number of parameters, outperforming a state-of-the-art method with nearly 50% fewer misclassifications. We show that our DL model generalizes for different classification objectives and encrypted web protocols. We also evaluate our approach on a public QUIC dataset with finer, application-level labeling granularity, achieving an overall accuracy of 99%.
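As a rough illustration of the kind of architecture described (a CNN front end followed by stacked LSTM layers over per-packet feature sequences), here is a minimal Keras sketch. The input shape, layer sizes and class count are placeholder assumptions, not the thesis' actual model or feature design.

```python
# Minimal Keras sketch of a CNN + stacked-LSTM traffic classifier.
# Shapes, layer sizes and the number of classes are illustrative placeholders.
import numpy as np
from tensorflow.keras import Input, layers, models

SEQ_LEN, N_FEATURES, N_CLASSES = 32, 6, 10  # e.g. 32 packets x 6 per-packet features

model = models.Sequential([
    Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(128, return_sequences=True),   # stacked LSTM layers
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Train on dummy data just to show the expected tensor shapes.
X = np.random.rand(256, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(256,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)  # (1, N_CLASSES)
```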

    Experimental Validation of TRAC-RELAP Advanced Computational Engine (TRACE) for Simplified, Integral, Rapid-Condensation-Driven Transient

    The purpose of the present work is to experimentally validate the TRACE (TRAC-RELAP Advanced Computational Engine) plug-in of the Nuclear Regulatory Commission's (NRC's) Symbolic Nuclear Analysis Package (SNAP) for rapid-condensation transients, which are challenging for the code. The experimental phase began by constructing and calibrating a simplified, integral, condensation-driven transient apparatus named the UMD-USNA Near One-dimensional Transient Experimental Assembly (MANOTEA). Then, a series of five well-defined transients was run. Data from the facility included pressure, differential pressure, and temperature, all as a function of time. Using the data, mass and energy balances were closed for each experiment. Relevant characteristics of the data included inverted thermal stratification and nozzle-dependent transients controlled by an energy partition. A common transient sequence was identified and served as the fundamental comparison for evaluating TRACE. The second phase began by developing a one-dimensional Base TRACE Model. Output from the Base Model was found to over-estimate the pressures and temperatures observed in the experiment. This model always predicted that the condenser pipe would fill, and that transients would terminate with a non-physical discontinuity. In an effort to improve the model, a list of phenomena was generated and then mapped to TRACE parameters. The goal was to find unique ways to capture the energy partition and prevent the condenser from filling. Over 250 TRACE cases were run, and the effective and physically justifiable parameters were incorporated into a three-dimensional Final TRACE Model. The Final Model incorporated non-condensable gases, which provided a mechanism to terminate the transients smoothly. Replacing the PIPE component with a VESSEL component provided a way to model the energy partition. The Final Model under-predicted the trends observed in the experiments. Thus, the two models were able to bracket the experimental data. Comparing TRACE output to the data led to the conclusion that the code's condensation model is over-stated. TRACE's predecessors were also known to have over-stated condensation models. As a result, TRACE will over-predict condensation-induced fluid motion when modeling several thermal-hydraulic situations important to safe nuclear reactor operation. Future work could focus on developing a NOZZLE component for TRACE, comparing the subsequent output to MANOTEA data, and improving the TRACE condensation model.
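For context, closing an energy balance on the condenser control volume means accounting for all energy flows over the transient. In its generic transient control-volume form (an illustrative textbook expression, not the specific MANOTEA bookkeeping, which is not given here):

```latex
% Generic transient control-volume energy balance (illustrative form only):
% rate of change of stored energy = inflow - outflow + heat added - work extracted
\frac{dE_{cv}}{dt} = \dot{m}_{in}\, h_{in} - \dot{m}_{out}\, h_{out} + \dot{Q} - \dot{W}
```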

    Dissecting HTTP/2 and QUIC: measurement, evaluation and optimization

    The Internet is evolving from the perspective of both usage and connectivity. The meteoric rise of smartphones has not only facilitated connectivity for the masses, it has also increased their appetite for more responsive applications. The widespread availability of wireless networks has caused a paradigm shift in the way we access the Internet. This shift has resulted in a new trend where traditional applications are being migrated to the cloud, e.g., Microsoft Office 365, Google Apps, etc. As a result, modern web content has become extremely complex and requires efficient web delivery protocols to maintain users' experience regardless of the technology they use to connect to the Internet and despite variations in the quality of users' Internet connectivity. To achieve this goal, efforts have been put into optimizing existing web and transport protocols, designing new low-latency transport protocols and introducing enhancements in the WiFi MAC layer. In recent years, several improvements have been introduced in the HTTP protocol, resulting in the HTTP/2 standard, which allows more efficient use of network resources and a reduced perception of latency. The QUIC transport protocol is another example of these ambitious efforts. Initially developed by Google as an experiment, the protocol has already made phenomenal strides, thanks to its support in Google's servers and the Chrome browser. However, there is a lack of sufficient understanding and evaluation of these new protocols across a range of environments, which opens new opportunities for research in this direction. This thesis provides a comprehensive study on the behavior, usage and performance of HTTP/2 and QUIC, and advances them by implementing several optimizations. First, in order to understand the behavior of HTTP/1 and HTTP/2 traffic, we analyze datasets of passive measurements collected in various operational networks and discover that they have very different characteristics. This calls for a reappraisal of traffic models, as well as HTTP traffic simulation and benchmarking approaches, which were built on an understanding of HTTP/1 traffic only and may no longer be valid for modern web traffic. We develop a machine learning-based method, compatible with existing flow monitoring systems, for the classification of encrypted web traffic into the appropriate HTTP versions. This will enable network administrators to identify HTTP/1 and HTTP/2 flows for network management tasks such as traffic shaping or prioritization. We also investigate the behavior of HTTP/2 stream multiplexing in the wild. We devise a methodology for the analysis of large datasets of network traffic, comprising over 200 million flows, to quantify the usage of HTTP/2 multiplexing in the wild and to understand its implications for network infrastructure. Next, we show with the help of emulations that HTTP/2 exhibits poor performance in adverse scenarios such as high packet loss or network congestion. We confirm that the use of a single connection sometimes impairs the application performance of HTTP/2, and we implement an optimization in the Chromium browser to make it more robust in such scenarios. Finally, we collect and analyze QUIC and TCP traffic in a production wireless mesh network. Our results show that while QUIC outperforms TCP in fixed networks, it exhibits significantly lower performance than TCP when there are wireless links in the end-to-end path.
To see why this is the case, we carefully examine how delay variations, which are common in wireless networks, impact the congestion control and loss detection algorithms of QUIC. We also explore the interaction of QUIC transport with advanced link layer features of WiFi such as frame aggregation. We fine-tune QUIC based on our findings and show a notable increase in performance.
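One way to picture the flow-level classification step mentioned above is a standard supervised classifier over per-flow statistics. The sketch below uses synthetic data and a random forest purely for illustration; the features, labels and model are assumptions, not the thesis' actual method.

```python
# Hypothetical sketch: classify encrypted flows as HTTP/1 vs HTTP/2 from flow-level
# statistics (durations, packet counts, sizes). Features and data here are synthetic;
# this is not the thesis' model or feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic flow records: [duration_s, pkts_up, pkts_down, bytes_up, bytes_down, mean_pkt_size]
X = rng.random((n, 6))
y = rng.integers(0, 2, size=n)  # 0 = HTTP/1, 1 = HTTP/2 (real labels would come from ALPN ground truth)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on synthetic data:", accuracy_score(y_te, clf.predict(X_te)))
```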

    Native Web Communication Protocols and Their Effects on the Performance of Web Services and Systems

    Native Web communication protocols are the pivotal components of Web services, applications and systems. In particular, HTTP is a de facto protocol standard used in almost all Web services and systems. Consequently, it is one of the crucial protocols responsible for the performance of Web services and systems. HTTP/1.1 has been successfully deployed in Web services and systems for the last two decades. However, one of the most significant issues with HTTP/1.1 is its round-trip time and the resulting Web latency. To resolve this issue, two successor protocols, SPDY and HTTP/2, have been developed recently; some studies suggest that SPDY improves the performance of Web services and systems, whilst others do not find significant improvements. HTTP/2 is a relatively new protocol and has yet to be tested with any rigour. Therefore, it is important to investigate the effects of these two enhanced protocols, SPDY and HTTP/2, on the performance of Web services and systems. This paper conducts a number of practical investigations to evaluate the performance of Web services and systems with and without the support of the SPDY and HTTP/2 protocols at the client and server. This study investigates the impact of SPDY and HTTP/2 on the overall performance of Web services and systems from the end-user's perspective.
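A minimal way to reproduce this kind of client-side comparison is to fetch the same URL with HTTP/1.1 and with HTTP/2 and compare wall-clock times. The sketch below uses the httpx library (assuming it is installed with its optional http2 extra) and a placeholder URL; it is not the measurement harness used in the paper.

```python
# Simple client-side timing comparison of HTTP/1.1 vs HTTP/2 for the same URL.
# Requires `pip install "httpx[http2]"`; the URL is a placeholder, not the paper's testbed.
import time
import httpx

def timed_fetch(url: str, http2: bool, repeats: int = 5) -> float:
    """Average wall-clock time per request, reusing one connection per protocol version."""
    with httpx.Client(http2=http2) as client:
        start = time.perf_counter()
        for _ in range(repeats):
            resp = client.get(url)
            resp.raise_for_status()
        return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    url = "https://www.example.com/"   # hypothetical target
    t1 = timed_fetch(url, http2=False)
    t2 = timed_fetch(url, http2=True)
    print(f"HTTP/1.1 avg: {t1*1000:.1f} ms   HTTP/2 avg: {t2*1000:.1f} ms")
```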

    The impact of passive safety systems on desirability of advanced light water reactors

    This work investigates whether advanced light water reactor designs with passive safety systems are more desirable than advanced reactor designs with active safety systems, from the point of view of uncertainty in the performance of the safety systems as well as the economic implications of the passive safety systems. Two advanced pressurized water reactors and two advanced boiling water reactors, one representing passive reactors and the other active reactors for each type of coolant, are compared in terms of operation and responses to accidents as reported by the vendors. Considering a simplified decay heat removal system that utilizes an isolation condenser, the uncertainty in the main parameters affecting system performance during a reactor isolation accident is characterized both when the system relies on natural convection and when it relies on a pump to remove the core heat. It is found that the passive system is less certain in its performance if the pump of the active system is tested at least once every five months. In addition, a cost model is used to evaluate the economic differences and benefits between the active and passive reactors. It is found that while the passive systems could have the benefit of fewer components to inspect and maintain during operation, they do suffer from a larger uncertainty about the time that would be required for their licensing, due to more limited data on the reliability of their operation. Finally, a survey among nuclear energy experts with a variety of affiliations was conducted to determine the current professional attitude towards these two competing nuclear design options. The results of the survey show that reactors with passive safety systems are more desirable among the surveyed expert groups. The perceived advantages of passive systems are an increase in plant safety with a decrease in cost.