13 research outputs found

    Checking-in on Network Functions

    When programming network functions, changes within a packet tend to have consequences: side effects that must be accounted for by network programmers or administrators via ad hoc logic and an innate understanding of dependencies. Examples include updating checksums when a packet's contents have been modified, or adjusting the Payload Length field of an IPv6 header when another header is added or updated within the packet. While static typing captures interface specifications and how packet contents should behave, it does not enforce precise invariants around runtime dependencies like the examples above. Instead, during the design phase of network functions, programmers should be given an easier way to specify checks up front, without having to account for and keep track of these consequences at each and every step of the development cycle. In keeping with this view, we present a unique approach for adding and generating both static checks and dynamic contracts for specifying and checking packet processing operations. We develop our technique within an existing framework called NetBricks and demonstrate how our approach simplifies and checks common dependent packet and header processing logic that other systems take for granted, all without adding much overhead during development. (Comment: ANRW 2019 ~ https://irtf.org/anrw/2019/program.htm)
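    To make the dependency concrete, here is a minimal Python sketch of the kind of side effect the paper is about: inserting an IPv6 extension header invalidates the fixed header's Payload Length field, so the mutation and the patch must happen together. The field offsets follow RFC 8200; the function names are illustrative, and NetBricks itself is a Rust framework, so this is not its API.

```python
# Illustrative sketch (assumed names; not the NetBricks API, which is Rust).
# Inserting an IPv6 extension header silently invalidates the fixed header's
# Payload Length field, so the mutation and the fix-up must travel together.
import struct

def ipv6_payload_length(packet: bytes) -> int:
    """Read the 16-bit Payload Length field at offset 4 of an IPv6 header."""
    return struct.unpack_from("!H", packet, 4)[0]

def insert_ext_header(packet: bytes, ext: bytes) -> bytes:
    """Insert an extension header after the fixed 40-byte IPv6 header and
    patch Payload Length (Next Header chaining is omitted for brevity)."""
    new_len = ipv6_payload_length(packet) + len(ext)
    patched = bytearray(packet[:40] + ext + packet[40:])
    struct.pack_into("!H", patched, 4, new_len)
    # The invariant a generated contract would enforce at runtime:
    assert ipv6_payload_length(bytes(patched)) == len(patched) - 40
    return bytes(patched)
```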

    Is CoAP Congestion Safe?

    A huge number of Internet of Things (IoT) devices are expected to be connected to the Internet in the near future. The Constrained Application Protocol (CoAP) has been increasingly deployed for wide-area IoT communication. It is crucial to understand how the specified CoAP congestion control algorithms perform. We seek an answer to this question by performing an extensive evaluation of the existing IETF CoAP congestion control proposals. We find that they fail to address congestion properly, particularly in the presence of a bufferbloated bottleneck buffer. We also fix the problem with a few simple modifications and demonstrate their effectiveness. (Peer reviewed)
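    For context, the baseline these proposals replace is the default retransmission behavior of RFC 7252, sketched below in Python: a fixed, RTT-blind binary exponential backoff, which is part of why CoAP struggles behind a bufferbloated bottleneck (constants are the RFC defaults).

```python
# Default CoAP retransmission timing per RFC 7252: randomized initial timeout,
# doubled on every retransmission, with no reaction to measured RTT.
import random

ACK_TIMEOUT = 2.0        # seconds (RFC 7252 default)
ACK_RANDOM_FACTOR = 1.5  # RFC 7252 default
MAX_RETRANSMIT = 4       # RFC 7252 default

def retransmission_schedule():
    """Timeouts used for one confirmable message before giving up."""
    timeout = random.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    schedule = []
    for _ in range(MAX_RETRANSMIT + 1):  # initial transmission + 4 retries
        schedule.append(round(timeout, 2))
        timeout *= 2                     # binary exponential backoff
    return schedule

print(retransmission_schedule())  # e.g. [2.7, 5.4, 10.8, 21.6, 43.2]
```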

    Measuring Web Speed From Passive Traces

    Understanding the Quality of Experience (QoE) of web browsing is key to optimizing services and keeping users' loyalty. This is crucial for both Content Providers and Internet Service Providers (ISPs). Quality is subjective, and the complexity of today's pages challenges its measurement. OnLoad time and SpeedIndex are notable attempts to quantify web performance with objective metrics. However, these metrics can only be computed by instrumenting the browser and, thus, are not available to ISPs. We designed PAIN: PAssive INdicator for ISPs. It is an automatic system to monitor the performance of web pages from passive measurements. It is open source and available for download. It leverages only flow-level and DNS measurements, which are still possible in the network despite the deployment of HTTPS. With unsupervised learning, PAIN automatically creates a machine learning model from the timeline of requests issued by browsers to render web pages, and uses it to measure web performance in real time. We compared PAIN to indicators based on in-browser instrumentation and found strong correlations between the approaches. PAIN correctly highlights worsening network conditions and provides visibility into web performance. We let PAIN run on a real ISP network and found that it is able to pinpoint performance variations across time and groups of users.
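    As a toy illustration of what a flow-level timeline can reveal (this is not PAIN's actual model, which is learned; it is a hand-rolled stand-in under an assumed threshold): the burst of flows a page visit triggers ends with a quiet gap, and the span of that burst approximates an OnLoad-like load time.

```python
# Toy passive indicator (assumed threshold; not PAIN's learned model): the
# flows opened while rendering a page arrive in a burst, and the burst's span
# is a crude OnLoad-like metric visible from flow-level traces alone.
GAP = 1.5  # seconds of silence taken to mean rendering finished (assumption)

def passive_load_time(flow_starts):
    """flow_starts: sorted timestamps of new flows seen for one client."""
    first = last = flow_starts[0]
    for t in flow_starts[1:]:
        if t - last > GAP:   # burst over: later flows are unrelated activity
            break
        last = t
    return last - first

print(passive_load_time([0.0, 0.2, 0.4, 0.9, 1.1, 5.0]))  # -> 1.1
```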

    ECN with QUIC: Challenges in the Wild

    TCP and QUIC can both leverage ECN to avoid congestion loss and its retransmission overhead. However, both protocols require support from their remote endpoints, and it took two decades after the initial standardization of ECN for TCP to reach 80% or more ECN support in the wild. In contrast, the QUIC standard mandates ECN support, but there are notable ambiguities that make it unclear if and how ECN can actually be used with QUIC on the Internet. Hence, in this paper, we analyze ECN support with QUIC in the wild: we conduct repeated measurements on more than 180M domains to identify HTTP/3 websites and analyze the underlying QUIC connections w.r.t. ECN support. We only find 20% of QUIC hosts, providing 6% of HTTP/3 websites, to mirror client ECN codepoints. Yet, mirroring ECN is only half of what is required, as QUIC validates mirrored ECN codepoints to detect network impairments: we observe that less than 2% of QUIC hosts, providing less than 0.3% of HTTP/3 websites, pass this validation. We identify possible root causes in content providers not supporting ECN via QUIC and in network impairments hindering ECN. We thus also characterize ECN with QUIC from distributed vantage points to traverse other paths, and discuss our results w.r.t. QUIC and ECN innovations beyond QUIC. (Comment: Accepted at the ACM Internet Measurement Conference 2023 (IMC'23))
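    The two failure points measured here correspond to the ECN logic that RFC 9000 (Section 13.4) prescribes for QUIC endpoints; a simplified Python sketch of that mirroring-then-validation check follows (field and function names are illustrative).

```python
# Simplified QUIC ECN check after RFC 9000, Section 13.4 (names illustrative):
# the peer must echo ECN counts at all ("mirroring"), and the echoed counts
# must be consistent with the ECT(0) marks the client actually sent.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EcnCounts:  # cumulative counts carried in an ACK frame with ECN feedback
    ect0: int
    ect1: int
    ce: int

def validate_ecn(sent_ect0: int, newly_acked: int,
                 counts: Optional[EcnCounts]) -> bool:
    """True iff the peer's reported ECN counts pass basic validation."""
    if counts is None:      # no ECN feedback at all: mirroring failed
        return False
    if counts.ect1 > 0:     # this client never sent ECT(1) marks
        return False
    # Every newly acked ECT(0) packet must appear as either ECT(0) or CE.
    return counts.ect0 + counts.ce >= min(newly_acked, sent_ect0)

print(validate_ecn(sent_ect0=10, newly_acked=10, counts=EcnCounts(9, 0, 1)))  # True
```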

    Implementation and Evaluation of Activity-Based Congestion Management Using P4 (P4-ABC)

    Activity-Based Congestion management (ABC) is a novel domain-based QoS mechanism that provides more fairness among customers on bottleneck links. It avoids per-flow or per-customer state in the core network and is suitable for application in future 5G networks. However, ABC cannot be configured on standard devices. P4 is a novel programmable data plane specification that allows new headers and forwarding behavior to be defined. In this work, we implement an ABC prototype using P4 and point out challenges experienced during implementation. Experimental validation of ABC using the P4-based prototype confirms the desired fairness results.
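    As a rough illustration of the ABC idea described above (thresholds and names are illustrative assumptions, not the paper's P4 code): domain edges stamp packets with an activity value, and a congested core link drops on that value without keeping per-flow or per-customer state.

```python
# Rough sketch of the ABC mechanism (illustrative values, not the P4 program):
# the domain edge stamps each packet with its customer's activity, and the
# congested core compares that stamp against a threshold, statelessly.
def activity(customer_rate_bps, reference_rate_bps):
    """Activity stamped at the edge; >1 means sending above the reference."""
    return customer_rate_bps / reference_rate_bps

def core_drop(packet_activity, drop_threshold):
    """Stateless core decision; an AQM would move the threshold with queue depth."""
    return packet_activity > drop_threshold

# A customer at 4x the reference loses packets once the threshold tightens to 2:
print(core_drop(activity(40e6, 10e6), drop_threshold=2.0))  # True
```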

    Transport Layer solution for bulk data transfers over Heterogeneous Long Fat Networks in Next Generation Networks

    This compendium thesis focuses its contributions on the learning and innovation of Next Generation Networks (NGNs). It proposes contributions in different areas (Smart Cities, Smart Grids, Smart Campus, Smart Learning, Media, eHealth, Industry 4.0, among others) through the application and combination of different disciplines (Internet of Things, Building Information Modeling, Cloud Storage, Cybersecurity, Big Data, Future Internet, Digital Transformation). Specifically, sustainable comfort monitoring in the Smart Campus is detailed, which can be considered my most representative contribution within the conceptualization of Next Generation Networks. Within this innovative monitoring concept, different disciplines are integrated in order to offer information on people's comfort levels. This research demonstrates the long journey that remains in the digital transformation of traditional sectors and NGNs. During this long learning process about NGNs across the different investigations, it was possible to observe a problem that cut across the different application fields of the NGNs and that, depending on the service and its requirements, could have a critical impact on any of these sectors. This problem consists of low performance during the exchange of large volumes of data over networks with high bandwidth capacity whose endpoints are geographically distant, also known as elephant networks or Long Fat Networks (LFNs). In particular, this critically affects the Cloud Data Sharing use case, the massive exchange of data between Cloud regions. That is why this use case and the different alternatives at the transport protocol level were studied: the performance and operation problems suffered by layer 4 protocols are analyzed, showing why these traditional protocols are not capable of achieving optimal performance. Given this situation, it is hypothesized that introducing mechanisms that analyze network metrics and efficiently exploit the network's capacity improves the performance of Transport Layer protocols over heterogeneous Long Fat Networks during bulk data transfers. First, the Adaptive and Aggressive Transport Protocol (AATP) is designed: an adaptive and efficient transport protocol that aims to maximize performance over this type of elephant network. The AATP protocol is implemented and tested in a network simulator and a testbed under different situations and conditions for its validation. Once AATP was designed, implemented, and tested successfully, the protocol itself was improved, as Enhanced-AATP, to perform better over heterogeneous elephant networks. For this purpose, a mechanism based on the Jitter Ratio was designed to differentiate among such networks. In addition, to upgrade the behavior of the protocol, its fairness system was improved for the fair distribution of resources among other Enhanced-AATP flows. This evolution is implemented in the network simulator and a set of tests is carried out. At the end of this thesis, it is concluded that Next Generation Networks have a long way to go and many things to improve due to the digital transformation of society and the appearance of brand-new disruptive technology. Furthermore, it is confirmed that the introduction of specific mechanisms in the conception and operation of transport protocols improves their performance over heterogeneous Long Fat Networks.
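    A sketch of the Jitter Ratio idea in Python may help; the abstract does not give the exact formula, so the definition below (delay variation relative to the base path delay, used to temper the sending rate) and its thresholds are illustrative assumptions.

```python
# Illustrative Jitter Ratio sketch (the thesis's exact formula is not in the
# abstract; this definition and the thresholds are assumptions).
def jitter_ratio(delay_samples_ms):
    """Mean delay variation relative to the path's base (minimum) delay."""
    base = min(delay_samples_ms)
    jitter = sum(abs(d - base) for d in delay_samples_ms) / len(delay_samples_ms)
    return jitter / base

def next_rate(current_bps, ratio, cap_bps):
    """Probe harder when the path looks stable, back off when it does not."""
    step = 1.25 if ratio < 0.1 else 0.85  # illustrative thresholds
    return min(current_bps * step, cap_bps)

# Stable path (low delay variation) -> keep ramping toward link capacity:
print(next_rate(100e6, jitter_ratio([50.0, 51.0, 50.5, 52.0]), cap_bps=1e9))
```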

    Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem

    Transport Layer Security (TLS) is one of the most widely deployed cryptographic protocols on the Internet, providing confidentiality, integrity, and a certain degree of authenticity for the communications between clients and servers. Following Snowden's revelations on US surveillance programs, the adoption of TLS has steadily increased. However, encrypted traffic prevents legitimate inspection, so security solutions such as personal antiviruses and enterprise firewalls may intercept encrypted connections in search of malicious or unauthorized content. The end-to-end property of TLS is thus broken by these TLS proxies (a.k.a. middleboxes) for arguably laudable reasons; yet they may pose a security risk. While TLS clients and servers have been analyzed to some extent, such proxies had remained unexplored until recently. We propose a framework for analyzing client-end TLS proxies, and apply it to 14 consumer antivirus and parental control applications as they break end-to-end TLS connections. Overall, the security of TLS connections was systematically worsened compared to the guarantees provided by modern browsers. Next, we explore the non-public HTTPS ecosystem, composed of locally-trusted proxy-issued certificates, from the user's perspective and from several countries in residential and enterprise settings. We focus our analysis on the long tail of interception events. We characterize the customers of network appliances, ranging from small/medium businesses and institutes to hospitals, hotels, resorts, insurance companies, and government agencies. We also discover regional cases of traffic-interception malware/adware that mostly rely on the same Software Development Kit (i.e., NetFilter). Our scanning and analysis techniques allow us to identify more middleboxes and intercepting apps than previously found from privileged server vantage points looking at billions of connections. We further perform a longitudinal study over six years of the evolution of a prominent traffic-intercepting adware found in our dataset: Wajam. We expose the TLS interception techniques it has used and the weaknesses it has introduced on hundreds of millions of user devices. This study also (re)opens the neglected problem of privacy-invasive adware by showing how adware sometimes evolves to be stronger than even advanced malware, posing significant detection and reverse-engineering challenges. Overall, whether beneficial or not, TLS interception often has detrimental impacts on security without the end user being alerted.
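    One simple signal behind such analyses can be sketched as follows: if the leaf certificate a client sees differs from the one the origin serves to an uninterposed vantage point, something re-signed the connection. This is an illustrative Python sketch, not the thesis's framework; certificate rotation and CDNs make naive fingerprint comparison noisy in practice.

```python
# Illustrative detection sketch (not the thesis's framework): compare the leaf
# certificate seen locally against a reference fingerprint collected from an
# uninterposed vantage point. Rotation and CDNs make this noisy in practice.
import hashlib
import ssl

def leaf_fingerprint(host, port=443):
    """SHA-256 fingerprint of the leaf certificate presented to this client."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def likely_intercepted(host, reference_fp):
    """A mismatch suggests a proxy re-signed the connection with a local root."""
    return leaf_fingerprint(host) != reference_fp

# Usage (reference_fp would come from a trusted, uninterposed measurement):
# print(likely_intercepted("example.org", reference_fp="ab12..."))
```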

    Attacking and securing Network Time Protocol

    Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular, it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals: 1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time. 2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol. 3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time. We take the following approach to achieve our goals incrementally. 1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing the code of its reference implementation, to identify vulnerabilities in the protocol design, ambiguities in the specification, and flaws in the reference implementation. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients, as well as cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements and suggest simple countermeasures that can improve the security of NTP and DNS(SEC). 2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers. 3. Finally, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert the security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times. Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems. Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
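    The attack surface starts at NTP's on-wire calculation from RFC 5905, which estimates the client's clock offset from four timestamps; anyone who can tamper with these packets can shift the result. A small Python rendering of that standard calculation:

```python
# NTP's on-wire offset/delay estimate (RFC 5905): t1 = client send, t2 = server
# receive, t3 = server send, t4 = client receive. Tampering with any timestamp
# shifts the computed offset, which is what time-shifting attacks exploit.
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)         # round trip minus server hold time
    return offset, delay

# Symmetric 50 ms path with the client's clock 1 s behind the server's:
print(ntp_offset_delay(0.000, 1.025, 1.030, 0.055))  # -> (1.0, 0.05)
```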

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as of individual algorithms.
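    The fixed-point direction mentioned above can be sketched concretely; the following Python models a classical Q-learning update in Q16.16 integer arithmetic of the sort a target without floating-point units could execute (the representation, constants, and table sizes are illustrative assumptions, not the thesis's code).

```python
# Sketch of a fixed-point Q-learning update (Q16.16; representation, constants
# and table sizes are illustrative): integers and shifts only, as on dataplane
# targets without floating-point units.
SHIFT = 16
ONE = 1 << SHIFT            # 1.0 in Q16.16

def to_fp(x):
    return int(x * ONE)

ALPHA = to_fp(0.125)        # learning rate
GAMMA = to_fp(0.9375)       # discount factor

def q_update(q, s, a, r_fp, s_next):
    """Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a]), integer-only."""
    best_next = max(q[s_next])
    target = r_fp + ((GAMMA * best_next) >> SHIFT)
    td_error = target - q[s][a]
    q[s][a] += (ALPHA * td_error) >> SHIFT

q = [[0, 0], [0, 0]]        # 2 states x 2 actions, all zeros
q_update(q, s=0, a=1, r_fp=to_fp(1.0), s_next=1)
print(q[0][1] / ONE)        # ~0.125 after one rewarded update
```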