80 research outputs found
Measuring the State of ECN Readiness in Servers, Clients, and Routers
Proceedings of the Eleventh ACM SIGCOMM/USENIX Internet Measurement Conference (IMC 2011), Berlin, DE, November 2011.
Better exposing congestion can improve traffic management in the wide area, at peering points, among residential broadband connections, and in the data center. TCP's network utilization and efficiency depend on congestion information, while recent research proposes economic and policy models based on congestion. Such motivations have driven widespread support of Explicit Congestion Notification (ECN) in modern operating systems. We reappraise the Internet's ECN readiness, updating and extending previous measurements. Across large and diverse server populations, we find a three-fold increase in ECN support over prior studies. Using new methods, we characterize ECN within mobile infrastructure and at the client side, populations previously unmeasured. Via large-scale path measurements, we find the ECN feedback loop failing in the core of the network 40% of the time, typically at AS boundaries. Finally, we discover new examples of infrastructure violating ECN Internet standards, and discuss remaining impediments to running ECN while suggesting mechanisms to aid adoption.
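The readiness checks above hinge on the two-bit ECN field defined in RFC 3168. A minimal sketch of how a measurement tool might classify that field and detect "bleaching" of the feedback loop (function and constant names are illustrative, not from the paper's tooling):

```python
# Classify the two-bit ECN field (RFC 3168) carried in the low bits of
# the IPv4 TOS / IPv6 traffic-class byte, as a probe would when
# inspecting replies. Names are illustrative.

NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def ecn_codepoint(tos: int) -> str:
    """Return the ECN codepoint name for a TOS/traffic-class byte."""
    return {NOT_ECT: "Not-ECT", ECT1: "ECT(1)",
            ECT0: "ECT(0)", CE: "CE"}[tos & 0b11]

def ecn_bleached(sent_tos: int, received_tos: int) -> bool:
    """True if a router cleared the ECN field in transit:
    we sent an ECT codepoint but the echoed header shows Not-ECT."""
    return (sent_tos & 0b11) != NOT_ECT and (received_tos & 0b11) == NOT_ECT
```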
Is Explicit Congestion Notification usable with UDP?
We present initial measurements to determine if ECN is usable with
UDP traffic in the public Internet. This is interesting because ECN
is part of current IETF proposals for congestion control of UDP-based
interactive multimedia, and due to the increasing use of UDP
as a substrate on which new transport protocols can be deployed.
Using measurements from the authors' homes, their workplace,
and cloud servers in each of the nine EC2 regions worldwide, we
test reachability of 2500 servers from the public NTP server pool,
using ECT(0) and not-ECT marked UDP packets. We show that
an average of 98.97% of the NTP servers that are reachable using
not-ECT marked packets are also reachable using ECT(0) marked
UDP packets, and that ~98% of network hops pass ECT(0) marked
packets without clearing the ECT bits. We compare reachability of
the same hosts using ECN with TCP, finding that 82.0% of those
reachable with TCP can successfully negotiate and use ECN. Our
findings suggest that ECN is broadly usable with UDP traffic, and
that support for use of ECN with TCP has increased.
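The ECT(0)-marked UDP probes described above can be approximated with the standard sockets API: on Linux, the ECN bits are the low two bits of the byte set via IP_TOS. A hedged sketch (the commented-out probe target and payload are hypothetical placeholders, not the paper's setup):

```python
import socket

# Sketch of sending ECT(0)-marked UDP packets, as in the NTP
# reachability tests: set the ECN bits via the IP_TOS socket option.
# ECT(0) is codepoint 0b10 in the low two bits of the TOS byte.
ECT0 = 0x02

def make_ect0_udp_socket() -> socket.socket:
    """Return a UDP socket whose outgoing IPv4 packets carry ECT(0)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
    return s

# Hypothetical usage against an NTP server (123 is the NTP port):
# sock = make_ect0_udp_socket()
# sock.sendto(ntp_request_bytes, ("pool.ntp.org", 123))
```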
ECN with QUIC: Challenges in the Wild
TCP and QUIC can both leverage ECN to avoid congestion loss and its
retransmission overhead. However, both protocols require support of their
remote endpoints, and it took two decades since the initial standardization of
ECN for TCP to reach 80% or more ECN support in the wild. In contrast, the
QUIC standard mandates ECN support, but there are notable ambiguities that make
it unclear if and how ECN can actually be used with QUIC on the Internet.
Hence, in this paper, we analyze ECN support with QUIC in the wild: We conduct
repeated measurements on more than 180M domains to identify HTTP/3 websites and
analyze the underlying QUIC connections w.r.t. ECN support. We find that only
20% of QUIC hosts, providing 6% of HTTP/3 websites, mirror client ECN codepoints.
Yet, mirroring ECN is only half of what is required for ECN with QUIC, as QUIC
validates mirrored ECN codepoints to detect network impairments: We observe
that less than 2% of QUIC hosts, providing less than 0.3% of HTTP/3 websites,
pass this validation. We identify possible root causes in content providers not
supporting ECN via QUIC and network impairments hindering ECN. We thus also
characterize ECN with QUIC from distributed vantage points to traverse other
paths, and discuss our results w.r.t. QUIC and ECN innovations beyond QUIC.
Comment: Accepted at the ACM Internet Measurement Conference 2023 (IMC'23).
MUST, SHOULD, DON'T CARE: TCP Conformance in the Wild
Standards govern the SHOULD and MUST requirements for protocol implementers
to ensure interoperability. In the case of TCP, which carries the bulk of the
Internet's traffic, these requirements are defined in RFCs. While it is known that not all
optional features are implemented and nonconformance exists, one would assume
that TCP implementations at least conform to the minimum set of MUST
requirements. In this paper, we use Internet-wide scans to show how Internet
hosts and paths conform to these basic requirements. We uncover a
non-negligible set of hosts and paths that do not adhere to even basic
requirements. For example, we observe hosts that do not correctly handle
checksums and cases of middlebox interference for TCP options. We identify
hosts that drop packets when the urgent pointer is set or simply crash. Our
publicly available results highlight that conformance to even fundamental
protocol requirements should not be taken for granted but instead checked
regularly.
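One of the MUST requirements probed above is correct checksum handling; the TCP checksum is the RFC 1071 Internet checksum, a one's-complement sum over 16-bit words. A minimal reference implementation:

```python
# RFC 1071 Internet checksum: sum the data as 16-bit big-endian words
# in one's-complement arithmetic (folding carries back in), then
# return the one's complement of the result.

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over a byte string."""
    if len(data) % 2:            # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry
    return ~total & 0xFFFF
```

Verification uses the classic property: summing a segment that includes its own correct checksum yields zero.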
Informing protocol design through crowdsourcing measurements
Mención Internacional en el título de doctor (International Mention in the doctoral degree).
Middleboxes, such as proxies, firewalls and NATs, play an important role in the modern Internet
ecosystem. On one hand, they perform advanced functions, e.g. traffic shaping, security or enhancing application
performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation,
as they limit the deviation from deployed protocols. It is therefore essential, when designing a new protocol,
to first understand its interaction with the elements of the path. The emerging area of crowdsourcing
solutions can help shed light on this issue. Such an approach allows us to reach large and diverse sets of
users and also different types of devices and networks to perform Internet measurements. In this thesis,
we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus
beyond the human element and using large-scale crowdsourced measurement platforms.
We consider specific use cases, namely pervasive encryption in the modern Internet, TCP
Fast Open, and ECN++. These use cases advance the global understanding of whether wide
adoption of encryption is possible in today's Internet and whether encryption is necessary
to guarantee the proper functioning of HTTP/2. We target ECN, and particularly ECN++, given
its succession of deployment problems. We then measured ECN deployment over mobile as well
as fixed networks. In the
process, we discovered some bad news for the base ECN protocol—more than half the mobile carriers we
tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever
ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes
the results of other more in-depth tests to check whether servers that claim to support ECN, actually respond
correctly to explicit congestion feedback, including some surprising congestion behaviour unrelated
to ECN.
This thesis also explores the possible causes that ossify the modern Internet and hinder
innovation. Network Address Translators (NATs) are commonplace in today's Internet.
It is fair to say that most residential and mobile users are connected to the Internet
through one or more NATs. Like any other technology, NAT presents upsides and downsides. Probably the
most acknowledged downside of NAT technology is that it introduces additional difficulties
for applications such as peer-to-peer applications and gaming to function properly. This is partially
due to the nature of the NAT technology but also due to the diversity of behaviors of the different NAT implementations
deployed in the Internet. Understanding the properties of the currently deployed NAT base
provides useful input for application and protocol developers regarding what to expect when deploying
new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based
measurement methodology.
We also perform large scale active measurement campaigns to detect CGNs in fixed broadband networks
using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine from within residential networks the type of upstream network address translation, namely NAT
at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier Grade NAT). We deploy Revelio
in the FCC Measuring Broadband America testbed operated by SamKnows and also in the RIPE Atlas
testbed.
A part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop
a measuring tool, called CGNWatcher that executes a number of active tests to fully characterize CGN
deployments in MNOs. The CGNWatcher tool systematically tests more than 30 behavioural requirements
of NATs defined by the Internet Engineering Task Force (IETF) and also multiple CGN behavioural metrics.
We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the
real CGN deployments of the MNOs serving the MONROE nodes.
We perform a large measurement campaign using the tools described above, recruiting over 6,000 users,
from 65 different countries and over 280 ISPs. We validate our results with the ISPs at the
IP level against the ground truth we collected. To the best of our knowledge, this represents the largest active
measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks
to date.
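A rough sketch of the address-space reasoning behind CGN detection (an illustrative heuristic, not the actual Revelio or CGNWatcher logic): hops beyond the home gateway that fall into RFC 1918 private space, or into the RFC 6598 shared space reserved specifically for carrier-grade NAT, suggest an upstream CGN.

```python
import ipaddress

# Heuristic sketch: classify traceroute hop addresses seen beyond the
# home gateway. 100.64.0.0/10 (RFC 6598) is allocated for CGN use;
# RFC 1918 space past the gateway also hints at upstream translation.

SHARED_SPACE = ipaddress.ip_network("100.64.0.0/10")

def classify_hop(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip in SHARED_SPACE:
        return "shared (CGN space)"
    if ip.is_private:
        return "private (RFC 1918)"
    return "public"

def suggests_cgn(hops_beyond_gateway: list) -> bool:
    """True if any hop past the home gateway is in private/shared space."""
    return any(classify_hop(h) != "public" for h in hops_beyond_gateway)
```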
As part of this thesis, we characterize roaming across Europe. The goal of the experiment was to
understand whether an MNO changes CGN while roaming. For this reason, we run a series of measurements
that enable us to identify the roaming setup, infer the network configuration for the 16 MNOs that we
measure, and quantify the end-user performance for the roaming configurations that we detect. We build a unique
roaming measurement platform deployed in six countries across Europe. Using this platform, we measure
different aspects of international roaming in 3G and 4G networks, including mobile network configuration,
performance characteristics, and content discrimination. We find that operators adopt common approaches
to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical
distance. Considering content accessibility, roaming poses additional constraints that lead to
only minimal deviations when accessing content in the original country. However, geographical restrictions
in the visited country make the picture more complicated and less intuitive.
The results included in this thesis provide useful input for application and protocol designers,
ISPs and researchers that aim to make their applications and protocols work across the modern Internet.
Programa de Doctorado en Ingeniería Telemática por la Universidad Carlos III de Madrid (Doctoral Program in Telematic Engineering, Universidad Carlos III de Madrid).
Presidente: Gonzalo Camarillo González. Secretario: María Carmen Guerrero López. Vocal: Andrés García Saavedr
A middlebox-cooperative TCP for a non-end-to-end Internet
Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One particularly daunting aspect of the challenge is the presence of transparent middleboxes, which are now common in today's Internet. In-path middleboxes that modify packet headers are typically transparent to a TCP, yet can impact end-to-end performance or cause blackholes. We develop TCP HICCUPS to reveal packet header manipulation to both endpoints of a TCP connection. HICCUPS permits endpoints to cooperate with currently opaque middleboxes without prior knowledge of their behavior. For example, with visibility into end-to-end behavior, a TCP can selectively enable or disable performance-enhancing options. This cooperation enables protocol innovation by allowing new IP or TCP functionality (e.g., ECN, SACK, Multipath TCP, tcpcrypt) to be deployed without fear of such functionality being misconstrued, modified, or blocked along a path. HICCUPS is incrementally deployable and introduces no new options. We implement and deploy TCP HICCUPS across thousands of disparate Internet paths, highlighting the breadth and scope of subtle and hard-to-detect middlebox behaviors encountered. We then show how the path diagnostic capabilities provided by HICCUPS can benefit applications and the network.
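The core idea, revealing header manipulation by comparing what was sent with what arrived, can be sketched as a digest over protected header fields. The field selection, hash, and encoding here are illustrative; HICCUPS actually repurposes existing TCP fields rather than adding new data:

```python
import hashlib

# Illustrative sketch of header-integrity checking: the sender conveys
# a short digest over the header fields it wants protected; the
# receiver recomputes it over the fields it actually observed.

def header_digest(src_port: int, dst_port: int, seq: int, tos: int) -> int:
    """16-bit digest over selected TCP/IP header fields."""
    raw = (src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big") +
           seq.to_bytes(4, "big") + tos.to_bytes(1, "big"))
    return int.from_bytes(hashlib.sha256(raw).digest()[:2], "big")

def header_modified(sent_digest: int, received_fields: tuple) -> bool:
    """A digest mismatch reveals an in-path middlebox rewrite."""
    return header_digest(*received_fields) != sent_digest
```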
Migration to a New Internet Protocol in Operator Network
This thesis explains the differences between IPv4 and IPv6. Another important part of the thesis is to review the current readiness of IPv6 for worldwide production use. The status (in terms of readiness, adaptability, compatibility and co-existence) of IPv6 in TeliaSonera is discussed in more detail.
The most important reason for migrating to IPv6 is the address exhaustion of IPv4. This may not be a big problem in developed countries, but in developing countries the growth of the Internet is fast and many more addresses are needed. The need for addresses comes not only from computers but from the many other devices connected to the Internet.
Attempts to slow down the exhaustion of free addresses have been made, but current solutions are not enough. IPv6 will solve the problem by using much longer addresses. It will also add security features and simplify headers to speed up routing.
TeliaSonera has started to roll out IPv6 services. At the beginning, corporate customers will receive IPv6 connectivity, and consumers will follow later. TeliaSonera International Carrier is already serving its customers with IPv6.
It seems that IPv6 is ready: the standards have been ready for years, and support in devices and software is prevalent. To achieve and maintain global connectivity, IPv6 is a must and should not be avoided.
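The exhaustion argument is easy to state in numbers, since IPv4 addresses are 32 bits and IPv6 addresses are 128 bits:

```python
# IPv4's 32-bit space versus IPv6's 128-bit space, in raw counts.

ipv4_total = 2 ** 32          # 4,294,967,296 addresses in all of IPv4
ipv6_total = 2 ** 128         # roughly 3.4e38 addresses in IPv6

# A single standard /64 IPv6 subnet holds 2**64 interface addresses:
# over four billion times the entire IPv4 Internet.
factor = (2 ** 64) // ipv4_total
```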
Internet QoS for DiffServ-Enabled Routers
The Differentiated Services model (DiffServ) is currently a popular research topic as a
low-cost method to bring QoS to today's Internet backbone network. In this paper,
the author introduces the techniques and methodologies used to design and
implement DiffServ-enabled (DS-enabled) routers. The adaptations of DS-enabled
routers are designed to cater to the low Internet connectivity within the Universiti
Teknologi PETRONAS LAN. The author has implemented a basic DiffServ setup
using three Cisco 3725 routers. Based on these DiffServ-enabled routers, the author
set up a small-scale lab network to study DiffServ QoS features: priority dropping
(discrimination among different service classes), QoS guarantees, and measuring QoS
using various formal metrics (delay and throughput). Furthermore, the author presents
problems encountered during the study and the proposed solutions.
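DiffServ classification keys on the six-bit DSCP field in the upper bits of the IP TOS/traffic-class byte. A small sketch of the extraction plus a few standard codepoints (EF per RFC 3246, AF per RFC 2597; the name table is abbreviated, not exhaustive):

```python
# Extract the DSCP from a TOS byte and map a few well-known
# codepoints to their service classes.

DSCP_NAMES = {46: "EF", 0: "BE (default)",
              10: "AF11", 18: "AF21", 26: "AF31", 34: "AF41"}

def dscp_from_tos(tos: int) -> int:
    """DSCP occupies the top six bits of the TOS/traffic-class byte."""
    return tos >> 2

def service_class(tos: int) -> str:
    return DSCP_NAMES.get(dscp_from_tos(tos), "unassigned")
```

For example, a TOS byte of 0xB8 carries DSCP 46, the Expedited Forwarding class commonly used for voice traffic.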
A SURVEY OF CONGESTION CONTROL MECHANISMS IN THE TRANSMISSION CONTROL PROTOCOL ON SOFTWARE-DEFINED NETWORKS
A new networking paradigm, the Software Defined Network (SDN), was developed to break up the vertical integration in network devices, separating the control logic from the network infrastructure and making it possible to change the state and conditions of the network from a centrally programmable controller. Most one-to-many communication in SDN is implemented through multiple unicast connections such as TCP, which is inefficient. This produces a large amount of replicated traffic, which can degrade application performance due to problems such as congestion, redundancy and collisions. Congestion occurs when a network is heavily loaded and performance drops because the transmission volume exceeds the capacity of the routers. One solution for handling congestion is to reduce the size of the TCP receive window. The main goal of this paper is to summarize several TCP congestion control mechanisms that researchers have proposed to handle congestion problems in the network.
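Most of the surveyed TCP mechanisms share the AIMD core: grow the congestion window additively while the path is clear, cut it multiplicatively on congestion. A minimal Reno-style sketch (window measured in MSS units; the constants are illustrative):

```python
# One round-trip of Reno-style congestion window adaptation:
# slow start doubles the window per RTT until ssthresh, congestion
# avoidance adds one MSS per RTT, and congestion halves the window.

def aimd_update(cwnd: float, congested: bool,
                ssthresh: float, mss: float = 1.0) -> float:
    """Return the next congestion window, in MSS units."""
    if congested:
        return max(cwnd / 2, mss)        # multiplicative decrease
    if cwnd < ssthresh:
        return cwnd * 2                  # slow start: double per RTT
    return cwnd + mss                    # congestion avoidance: +1 MSS
```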