Measuring ECN++: good news for ++, bad news for ECN over mobile
After ECN was first added to IP in 2001, it was hit by a succession of deployment problems. Studies in recent years have concluded that path traversal of ECN has become close to universal. In this article, we test whether the performance enhancement called ECN++ will face a deployment struggle similar to that of base ECN. To do so, we assess the feasibility of ECN++ deployment over mobile as well as fixed networks. In the process, we discover bad news for the base ECN protocol: contrary to accepted beliefs, more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. All packets still get through, and congestion control still functions, just without the benefits of ECN. This throws into question whether previous studies used representative vantage points. This article also reports the good news that, wherever ECN gets through, we found no deployment problems for the "++" enhancement to ECN. The article includes the results of other in-depth tests that check whether servers that claim to support ECN actually respond correctly to explicit congestion feedback. Those interested can access the raw measurement data online. The work of Anna Maria Mandalari has been funded by the EU FP7 METRICS (607728) project. The work of Marcelo Bagnulo has been performed in the framework of the H2020-ICT-2014-2 project 5G NORMA and the 5G-City project funded by MINECO. This work was partially supported by the EU H2020 research and innovation program under grant agreement No. 644399 (MONROE) and grant agreement No. 688421 (MAMI)
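The ECN field occupies the two least-significant bits of the IP traffic-class (TOS) byte, with codepoints Not-ECT (00), ECT(1) (01), ECT(0) (10), and CE (11); a carrier that "wipes" the field rewrites those bits to zero. A minimal sketch of decoding the field, with helper names that are illustrative rather than taken from the article:

```python
# Decode the 2-bit ECN field from the IP TOS/traffic-class byte (RFC 3168).
ECN_NAMES = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

def ecn_codepoint(tos: int) -> str:
    """Return the ECN codepoint encoded in the low two bits of the TOS byte."""
    return ECN_NAMES[tos & 0b11]

def wipe_ecn(tos: int) -> int:
    """Model a middlebox that clears the ECN field while keeping the DSCP bits."""
    return tos & ~0b11

assert ecn_codepoint(0b10) == "ECT(0)"  # sender marked the packet ECN-capable
assert ecn_codepoint(0b11) == "CE"      # a router signalled congestion
# A carrier wiping the field turns any codepoint back into Not-ECT:
assert ecn_codepoint(wipe_ecn(0b11)) == "Not-ECT"
```

Wiping preserves the upper six DSCP bits, so packets still forward normally; only the congestion signal is lost, which matches the observed behaviour of connectivity surviving without ECN's benefits.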
TCP/IP stack fingerprinting for patch detection in a distributed Windows environment
Patch management has become an important part of every system administrator's work. A missing patch can essentially be considered a vulnerability, since attackers learn of the vulnerability from the security bulletin and attempt attacks against it. An efficient patch management solution is necessary to counter known vulnerabilities. To that end, an inventory listing of the patches installed on each system, called a patch audit, helps system administrators learn the patch status and install only the necessary patches. An important problem in patch auditing is that a network may contain many systems for which the administrator does not have administrative privileges and hence cannot determine the patch status. Current patch management tools do not address this problem. This thesis investigates the possibility of finding patterns for missing patches by using TCP/IP stack fingerprinting. Malformed TCP packets are sent to the target system, and the TCP and IP headers of its responses are analyzed to find patterns specific to a missing patch. Windows-based systems are the primary target since they typically constitute a majority of the systems in a network and are also considered the most vulnerable. This investigation limits itself to classifying DCOM RPC buffer overflow vulnerabilities on Windows-based systems
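Fingerprinting of this kind works by inspecting fields of the response headers, such as the TCP window size, flags, and header length, which vary across stack versions and patch levels. A minimal sketch of extracting such fields from a raw TCP base header; the field selection is illustrative and is not the thesis's actual classifier:

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    """Extract fingerprint-relevant fields from a 20-byte TCP base header."""
    src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", data[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "data_offset": (off_flags >> 12) * 4,  # header length in bytes
        "flags": off_flags & 0x01FF,           # 9 flag bits (incl. NS)
        "window": window,
    }

# A hand-built SYN/ACK response header: ports 80 -> 54321, window 64240.
hdr = struct.pack("!HHIIHHHH", 80, 54321, 1000, 2000,
                  (5 << 12) | 0x012, 64240, 0, 0)
fields = parse_tcp_header(hdr)
assert fields["flags"] == 0x012       # SYN + ACK
assert fields["window"] == 64240      # window size is a classic fingerprint
assert fields["data_offset"] == 20    # no TCP options present
```

A classifier would compare tuples like (window, flags, data_offset) from responses to malformed probes against known patterns for patched and unpatched stacks.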
On the Recent Debate over the Regulation of Information Services in the United States: Focusing on the FCC's Decision on the Problems Raised by Cable Operator Comcast Corporation's Blocking of End Users' P2P Traffic
On August 20, 2008, the FCC made a landmark decision ordering Comcast Corporation to end its prior discriminatory network management practices, and affirmed its authority to protect the Internet under Title I of the Communications Act of 1934. In this order, the FCC states that it has discretion to choose between adjudication and rulemaking, and can exercise its ancillary jurisdiction over a broadband Internet access service provider's unreasonable network management practices, even though such a provider is not a common carrier under Title II of the Act. However, the issue of what constitutes reasonable network management remains unresolved. Government authorities should establish the additional framework necessary to preserve the vibrant and open architecture of the Internet and to foster its progress in the future
Policy based network management of legacy network elements in next generation networks for voice services
Magister Scientiae - MSc
Telecommunication companies, service providers and large companies are now adopting converged multi-service Next Generation Networks (NGNs). Network management is shifting from managing Network Elements (NEs) to managing services. This paradigm shift coincides with the rapid development of Quality of Service (QoS) protocols for IP networks. NEs and services are managed with Policy Based Network Management (PBNM), which is most concerned with managing services that require QoS using the Common Open Policy Service (COPS) protocol. These services include Voice over IP (VoIP), video conferencing and video streaming. It follows that legacy NEs without support for QoS need to be replaced and/or excluded from the network. However, since most of these services run over IP, and legacy NEs easily support IP, it may be unnecessary to discard legacy NEs if they can be made to fit within a PBNM approach. Our approach enables an existing PBNM system to include legacy NEs in its management paradigm. The Proxy Policy Enforcement Point (P-PEP) and Queuing Policy Enforcement Point (Q-PEP) can enforce some degree of traffic shaping on a gateway to the legacy portion of the network. The P-PEP utilises firewall techniques using the common legacy and contemporary NE management protocol, the Simple Network Management Protocol (SNMP), while the Q-PEP uses queuing techniques in the form of Class Based Queuing (CBQ) and Random Early Discard (RED) for traffic control.
South Africa
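RED, one of the queuing techniques named above, drops packets probabilistically as a weighted moving average of the queue length climbs between a minimum and a maximum threshold. A minimal sketch of that drop-probability calculation; the parameter values are illustrative defaults, not taken from the thesis:

```python
def red_drop_probability(avg_queue: float,
                         min_th: float = 5.0,
                         max_th: float = 15.0,
                         max_p: float = 0.1) -> float:
    """Classic RED: no drops below min_th, a linear ramp up to max_p at
    max_th, and forced drop (probability 1.0) above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def update_avg(avg: float, instantaneous: int, weight: float = 0.002) -> float:
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * instantaneous

assert red_drop_probability(3.0) == 0.0            # below the min threshold
assert abs(red_drop_probability(10.0) - 0.05) < 1e-12  # halfway up the ramp
assert red_drop_probability(20.0) == 1.0           # above the max threshold
```

Because the average, not the instantaneous, queue length drives the decision, RED absorbs short bursts while signalling persistent congestion early, which is what makes it useful for traffic control at a gateway to legacy equipment.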
Informing protocol design through crowdsourcing measurements
International Mention in the doctoral degree
Middleboxes, such as proxies, firewalls and NATs, play an important role in the modern Internet ecosystem. On one hand, they perform advanced functions, e.g. traffic shaping, security or enhancing application performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation, as they limit deviation from deployed protocols. It is therefore essential, when designing a new protocol, to first understand its interaction with the elements of the path. The emerging area of crowdsourcing solutions can help to shed light on this issue. Such an approach allows us to reach large and diverse sets of users, as well as different types of devices and networks, to perform Internet measurements. In this thesis, we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus beyond the human element to large-scale crowdsourced measurement platforms.
We consider specific use cases, namely pervasive encryption in the modern Internet, TCP Fast Open and ECN++. These use cases advance the global understanding of whether wide adoption of encryption is possible in today's Internet and whether the adoption of encryption is necessary to guarantee the proper functioning of HTTP/2. We target ECN, and particularly ECN++, given ECN's succession of deployment problems. We measure ECN deployment over mobile as well as fixed networks. In the process, we discover some bad news for the base ECN protocol: more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes the results of other, more in-depth tests that check whether servers that claim to support ECN actually respond correctly to explicit congestion feedback, including some surprising congestion behaviour unrelated to ECN.
This thesis also explores the possible causes that ossify the modern Internet and hinder innovation. Network Address Translators (NATs) are commonplace in the Internet nowadays. It is fair to say that most residential and mobile users are connected to the Internet through one or more NATs. Like any other technology, NAT has upsides and downsides. Probably the most acknowledged downside of NAT is that it introduces additional difficulties for some applications, such as peer-to-peer applications and gaming, to function properly. This is partially due to the nature of the NAT technology itself, but also due to the diversity of behaviours across the different NAT implementations deployed in the Internet. Understanding the properties of the currently deployed NAT base provides useful input for application and protocol developers regarding what to expect when deploying new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based measurement methodology.
We also perform large-scale active measurement campaigns to detect Carrier Grade NATs (CGNs) in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine, from within residential networks, the type of upstream network address translation, namely NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier Grade NAT). We deploy Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and also in the RIPE Atlas testbed.
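One signal a tool in this space can use is the address visible on the WAN side of the home gateway: an address in private (RFC 1918) or shared (RFC 6598, 100.64.0.0/10) space implies a further translation layer upstream. A minimal sketch of that check, which is only one hint and not Revelio's actual methodology:

```python
import ipaddress

# RFC 6598 shared address space, reserved specifically for CGN deployments.
SHARED = ipaddress.ip_network("100.64.0.0/10")

def suggests_cgn(gateway_wan_ip: str) -> bool:
    """True if the gateway's WAN-side address is not globally routable,
    hinting that the ISP performs another translation (Carrier Grade NAT)."""
    addr = ipaddress.ip_address(gateway_wan_ip)
    return addr.is_private or addr in SHARED

assert suggests_cgn("100.72.1.5")   # RFC 6598 shared space: CGN likely
assert suggests_cgn("10.0.0.1")     # RFC 1918 private space upstream
assert not suggests_cgn("8.8.8.8")  # globally routable: no CGN hint here
```

The explicit check against `SHARED` matters because Python's `is_private` does not treat the RFC 6598 block the same way as RFC 1918 space. A gateway with a public WAN address can still sit behind a CGN, which is why active tests like Revelio's are needed rather than address inspection alone.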
Part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop a measurement tool, called CGNWatcher, that executes a number of active tests to fully characterize CGN deployments in MNOs. CGNWatcher systematically tests more than 30 behavioural requirements for NATs defined by the Internet Engineering Task Force (IETF), as well as multiple CGN behavioural metrics. We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the real CGN deployments of the MNOs serving the MONROE nodes.
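Among the IETF-defined behavioural requirements (RFC 4787) is the NAT's mapping behaviour: whether one internal socket keeps the same external port regardless of destination. A toy classifier over observed (destination, external port) pairs, illustrative of the idea rather than CGNWatcher's implementation:

```python
def classify_mapping(observations: list[tuple[str, int]]) -> str:
    """Classify NAT mapping behaviour (RFC 4787 terminology) from the
    external ports observed when one internal socket contacts several
    destinations: the same external port everywhere means the mapping is
    endpoint-independent; otherwise it is endpoint-dependent."""
    external_ports = {port for _dest, port in observations}
    if len(external_ports) == 1:
        return "endpoint-independent"
    return "endpoint-dependent"

# One internal socket as seen from three STUN-like reflectors:
assert classify_mapping(
    [("a", 40001), ("b", 40001), ("c", 40001)]) == "endpoint-independent"
assert classify_mapping(
    [("a", 40001), ("b", 40002), ("c", 40003)]) == "endpoint-dependent"
```

Endpoint-independent mapping is the IETF-recommended behaviour because it lets peer-to-peer applications reuse a discovered external port; a CGN that maps per destination breaks that assumption, which is one reason such requirements are worth testing at scale.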
We perform a large measurement campaign using the tools described above, recruiting over 6,000 users from 65 different countries and over 280 ISPs. We validate our results with the ISPs at the IP level and compare them against the ground truth we collected. To the best of our knowledge, this represents the largest active measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks to date.
As part of the thesis, we characterize roaming across Europe. The goal of the experiment was to understand whether an MNO changes CGN while roaming. For this reason, we run a series of measurements that enable us to identify the roaming setup, infer the network configuration for the 16 MNOs that we measure, and quantify end-user performance for the roaming configurations that we detect. We build a unique roaming measurement platform deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content in the original country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive.
The results included in this thesis provide useful input for application and protocol designers, ISPs and researchers that aim to make their applications and protocols work across the modern Internet.
Doctoral Program in Telematic Engineering, Universidad Carlos III de Madrid. Chair: Gonzalo Camarillo González. Secretary: María Carmen Guerrero López. Member: Andrés García Saavedr
TCP Connection Management Mechanisms for Improving Internet Server Performance
This thesis investigates TCP connection management mechanisms in order to understand the behaviour and improve the performance of Internet servers during overload conditions such as flash crowds. We study several alternatives for implementing TCP connection establishment, reviewing approaches taken by existing TCP stacks as well as proposing new mechanisms to improve server throughput and reduce client response times under overload. We implement some of these connection establishment mechanisms in the Linux TCP stack and evaluate their performance in a variety of environments. We also evaluate the cost of supporting half-closed connections at the server and assess the impact of an abortive release of connections by clients on the throughput of an overloaded server. Our evaluation demonstrates that connection establishment mechanisms that eliminate the TCP-level retransmission of connection attempts by clients increase server throughput by up to 40% and reduce client response times by two orders of magnitude. Connection termination mechanisms that preclude support for half-closed connections additionally improve server throughput by up to 18%
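A half-closed connection, whose server-side cost the thesis measures, is one where the client has sent its FIN (by shutting down the write side) but can still receive data. A minimal loopback sketch of the pattern; the payloads are illustrative:

```python
import socket
import threading

def server(listener: socket.socket) -> None:
    """Accept one client, read until its FIN arrives, then answer on the
    still-open server-to-client half of the connection."""
    conn, _ = listener.accept()
    request = b""
    while chunk := conn.recv(1024):   # recv() returns b"" after the FIN
        request += chunk
    conn.sendall(b"reply to: " + request)
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"request")
client.shutdown(socket.SHUT_WR)       # half-close: FIN sent, read side open
response = b""
while chunk := client.recv(1024):     # server's reply still flows back
    response += chunk
assert response == b"reply to: request"
client.close()
```

Supporting this pattern obliges the server to keep state for connections whose clients have already finished sending, which is the per-connection cost the thesis quantifies when it finds that precluding half-closed connections improves overloaded-server throughput.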