Transport congestion events detection (TCED): towards decorrelating congestion detection from TCP
TCP (Transmission Control Protocol) uses a loss-based algorithm to estimate whether the network is congested or not.
The main difficulty for this algorithm is to distinguish spurious from real network congestion events. Other studies have proposed to enhance the reliability of this congestion estimation by modifying TCP's internal algorithm.
In this paper, we propose an original congestion event detection algorithm implemented independently of the TCP source code. We propose a modular architecture for implementing congestion event detection that copes with the increasing complexity of the TCP code, and we use it to understand why some spurious congestion events might not be detected in some complex cases. We show that our proposal increases the reliability of the TCP NewReno congestion detection algorithm, which may help in the design of detection criteria independent of the TCP code. We find that solutions based only on RTT (Round-Trip Time) estimation are not accurate enough to cover all existing cases.
Furthermore, we evaluate our algorithm with and without network reordering, where other inaccuracies, not previously identified, occur.
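To make the RTT-only criterion concrete, the sketch below flags a retransmission as a real congestion event when the RTT measured at the loss exceeds a multiple of the minimum RTT seen so far. This is only an illustration of the kind of RTT-threshold heuristic the paper evaluates (and finds insufficient), not the TCED algorithm itself; the function name, the `factor` parameter, and the sample values are all hypothetical.

```python
def rtt_based_congestion_events(rtt_samples, loss_indices, factor=1.5):
    """Classify each loss as 'congestion' if the RTT at that point
    exceeds `factor` times the minimum RTT observed so far,
    else as 'spurious'. Illustrative heuristic only."""
    events = []
    min_rtt = float("inf")
    for i, rtt in enumerate(rtt_samples):
        min_rtt = min(min_rtt, rtt)
        if i in loss_indices:
            events.append((i, "congestion" if rtt > factor * min_rtt
                              else "spurious"))
    return events

rtts = [50, 52, 51, 120, 49, 50]   # ms; the queue builds up before sample 3
print(rtt_based_congestion_events(rtts, {3, 5}))
# [(3, 'congestion'), (5, 'spurious')]
```

A heuristic like this fails exactly in the complex cases the paper highlights (e.g. reordering), which is why RTT estimation alone does not cover all scenarios.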
A comparative study of aggregate TCP retransmission rates
Segment retransmissions are an essential tool in assuring reliable end-to-end
communication in the Internet. Their crucial role in TCP design and operation
has been studied extensively, in particular with respect to identifying
non-conformant, buggy, or underperforming behaviour. However, TCP segment
retransmissions are often overlooked when examining and analyzing large traffic
traces. In fact, some have come to believe that retransmissions are a rare
oddity, characteristically associated with faulty network paths, which,
typically, tend to disappear as networking technology advances and link
capacities grow. We find that this may be far from the reality experienced by
TCP flows. We quantify aggregate TCP segment retransmission rates using
publicly available network traces from six passive monitoring points attached
to the egress gateways at large sites. In virtually half of the traces examined
we observed aggregate TCP retransmission rates exceeding 1%, and of these,
about half again had retransmission rates exceeding 2%. Even for sites with low
utilization and high capacity gateway links, retransmission rates of 1%, and
sometimes higher, were not uncommon. Our results complement, extend and bring
up to date partial and incomplete results in previous work, and show that TCP
retransmissions continue to constitute a non-negligible percentage of the
overall traffic, despite significant advances across the board in
telecommunications technologies and network protocols. The results presented
are pertinent to end-to-end protocol designers and evaluators as they provide a
range of "realistic" scenarios under which, and a "marker" against which,
simulation studies can be configured and calibrated, and future protocols
evaluated.
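The aggregate rate the study measures can be sketched as the fraction of data segments whose sequence number has already been seen for their flow. This is a deliberate simplification for illustration (real trace analysis must handle sequence wraparound, partial overlaps, and reordering); the function name and the toy trace are hypothetical.

```python
from collections import defaultdict

def aggregate_retransmission_rate(segments):
    """Estimate the aggregate TCP retransmission rate of a trace.
    `segments` is an iterable of (flow_id, seq, payload_len) tuples,
    e.g. flow_id = (src_ip, src_port, dst_ip, dst_port)."""
    seen = defaultdict(set)      # flow_id -> sequence numbers already seen
    total = retrans = 0
    for flow, seq, length in segments:
        if length == 0:          # skip pure ACKs
            continue
        total += 1
        if seq in seen[flow]:    # repeated sequence range -> retransmission
            retrans += 1
        else:
            seen[flow].add(seq)
    return retrans / total if total else 0.0

trace = [("f1", 1000, 100), ("f1", 1100, 100), ("f1", 1000, 100),  # 1 retransmit
         ("f2", 5000, 200), ("f2", 5200, 200)]
print(aggregate_retransmission_rate(trace))  # 1 of 5 segments -> 0.2
```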
Detection of TCP congestion events
Recent years have seen growing interest in the measurement of TCP flows. Several methods have been proposed to estimate the packet loss rate accurately and quickly. However, to the best of our knowledge, the estimation and identification of TCP congestion events have not yet been addressed. Following the standardization of TFRC (TCP Friendly Rate Control), there is growing interest in transport protocols that use rate-based congestion control, which relies on the same measurement basis as TCP. In this context, the accurate determination of congestion events (CEs) is a key element: CEs provide essential information for computing a sending rate equivalent to that of TCP under the same conditions. The purpose of this paper is to better identify TCP's CEs in order to provide an accurate reference for these new protocols. We verify in this study that TCP does not identify network CEs effectively, and we propose a method capable of determining them more accurately. This detection is performed passively from the real-time capture of a TCP flow's packets.
SPAD: a distributed middleware architecture for QoS enhanced alternate path discovery
In the next-generation Internet, the network will evolve from a plain communication medium into one that provides endless services to the users. These services will be composed of multiple cooperative distributed application elements. We name these services overlay applications. The cooperative application elements within an overlay application will build a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS experienced on the communication paths of the corresponding overlay association. In this paper, we present SPAD (Super-Peer Alternate path Discovery), a distributed middleware architecture that aims at providing enhanced QoS between end-points within an overlay association. To achieve this goal, SPAD provides a complete scheme to discover and utilize composite alternate end-to-end paths with better QoS than the path given by the default IP routing mechanisms.
Flow level detection and filtering of low-rate DDoS
The recently proposed TCP-targeted Low-rate Distributed Denial-of-Service (LDDoS) attacks send fewer packets to attack legitimate flows by exploiting a vulnerability in TCP's congestion control mechanism. They are difficult to detect while causing severe damage to TCP-based applications. Existing approaches can only detect the presence of an LDDoS attack, but fail to identify LDDoS flows. In this paper, we propose a novel metric – Congestion Participation Rate (CPR) – and a CPR-based approach to detect and filter LDDoS attacks by their intention to congest the network. The major innovation of the CPR-based approach is its ability to identify LDDoS flows. A flow with a CPR higher than a predefined threshold is classified as an LDDoS flow, and consequently all of its packets will be dropped. We analyze the effectiveness of CPR theoretically by quantifying the average CPR difference between normal TCP flows and LDDoS flows and showing that CPR can differentiate them. We conduct ns-2 simulations, test-bed experiments, and Internet traffic trace analysis to validate our analytical results and evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed CPR-based approach is substantially more effective compared to an existing Discrete Fourier Transform (DFT)-based approach – one of the most efficient approaches in detecting LDDoS attacks. We also provide experimental guidance to choose the CPR threshold in practice.
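The threshold rule above can be sketched as follows. This is an illustrative reading of the CPR metric, not the paper's exact formulation: it takes CPR as the fraction of a flow's packets that fall inside observed congestion intervals, and the function names, threshold value, and sample timestamps are all assumptions.

```python
def congestion_participation_rate(pkt_times, congestion_intervals):
    """Fraction of a flow's packets that fall inside congested intervals.
    `pkt_times` are packet timestamps; `congestion_intervals` is a list
    of (start, end) periods when the monitored link was congested."""
    if not pkt_times:
        return 0.0
    in_congestion = sum(
        1 for t in pkt_times
        if any(s <= t <= e for s, e in congestion_intervals))
    return in_congestion / len(pkt_times)

def classify_flow(pkt_times, congestion_intervals, threshold=0.5):
    # A flow whose CPR exceeds the threshold is treated as LDDoS,
    # and the filter would drop its packets.
    cpr = congestion_participation_rate(pkt_times, congestion_intervals)
    return "LDDoS" if cpr > threshold else "normal"

# A pulsing flow that transmits mostly during congestion bursts:
attack = [0.1, 0.2, 1.1, 1.2, 2.1]
normal = [0.5, 0.7, 1.6, 2.1, 2.6]
congested = [(0.0, 0.3), (1.0, 1.3), (2.0, 2.3)]
print(classify_flow(attack, congested))  # LDDoS (CPR = 1.0)
print(classify_flow(normal, congested))  # normal (CPR = 0.2)
```

The pulsing attack flow concentrates its packets in the congestion bursts it creates, so its CPR is high, while a normal TCP flow backs off and participates in congestion far less.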
Adapting End Host Congestion Control for Mobility
Network layer mobility allows transport protocols to maintain connection state, despite changes in a node's physical location and point of network connectivity. However, some congestion-controlled transport protocols are not designed to deal with these rapid and potentially significant path changes. In this paper we demonstrate several distinct problems that mobility-induced path changes can create for TCP performance. Our premise is that mobility events indicate path changes that require re-initialization of congestion control state at both connection end points. We present the application of this idea to TCP in the form of a simple solution (the Lightweight Mobility Detection and Response algorithm, which has been proposed in the IETF), and examine its effectiveness. In general, we find that the deficiencies presented are both relatively easily and painlessly fixed using this solution. We also find that this solution has the counter-intuitive property of being both more friendly to competing traffic, and simultaneously more aggressive in utilizing newly available capacity, than unmodified TCP.
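The paper's premise can be illustrated with a toy state machine: on a mobility event, the endpoint discards the congestion state learned on the old path and restarts from slow-start defaults. The class, attribute names, and initial values below are illustrative sketches, not the IETF algorithm itself.

```python
class TcpCongestionState:
    """Toy per-connection congestion-control state, used to illustrate
    re-initialization on a mobility-induced path change."""
    INIT_CWND = 10               # segments; illustrative initial window

    def __init__(self):
        self.cwnd = self.INIT_CWND
        self.ssthresh = float("inf")
        self.srtt = None         # smoothed RTT estimate, unknown at start

    def on_mobility_event(self):
        # The old cwnd/ssthresh/RTT describe a path that no longer
        # exists, so restart congestion control from its initial state.
        self.__init__()

conn = TcpCongestionState()
conn.cwnd, conn.ssthresh, conn.srtt = 80, 64, 0.120  # learned on old path
conn.on_mobility_event()
print(conn.cwnd, conn.srtt)  # back to initial slow-start state
```

Resetting rather than keeping stale state is what makes the behaviour both friendlier (it stops overrunning a slower new path) and more aggressive (slow start probes a faster new path quickly).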
Distributed discovery and management of alternate paths with enhanced quality of service in the Internet
The convergence of recent technology advances opens the way to new ubiquitous environments, where network-enabled devices collectively form invisible pervasive computing and networking environments around the users. These users increasingly require extensive applications and capabilities from these devices. Recent approaches propose that cooperating service providers, at the edge of the network, offer these required capabilities (i.e. services), instead of having them directly provided by the devices. Thus, the network evolves from a plain communication medium into an endless source of services. Such a service, namely an overlay application, is composed of multiple distributed application elements, which cooperate via a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS on the communication paths of the corresponding overlay association. This thesis asserts and shows that it is possible to provide QoS to an overlay application by using alternate Internet paths resulting from the composition of independent consecutive paths. Moreover, it also demonstrates that it is possible to discover, select and compose these independent paths in a distributed manner within a community comprising a large number of autonomous cooperating peers, such as the aforementioned service providers.
Thus, the main contributions of this thesis are i) a comprehensive description and QoS characteristic analysis of these composite alternate paths, and ii) an original architecture, termed SPAD (Super-Peer based Alternate path Discovery), which allows the discovery and selection of these alternate paths in a distributed manner. SPAD is a fully distributed system with no single point of failure, which can be easily and incrementally deployed on the current Internet. It empowers the end-users at the edge of the network, allowing them to directly discover and utilize alternate paths.
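The core idea of composing independent consecutive paths can be sketched as a one-hop relay selection: a composed path src→relay→dst is preferred when its total delay beats the direct path. This is a minimal sketch in the spirit of the thesis; the real system discovers candidate relays through super-peers rather than consulting a global delay matrix, and the function and variable names are hypothetical.

```python
def best_one_hop_alternate(delay, src, dst):
    """Pick the relay minimizing the composed delay src->relay->dst,
    if it improves on the direct src->dst delay.
    `delay` maps (a, b) -> measured delay between peers a and b."""
    direct = delay[(src, dst)]
    best_relay, best_cost = None, direct
    for (a, b), d in delay.items():
        if a == src and b != dst and (b, dst) in delay:
            cost = d + delay[(b, dst)]       # compose two consecutive paths
            if cost < best_cost:
                best_relay, best_cost = b, cost
    return best_relay, best_cost             # relay is None if direct wins

delays = {("s", "t"): 80, ("s", "r1"): 20, ("r1", "t"): 30,
          ("s", "r2"): 50, ("r2", "t"): 40}
print(best_one_hop_alternate(delays, "s", "t"))  # ('r1', 50): beats direct 80
```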
Algorithms for Large-Scale Internet Measurements
As the Internet has grown in size and importance to society, it has become
increasingly difficult to generate global metrics of interest that can be used to verify
proposed algorithms or monitor performance. This dissertation tackles the problem
by proposing several novel algorithms designed to perform Internet-wide measurements
using existing or inexpensive resources.
We initially address distance estimation in the Internet, which is used by many
distributed applications. We propose a new end-to-end measurement framework
called Turbo King (T-King) that uses the existing DNS infrastructure and, when
compared to its predecessor King, obtains delay samples without bias in the presence
of distant authoritative servers and forwarders, consumes half the bandwidth, and
reduces the impact on caches at remote servers by several orders of magnitude.
Motivated by recent interest in the literature and our need to find remote DNS
nameservers, we next address Internet-wide service discovery by developing IRLscanner,
whose main design objectives have been to maximize politeness at remote networks,
allow scanning rates that achieve coverage of the Internet in minutes/hours
(rather than weeks/months), and significantly reduce administrator complaints. Using
IRLscanner and 24-hour scan durations, we perform 20 Internet-wide experiments
using 6 different protocols (i.e., DNS, HTTP, SMTP, EPMAP, ICMP and UDP
ECHO). We analyze the feedback generated and suggest novel approaches for reducing
the amount of blowback during similar studies, which should enable researchers
to collect valuable experimental data in the future with significantly fewer hurdles.
We finally turn our attention to Intrusion Detection Systems (IDS), which are
often tasked with detecting scans and preventing them; however, it is currently unknown
how likely an IDS is to detect a given Internet-wide scan pattern and whether
there exist sufficiently fast stealth techniques that can remain virtually undetectable
at large scale. To address these questions, we propose a novel model for the window-expiration
rules of popular IDS tools (i.e., Snort and Bro), derive the probability that
existing scan patterns (i.e., uniform and sequential) are detected by each of these
tools, and prove the existence of stealth-optimal patterns.
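A "uniform" scan pattern of the kind analyzed above can be generated by enumerating an address block in a pseudorandom full-cycle order, so that consecutive probes land in different subnets. The sketch below uses a full-period linear congruential generator over a power-of-two block (multiplier ≡ 1 mod 4, increment odd); it is an illustrative scan-order generator under those assumptions, not IRLscanner's own implementation.

```python
def uniform_scan_order(k, mult=1664525, inc=1013904223):
    """Yield all 2**k addresses of a block exactly once, in
    pseudorandom order, via a full-period LCG modulo 2**k.
    Full period holds because inc is odd and mult - 1 is
    divisible by 4 (Hull-Dobell conditions for m = 2**k)."""
    n = 1 << k
    x = 0
    for _ in range(n):
        yield x
        x = (mult * x + inc) % n

order = list(uniform_scan_order(8))       # a /24-sized toy block
assert sorted(order) == list(range(256))  # every address exactly once
print(order[:5])
```

Spreading probes this way is what distinguishes a uniform pattern from a sequential sweep, and it is precisely the difference the detection-probability analysis quantifies.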
Internet traffic prediction: models and applications
With the rise of Internet measurement, traffic prediction has established itself as one of its most important branches. It is a powerful tool that helps in the design, deployment and management of networks, as well as in traffic engineering and the control of quality-of-service parameters. The objective of this thesis is to study prediction techniques, evaluate the performance of prediction models, and apply them to queue management and loss-rate control in burst-switched networks. We analyze the different parameters that improve prediction performance in terms of error: the amount of data needed to fit the model parameters, their granularity, the number of model inputs, and traffic characteristics such as its variance and the packet-size distribution. We also propose a sampling technique called Max-Based Sampling (MBS), and we show that it improves prediction performance while preserving the self-similarity and long-range dependence of the traffic.
The work also covers the use of traffic prediction for traffic management and loss-rate control in burst-switched networks. We propose a new queue management mechanism, called α_SNFAQM, based on traffic prediction; it stabilizes the queue size and thereby controls packet queuing delays. We also propose a new technique to guarantee quality of service, in terms of loss rate, in burst-switched networks; it combines traffic modeling and prediction with feedback control systems and effectively controls the burst loss rate for each class of service. The model is then improved to avoid network feedback by predicting the loss rate at the TCP level.
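One plausible reading of the Max-Based Sampling technique mentioned above is to replace each block of consecutive traffic measurements by the block maximum, which keeps the bursts that drive self-similarity instead of averaging them away. The function name, block size, and sample values are illustrative assumptions, not the thesis's exact formulation.

```python
def max_based_sampling(series, block):
    """Downsample a traffic measurement series by keeping the maximum
    of each block of `block` consecutive samples (MBS sketch)."""
    return [max(series[i:i + block])
            for i in range(0, len(series), block)]

traffic = [3, 9, 4, 2, 8, 5, 7, 1, 6]   # e.g. bytes per interval
print(max_based_sampling(traffic, 3))   # [9, 8, 7]
```

Averaging the same blocks would give [5.33, 5.0, 4.67], smoothing out exactly the peaks that a predictor for queue sizing needs to see.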
AUTHOR KEYWORDS: traffic modeling and prediction, sampling techniques, queue management, burst-switched networks, loss-rate control, quality of service, control theory