
    Observing TCP dynamics in real networks


    TCP ex Machina: Computer-Generated Congestion Control


    Study and Performance Analysis of LTE MAC Schedulers for M2M

    Cellular systems are forecast to play a fundamental role in the future Machine-to-Machine (M2M) scenario, and 3GPP LTE networks appear to be the de facto standard for machine-type communications. Beyond the opportunities opened by the spread of M2M devices, such as vehicle-to-vehicle communication and environmental monitoring, operators will have to handle a much larger number of connected devices whose traffic does not match today's human-centred traffic characteristics, forcing a revision of the current uplink scheduling procedures. This report presents an extensive study of the literature, covering both standards and M2M-centred proposals, applied to a dense machine scenario. Problems such as drops in Human-to-Human (H2H) throughput are highlighted as the number of M2M devices grows; moreover, channel utilisation in both uplink and downlink drops drastically across the studied scheduler schemes. A new M2M-aware scheduler, designed to maximise medium utilisation, is designed and implemented. The simulation results show that M2M and H2H devices can coexist on the same LTE network, but new schedulers, such as the one presented here, need to be further studied and analysed. This thesis work was carried out during an exchange period at the Norwegian University of Science and Technology in Trondheim, with the collaboration and supervision of Telenor Norge AS.
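
    The abstract does not detail how the proposed M2M-aware scheduler allocates uplink resources, so the following is only a minimal sketch of one way such a scheduler could keep the medium fully utilised while preserving H2H priority; the Request structure, the per-TTI round-robin pass, and all parameters are hypothetical rather than the thesis's algorithm.

    # Hypothetical sketch of an "M2M-aware" uplink scheduler: serve delay-sensitive
    # H2H requests first, then fill the remaining resource blocks with M2M traffic
    # so the medium stays fully utilised. All names and parameters are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Request:
        ue_id: int
        is_m2m: bool
        backlog_rbs: int      # resource blocks needed to drain the UE's buffer

    def schedule_tti(requests: list[Request], total_rbs: int) -> dict[int, int]:
        """Return a {ue_id: allocated_rbs} map for one transmission time interval."""
        allocation: dict[int, int] = {}
        free = total_rbs
        # Pass 1: human (H2H) traffic keeps its usual priority.
        for req in (r for r in requests if not r.is_m2m):
            grant = min(req.backlog_rbs, free)
            if grant:
                allocation[req.ue_id] = grant
                free -= grant
        # Pass 2: M2M devices fill whatever capacity is left, one RB at a time,
        # which keeps channel utilisation high without starving H2H flows.
        m2m = [r for r in requests if r.is_m2m and r.backlog_rbs > 0]
        while free > 0 and m2m:
            req = m2m.pop(0)
            allocation[req.ue_id] = allocation.get(req.ue_id, 0) + 1
            free -= 1
            req.backlog_rbs -= 1
            if req.backlog_rbs > 0:
                m2m.append(req)
        return allocation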

    On modeling and mitigating new breed of DoS attacks

    Denial of Service (DoS) attacks pose serious threats to the Internet, with tremendous impact on our daily lives, which depend heavily on the good health of the Internet. This dissertation aims to achieve two objectives: 1) to model new possibilities for low rate DoS attacks; 2) to develop effective mitigation mechanisms to counter the threat from low rate DoS attacks. A new stealthy DDoS attack model, referred to as the quiet attack, is proposed in this dissertation. The attack traffic consists of TCP traffic only. Botnets, widely used in today's attacks, and newly introduced network feedback control are integral parts of the quiet attack model. The quiet attack shows that short-lived TCP flows can be intentionally misused as attack flows. The dissertation proposes another attack model, referred to as the perfect storm, which uses a combination of UDP and TCP. Better CAPTCHAs are highlighted as a current defense against botnets to mitigate the quiet attack and the perfect storm. A novel time domain technique is proposed that relies on the time difference between subsequent packets of each flow to detect the periodicity of a low rate DoS attack flow. An attacker can easily use IP address spoofing techniques or botnets to launch a low rate DoS attack and fool the detection system. To mitigate such a threat, the dissertation proposes a second detection algorithm that detects a sudden increase in the traffic load of all expired flows within a short period. In the absence of low rate DoS attacks, the traffic load of all expired flows is shown to remain below certain thresholds, which are derived from real Internet traffic analysis. A novel filtering scheme is proposed to drop low rate DoS attack packets. Simulation results confirm attack mitigation using the proposed technique. Future research directions are briefly discussed.
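
    The time domain technique above relies on inter-packet time differences to expose the periodicity of a low rate DoS flow. As a rough illustration only (the dissertation's actual algorithm and thresholds are not reproduced here), a flow could be flagged when its inter-arrival times cluster tightly around a repeating period:

    # Simplified illustration of periodicity detection from packet timestamps.
    # A low rate DoS flow that sends bursts every T seconds produces inter-arrival
    # times with very low relative spread; the threshold below is hypothetical.
    from statistics import mean, pstdev

    def looks_periodic(timestamps: list[float], max_cv: float = 0.1) -> bool:
        """Flag a flow whose inter-packet gaps have a low coefficient of variation."""
        if len(timestamps) < 4:
            return False                      # not enough samples to judge
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        avg = mean(gaps)
        if avg <= 0:
            return False
        cv = pstdev(gaps) / avg               # relative spread of the gaps
        return cv < max_cv                    # tight spread => periodic sender

    # Example: bursts every 1.1 s are flagged as periodic.
    print(looks_periodic([0.0, 1.1, 2.2, 3.3, 4.4]))   # True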

    Moving toward the intra-protocol de-ossification of TCP in mobile networks: Start-up and mobility

    The use of mobile broadband networks has increased significantly in recent years, and even greater growth is expected with the arrival of future 5G capabilities. 5G will provide transmission speeds and latencies never seen before. However, the ability to reach those figures is limited by the management and performance of transport protocols. In this respect, TCP remains the dominant transport protocol, and its different congestion control algorithms (CCAs) are ultimately responsible for the performance obtained. While the various CCAs were originally implemented to address different use cases in fixed networks, none of them was designed to handle the throughput and delay variability of mobile network conditions in an easily deployable way. Since analysing TCP over mobile networks is complex, owing to the many factors that affect performance, our work focuses on two widespread use cases with a significant performance impact: user mobility, as the main characteristic distinguishing mobile from fixed networks, and the performance of TCP's start-up phase, given the predominance of short flows on the Internet. Several works have argued for greater flexibility in the transport layer, building transport services on top of TCP or UDP. However, these proposals have run into limitations arising from architectural dependencies on the underlying protocols (e.g., the impossibility of changing the transport-layer configuration once the transmission has started), facing an "ossified" transport layer. This thesis arises as a response to that limitation, showing that there is room for improvement within the TCP family (intra-protocol) and proposing a framework that partially overcomes the restriction through dynamic selection of the most appropriate CCA. To this end, the main performance-impacting factors for the selected use cases are evaluated and identified in 4G deployments and in low-latency deployments that emulate the potential latencies of future 5G capabilities. These factors serve as heuristics for deciding the most appropriate CCA in the proposed framework. Finally, the proposal is validated in mobility scenarios with two selection options: at the start of the transmission (limited transport-layer flexibility) and dynamically during the transmission (with a flexible transport layer). The thesis concludes that the proposal can bring significant performance improvements by selecting the most appropriate CCA according to the network situation and the requirements of the application layer.
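
    The framework selects a CCA either at connection start or dynamically during the transmission. On Linux, the per-socket TCP_CONGESTION option is one existing hook for this kind of selection; the sketch below only illustrates that mechanism and does not reproduce the thesis's selection heuristics (the CCA choices in pick_cca are placeholders).

    # Minimal illustration of per-connection CCA selection on Linux using the
    # TCP_CONGESTION socket option (the chosen modules must be available).
    # The heuristic mapping below is a placeholder, not the thesis's algorithm.
    import socket

    def pick_cca(rtt_ms: float, short_flow: bool) -> bytes:
        """Toy heuristic standing in for the thesis's decision logic."""
        if short_flow:
            return b"cubic"        # favour a familiar start-up for short flows
        return b"bbr" if rtt_ms > 50 else b"cubic"

    def open_with_cca(host: str, port: int, rtt_ms: float, short_flow: bool) -> socket.socket:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        cca = pick_cca(rtt_ms, short_flow)
        # Select the CCA before connect(); on Linux the option can also be
        # changed later on the live socket, enabling dynamic mid-transfer selection.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cca)
        sock.connect((host, port))
        return sock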

    Evaluation and optimisation of Less-than-Best-Effort TCP congestion control mechanisms

    Increasing use of online software installation, updates, and backup services, as well as the popularity of user-generated content, has increased the demand for bandwidth in recent years. Traffic generated by these applications, when receiving a 'fair share' of the available bandwidth, can impact the responsiveness of delay-sensitive applications. Less-than-Best-Effort TCP congestion control mechanisms aim to allow lower-priority applications to utilise excess bandwidth with minimal impact on regular TCP carrying delay-sensitive traffic. However, no previous study has evaluated the performance of a large number of mechanisms in this class. This thesis quantifies the performance of existing Less-than-Best-Effort TCP congestion control mechanisms and proposes a new mechanism to improve their performance on paths with high delay. The study first evaluated seven Less-than-Best-Effort congestion control mechanisms in realistic scenarios under a range of network conditions, in a Linux testbed incorporating wired Ethernet and 802.11n wireless links. The seven mechanisms evaluated were: Apple LEDBAT, CAIA Delay-Gradient (CDG), RFC 6817 LEDBAT, Low Priority, Nice, Westwood-LP, and Vegas. Of these, only four had existing implementations for modern operating systems; the remaining three (Apple LEDBAT, Nice, and Westwood-LP) were implemented based on published descriptions and available code fragments to facilitate the evaluation. The results suggest that Less-than-Best-Effort congestion control mechanisms can be divided into two categories: regular TCP-like mechanisms and low-impact mechanisms. Of the low-impact mechanisms, two were identified as having desirable performance characteristics: Nice and CDG. Nice provides background throughput comparable to regular TCP while maintaining low queuing delay in low path-delay settings. CDG has the least impact on regular TCP traffic, at the expense of reduced throughput. In high path-delay settings, CDG's throughput reductions are exacerbated, while Nice has a greater impact on regular TCP traffic. To address the very low throughput of existing Less-than-Best-Effort congestion control mechanisms in high path-delay settings, a new Less-than-Best-Effort TCP congestion control algorithm was developed and implemented: Yield TCP. Yield uses elements of a Proportional-Integral controller to better interpret and respond to changes in queuing delay, while also reducing the impact on regular TCP traffic relative to TCP-like mechanisms. Source code for the implementation of Yield developed for this research has also been made available. The evaluation of Yield indicates that it successfully addresses the low throughput of low-impact Less-than-Best-Effort mechanisms in high-delay settings, while also reducing the impact on foreground traffic compared to regular TCP-like congestion control mechanisms. Yield also performs similarly to Nice in low-delay settings, while achieving greater intra-protocol fairness than Nice across all settings. These results indicate that Yield addresses the weaknesses of Nice and CDG and is a promising alternative to existing Less-than-Best-Effort congestion control algorithms.
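
    Yield is described as using elements of a Proportional-Integral controller to react to queuing delay. The fragment below is only a generic PI-style window update under assumed gains, target delay, and names; it is not Yield's published algorithm.

    # Generic PI-style congestion window update driven by queuing delay.
    # Gains, target delay, and variable names are assumptions for illustration;
    # this is not the actual Yield TCP algorithm.
    class PiWindowController:
        def __init__(self, kp: float = 0.5, ki: float = 0.05, target_delay: float = 0.025):
            self.kp = kp                    # proportional gain
            self.ki = ki                    # integral gain
            self.target = target_delay      # queuing delay target in seconds
            self.integral = 0.0

        def update(self, cwnd: float, queuing_delay: float, dt: float) -> float:
            """Return the new congestion window (in packets) after one delay sample."""
            error = self.target - queuing_delay     # positive => queue shorter than target
            self.integral += error * dt
            delta = self.kp * error + self.ki * self.integral
            return max(2.0, cwnd + delta)           # never drop below 2 packets

    # Example: the window shrinks while measured queuing delay exceeds the target.
    ctl = PiWindowController()
    cwnd = 20.0
    for delay in (0.040, 0.035, 0.030, 0.020):
        cwnd = ctl.update(cwnd, delay, dt=0.05)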

    Secure VoIP Performance Measurement

    This project presents a mechanism for the instrumentation of secure VoIP calls. Experiments were run under different network conditions and security systems, with VoIP services such as Google Talk, Express Talk, and Skype under test. The project analysed the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The quality of the audio streams produced was affected by end-to-end delay, jitter, packet loss, and extra processing in the networking hardware and end devices due to Internetworking Layer or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs and box-and-whisker plots for each parameter were drawn, and the graphs were analysed to deduce the quality of each VoIP service. The E-model was used to predict network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict network vulnerabilities. The project also provided a mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by the PESQ-WB scores, the CVSS scores, and the throughput. The experiments demonstrated the relationship among VoIP performance, VoIP security, and VoIP service type. They also suggested that, compared to an unsecured IPIP tunnel, Internetworking Layer security such as IPSec ESP or Transport Layer security such as OpenVPN TLS would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on VoIP voice quality.
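
    The abstract mentions the E-model as a predictor of network readiness. For context, the commonly cited ITU-T G.107 mapping from the E-model rating factor R to an estimated MOS is reproduced below; the example R values are illustrative only.

    # ITU-T G.107 E-model: map the rating factor R to an estimated MOS.
    def r_to_mos(r: float) -> float:
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

    # Illustrative values: R around 90 maps to roughly "good" quality (~4.3 MOS),
    # while R around 50 maps to "poor" quality (~2.6 MOS).
    print(round(r_to_mos(90), 2), round(r_to_mos(50), 2))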

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks—cellular networks, datacenters—and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple Facetime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself—how easy is it to “learn” a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates
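
    Remy searches for congestion control rules that maximise a stated objective over assumed network scenarios. The exact objective used in that work is not restated here; the snippet below only illustrates the general shape of a throughput/delay utility that an objective-driven designer might score candidate protocols against (the log-log form and the delta weight are assumptions).

    # Illustrative throughput/delay utility of the kind an objective-driven
    # protocol designer might optimise. The log-log form and the delta weight
    # are assumptions, not the exact objective used by Remy.
    import math

    def flow_utility(throughput_bps: float, avg_delay_s: float, delta: float = 1.0) -> float:
        """Reward throughput and penalise delay, both on a logarithmic scale."""
        return math.log(throughput_bps) - delta * math.log(avg_delay_s)

    def protocol_score(flows: list[tuple[float, float]], delta: float = 1.0) -> float:
        """Sum the per-flow utilities observed in one simulated scenario."""
        return sum(flow_utility(tput, delay, delta) for tput, delay in flows)

    # Example: two flows; higher delay on the second flow lowers the total score.
    print(protocol_score([(5e6, 0.05), (3e6, 0.20)]))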

    Performance of data aggregation for wireless sensor networks

    This thesis focuses on three fundamental issues that concern data aggregation protocols for periodic data collection in sensor networks: which sensor nodes should report their data, when should they report it, and should they use unicast or broadcast based protocols for this purpose. The issue of when nodes should report their data is considered in the context of real-time monitoring applications. The first part of this thesis shows that asynchronous aggregation, in which the time of each node’s transmission is determined adaptively based on its local history of past packet receptions from its children, outperforms synchronous aggregation by providing lower delay for a given end-to-end loss rate. Second, new broadcast-based aggregation protocols that minimize the number of packet transmissions, relying on multipath delivery rather than automatic repeat request for reliability, are designed and evaluated. The performance of broadcast-based aggregation is compared to that of unicast-based aggregation, in the context of both real-time and delay-tolerant data collection. Finally, this thesis investigates the potential benefits of dynamically, rather than semi-statically, determining the set of nodes reporting their data, in the context of applications in which coverage of some monitored region is to be maintained. Unicast and broadcast-based coverage-preserving data aggregation protocols are designed and evaluated. The performance of the proposed protocols is compared to that of data collection protocols relying on node scheduling
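
    In the asynchronous aggregation described above, each node times its own transmission from its local history of packet receptions from its children. The sketch below shows one plausible way to derive such a send deadline from recent child arrival times; the EWMA weighting and guard margin are hypothetical, not the thesis's exact rule.

    # Hypothetical timing rule for asynchronous aggregation: a parent waits until
    # shortly after the time by which its children have usually reported, then
    # sends one aggregated packet. The EWMA weight and guard margin are assumptions.
    class AsyncAggregationTimer:
        def __init__(self, alpha: float = 0.2, guard_s: float = 0.05):
            self.alpha = alpha            # EWMA weight for new observations
            self.guard = guard_s          # safety margin after the expected last child
            self.expected_last = 0.0      # expected offset of the latest child report

        def observe_child_report(self, offset_s: float) -> None:
            """Record a child's report time, as an offset from the round start."""
            if offset_s > self.expected_last:
                # React quickly to children that report later than expected.
                self.expected_last = offset_s
            else:
                # Otherwise let the estimate drift down slowly with an EWMA.
                self.expected_last = (1 - self.alpha) * self.expected_last + self.alpha * offset_s

        def send_deadline(self) -> float:
            """Offset (from round start) at which this node transmits its aggregate."""
            return self.expected_last + self.guard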