
    STCP: A New Transport Protocol for High-Speed Networks

    Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. Much prior work has modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently exploit the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable performance improvement over other TCP variants such as the conventional TCP protocol and Layered TCP (LTCP).
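The core idea, growing the congestion window as if several virtual AIMD flows shared one connection, can be sketched as follows. This is a minimal illustration of layered AIMD probing, not STCP's actual algorithm; the parameter names and the fixed layer count are assumptions.

```python
# Minimal sketch of layered AIMD probing (illustrative, not STCP's code):
# a classic AIMD flow adds `alpha` segments per RTT and halves on loss,
# while a stratified sender with K virtual layers grows K times faster.

def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One RTT of classic AIMD: additive increase, multiplicative decrease."""
    return cwnd * beta if loss else cwnd + alpha

def stratified_step(cwnd, loss, layers=4, alpha=1.0, beta=0.5):
    """One RTT of a K-layer sender: each virtual layer contributes its own
    additive increase, so bandwidth probing is K times more aggressive."""
    return cwnd * beta if loss else cwnd + layers * alpha

# After 10 loss-free RTTs starting from cwnd = 10:
w_std = w_str = 10.0
for _ in range(10):
    w_std = aimd_step(w_std, loss=False)
    w_str = stratified_step(w_str, loss=False)
# w_std == 20.0, w_str == 50.0
```

The gap between the two trajectories is exactly the point of the abstract: on a high bandwidth-delay-product path, plain AIMD takes far longer to reach the available capacity.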

    TCP Veno: TCP enhancement for transmission over wireless access networks


    A study of the effects of TCP designs on server efficiency and throughputs on wired and wireless networks.

    Yeung, Fei-Fei. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 144-146). Abstracts in English and Chinese.

    Part I: A New Socket API for Enhancing Server Efficiency
    - Chapter 1: Introduction (background; deficiencies of Nagle's algorithm; preventing small packets via the application layer; minimum and maximum delay in the TCP buffer; new socket API; scope of research and summary of contributions)
    - Chapter 2: Background (review of Nagle's algorithm and its inherent problems; the Minshall, Minshall et al., Borman, and Jeffrey et al. modifications, including the EOM, MORE, and DLDET variants; comparison between our proposal and related work)
    - Chapter 3: Min-Delay-Max-Delay TCP Buffering (why enabling Nagle's algorithm alone is not a solution; advantages of min-delay TCP-layer buffering versus application-layer buffering and of max-delay TCP buffering versus Nagle's algorithm; interaction with Nagle's algorithm; when to apply the proposed scheme; new socket option description; implementation of the small packet transmission decision logic and the modified API)
    - Chapter 4: Experiments (effect of the kernel buffering mechanism on service time; performance of the min-delay-max-delay scheme, including network setup, traffic model, and delay measurement; efficiency of a busy server; limiting delay by setting TCP_MAXDELAY; sensitivity to data size per invocation of send(), to minimum delay, and to round-trip time)
    - Chapter 5: Conclusion

    Part II: Two Analytical Models for a Refined TCP Algorithm (TCP Veno) for Wired/Wireless Networks
    - Chapter 1: Introduction (background; motivation and the two analytical models)
    - Chapter 2: Background (the TCP Veno algorithm: packet loss type identification and the refined AIMD algorithm with random loss and congestion management; a simple model of TCP Reno; stochastic modeling of TCP Reno over lossy channels)
    - Chapter 3: Two Analytical Models (a simple model and a Markov model, each covering the random-loss-only, congestion-loss-only, and general random-plus-congestion cases; congestion window evolution; average throughput formulation)
    - Chapter 4: Comparison with Experimental Results and Discussions (throughput versus random loss probability, normalized buffer size, and bandwidth in asymmetric networks)
    - Chapter 5: Sensitivity of TCP Veno Throughput to Various Parameters (multiplicative decrease factor α; number of backlogs β and fractional increase factor γ)
    - Chapter 6: Conclusions
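Part I of the thesis builds on the small-packet decision that Nagle's algorithm makes inside the TCP send path. As context for the proposed min-delay/max-delay refinement, a minimal sketch of the baseline RFC 896 behaviour follows; the function name and the fixed MSS value are illustrative, not the thesis's implementation.

```python
# Hedged sketch of the small-packet decision in Nagle's algorithm
# (RFC 896 behaviour, simplified; names and MSS value are illustrative).

MSS = 1460  # typical Ethernet maximum segment size, in bytes

def nagle_can_send(pending_bytes, unacked_data):
    """Send immediately if a full segment is ready, or if nothing is
    in flight; otherwise hold the small packet until an ACK arrives."""
    if pending_bytes >= MSS:
        return True          # full segment: always send
    return not unacked_data  # small segment: only when the pipe is empty

# A 100-byte write with data already in flight is buffered, which is
# the delay the thesis's min/max-delay socket options aim to bound.
```

The thesis's TCP_MAXDELAY-style option can be read as replacing the open-ended "wait for an ACK" branch with a bounded timer.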

    Novel methods of utilizing Jitter for Network Congestion Control

    This paper proposes a novel paradigm for network congestion control. Instead of the perpetual conflict among flows that TCP induces, it presents a proof-of-concept protocol, the first of its kind, that enables inter-flow communication without infrastructure support, through a side channel constructed on generic FIFO queue behaviour. This allows independent flows passing through the same bottleneck queue to communicate and rapidly achieve fair capacity sharing and a stable equilibrium state.
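The paper's side channel is built on timing behaviour that every flow through a shared FIFO queue can observe. As a neutral illustration of the kind of jitter signal involved (this is the standard RFC 3550 running jitter estimator, not the paper's protocol), consider:

```python
# Illustrative only: the RFC 3550 running jitter estimator over
# successive transit-time differences, the kind of queue-induced
# timing signal a FIFO side channel could modulate and observe.

def rtp_jitter(transit_times):
    """J += (|D| - J)/16 for each successive transit-time difference,
    giving a smoothed estimate of inter-packet delay variation."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# Perfectly even transit times produce zero jitter:
# rtp_jitter([10, 10, 10]) -> 0.0
```

A flow that deliberately perturbs its send spacing changes this measurable quantity for every other flow in the same queue, which is what makes infrastructure-free signalling plausible.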

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area; however, a widely accepted and adopted solution has yet to emerge. The difficulties lie in compatibility, scalability, computational complexity, and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement and pursues an end-to-end solution framework for the problem. The most noticeable cause of TCP's performance degradation in wireless networks is the higher packet loss rate compared to traditional wired networks. Packet loss type differentiation has therefore been the focus of many proposed TCP performance enhancement schemes. Studies conducted for this dissertation suggest that, besides standard TCP's inability to discriminate congestion packet losses from losses caused by wireless link errors, standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through adaptive congestion control and active queue management. By end-to-end, it means a solution that does not require routers to be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance has been evaluated through extensive simulations.
    TCP-Jersey consists of an adaptive congestion control algorithm at the source based on the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source that carefully examines the congestion marks carried by duplicate acknowledgment (DUPACK) packets. Several improvements to TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. A stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted, and the analytical results have been verified by simulations. The performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, with encouraging results. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based, as opposed to TCP's window-based, congestion control, providing a design platform for applications, such as real-time multimedia, that do not use TCP as their transport protocol yet need to control network congestion and combat packet losses in wireless networks.
    In conclusion, the framework presented in this dissertation, combining adaptive congestion control and active queue management to address TCP's performance degradation in wireless networks, has been shown to be a promising answer to the problem, owing to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, high effectiveness, and low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new transport protocol design. It is an end-to-end approach to the performance degradation problem, since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from solutions that rely on link-layer error notifications for packet loss differentiation. It is unique among proposed end-to-end solutions in that it differentiates packet losses caused by wireless link errors from congestion-induced losses directly from the explicit congestion indication marks in DUPACK packets, rather than inferring the loss type from packet delay or delay jitter as many other proposals do, or undergoing a computationally expensive offline training of a classification model (e.g., an HMM), or a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, while adding a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks.
    Limitations of the proposed solution architecture and areas for future research are also addressed.
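The ARE component described above is an adaptive filter over packet inter-arrival times. A minimal sketch of such an RTT-weighted rate filter follows; the exact update rule here is an assumption in the spirit of the description, not TCP-Jersey's published code.

```python
# Hedged sketch of an achievable-rate estimator in the spirit of
# TCP-Jersey's ARE: an adaptive low-pass filter over packet
# inter-arrival times, weighted by the round-trip time.
# The update rule below is an assumption, not the dissertation's code.

def are_update(rate, bytes_acked, dt, rtt):
    """Blend the previous estimate with the newest sample: the longer
    the inter-arrival gap dt relative to the RTT, the more weight the
    new sample gets relative to the running estimate."""
    return (rtt * rate + bytes_acked) / (dt + rtt)

# A steady stream of 1000 bytes every 10 ms with a 100 ms RTT should
# converge toward 1000 / 10 = 100 bytes per ms:
r = 0.0
for _ in range(200):
    r = are_update(r, bytes_acked=1000, dt=10.0, rtt=100.0)
```

The fixed point of this update is the true arrival rate, which is why a sender can use it to pick a safe congestion window after a wireless (non-congestion) loss instead of blindly halving.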

    A User-level, Reliable and Reconfigurable Transport Layer Protocol

    Over the past 15 years, the Internet has proven itself to be one of the most influential inventions humankind has ever conceived. Its success can be largely attributed to its stability and ease of access. Among the various technologies that constitute the Internet, TCP/IP can be regarded as the cornerstone of its impressive scalability and stability. Many researchers have studied, and continue to study, the optimization of TCP's performance in various network environments. This thesis presents an alternative transport layer protocol called RRTP, designed to provide reliable transport layer services to software applications. The motivation for this work comes from the fact that the most commonly used versions of TCP perform unsatisfactorily when deployed over non-conventional network platforms such as cellular/wireless, satellite, and long fat pipe networks. These non-conventional networks usually have higher latency and link failure rates than conventional wired networks, and the classic versions of TCP are unable to adapt to these characteristics. This thesis addresses the problem by introducing a user-level, reliable, and reconfigurable transport layer protocol that runs on top of UDP and attends to the characteristics of non-conventional networks that TCP by default ignores. A novel aspect of RRTP lies in identifying three key characteristic parameters of a network and using them to optimize its performance. The single most important contribution of this work is its empirical demonstration that parameter-based, user-configurable flow-control and congestion-control algorithms are highly effective at adapting to and fully utilizing various networks. This is demonstrated through experiments that benchmark the performance of RRTP against that of TCP on simulated as well as real-life networks.
    The experimental results indicate that RRTP's performance consistently matches or exceeds TCP's on all major network platforms. This leads to the conclusion that a user-level, reliable, and reconfigurable transport-layer protocol possessing the essential characteristics of RRTP would serve as a viable replacement for TCP over today's heterogeneous network platforms.
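The idea of a user-level protocol whose reliability behaviour is reconfigured per network can be sketched as follows. This is not RRTP's design: the profile names, parameter values, and stop-and-wait logic are illustrative assumptions showing only the shape of knob-driven adaptation over an unreliable datagram service.

```python
# Illustrative sketch (not RRTP): a reconfigurable stop-and-wait sender
# whose timeout and retry budget are tuned per network profile, the kind
# of user-level adaptation a protocol layered over UDP can expose.

PROFILES = {
    # name: (timeout_s, max_retries) - values are purely illustrative
    "wired":     (0.2, 4),
    "satellite": (1.5, 8),   # long RTT: wait longer before resending
    "wireless":  (0.3, 10),  # lossy link: allow more retransmissions
}

def send_reliably(payload, profile, send_fn):
    """Call send_fn(payload, timeout) until it reports an ACK or the
    profile's retry budget is exhausted. Returns the attempts used."""
    timeout, max_retries = PROFILES[profile]
    for attempt in range(1, max_retries + 1):
        if send_fn(payload, timeout):
            return attempt
    raise TimeoutError("no ACK within retry budget")

# A link that drops the first two transmissions succeeds on the third:
drops = iter([False, False, True])
attempts = send_reliably(b"hello", "wireless", lambda p, t: next(drops))
# attempts == 3
```

Because the same code path serves every profile, switching networks is a configuration change rather than a kernel change, which is the practical advantage a user-level protocol claims over in-kernel TCP.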

    Improved algorithms for TCP congestion control

    Reliable and efficient data transfer on the Internet is an important issue. Since the late 1970s, the protocol responsible for it has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have maintained the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g., fibre optics) in high-speed, long-delay set-ups (e.g., cross-Atlantic links) and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work in three ways. First, tackling TCP's high-speed, long-delay problem, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack) and then propose our own variant, TCP-Gentle. Second, tackling the challenge of differentiating wireless loss from congestive loss in a passive way, we propose a novel loss differentiation algorithm that quantifies the noise in packet inter-arrival times and uses this information, together with the span (the ratio of maximum to minimum packet inter-arrival times), to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, extending the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g., a variable multiplicative decrease), we undertake a stability analysis of the new version of the model.
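The span statistic and the adaptive decrease factor can be sketched as follows. The span computation matches the definition in the abstract; the threshold and the decision mapping are illustrative assumptions, not the dissertation's predefined logical formula.

```python
# Hedged sketch of span-based loss differentiation; the threshold and
# the decision mapping are assumptions, not the dissertation's formula.

def inter_arrival_span(arrivals):
    """Span = ratio of maximum to minimum successive inter-arrival times
    (the definition given in the abstract)."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return max(gaps) / min(gaps)

def decrease_factor(span, noisy, congested_beta=0.5, wireless_beta=0.8):
    """Choose the multiplicative-decrease factor on loss: a large span
    with low timing noise suggests queue build-up (congestion), so back
    off hard; otherwise treat the loss as a random wireless error and
    reduce the window more gently."""
    if span > 2.0 and not noisy:
        return congested_beta
    return wireless_beta

# Evenly spaced arrivals (span 1.0) with noisy timing -> gentle backoff:
beta = decrease_factor(inter_arrival_span([0, 10, 20, 30]), noisy=True)
# beta == 0.8
```

The point of adapting the factor, rather than always halving, is that a gentler decrease on wireless-error losses avoids the chronic under-utilization standard TCP suffers on lossy links.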

    Trustworthiness Mechanisms for Long-Distance Networks in Internet of Things

    This thesis aims at achieving reliable data exchange over a harsh environment by improving its trustworthiness, through the design of a complete model that takes into account the different layers of trustworthiness and through the implementation of the model's associated countermeasures. The thesis focuses on the use case of the SHETLAND-NET project, which aims to deploy a hybrid Internet of Things (IoT) architecture with LoRa and Near Vertical Incidence Skywave (NVIS) communications to offer a telemetry service for permafrost monitoring in Antarctica. To accomplish the thesis objectives, first, a review of the state of the art in trustworthiness is carried out to propose a definition and scope for the term. From this, a four-layer trustworthiness model is designed, with each layer characterized by its scope, its metric for trustworthiness accountability, its countermeasures for trustworthiness improvement, and its interdependencies with the other layers. This model enables trustworthiness accountability and assessment of the Antarctic use case. Given the harsh conditions and the limitations of the technology used in this case, the model is validated and the telemetry service is evaluated through simulations in Riverbed Modeler.
    To obtain anticipated values of the expected trustworthiness, the proposed architecture has been modeled to evaluate its performance with different configurations prior to deployment in the field. The architecture goes through three major iterations of trustworthiness improvement. In the first iteration, the use of social trust management and consensus mechanisms is explored to take advantage of sensor redundancy. In the second iteration, the use of modern transport protocols is evaluated for the Antarctic use case. The final iteration of this thesis assesses the use of a Delay Tolerant Network (DTN) architecture based on the Bundle Protocol (BP) to improve the system's trustworthiness. Finally, a Proof of Concept (PoC) with real hardware, deployed in the 2021-2022 Antarctic campaign, is presented, describing the functional tests performed in Antarctica and Catalonia.
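The four-layer model assigns each layer its own metric and interdependencies. As a purely illustrative sketch of aggregating per-layer scores into one figure (the layer names and the geometric-mean aggregation are assumptions, not the thesis's model), consider:

```python
# Purely illustrative sketch: combining per-layer trustworthiness
# scores (each in [0, 1]) into a single figure. The layer names and
# the geometric-mean aggregation are assumptions, not the thesis's model.

import math

LAYERS = ["device", "communication", "data", "application"]  # hypothetical

def trustworthiness(scores):
    """Geometric mean of the layer scores: a single weak layer drags
    the overall score down, reflecting that end-to-end trust depends
    on every layer holding up."""
    assert set(scores) == set(LAYERS)
    product = math.prod(scores[layer] for layer in LAYERS)
    return product ** (1 / len(LAYERS))

t = trustworthiness({"device": 0.9, "communication": 0.9,
                     "data": 0.9, "application": 0.9})
```

A multiplicative aggregation is one natural choice here because, as in the Antarctic use case, a failure in any single layer (e.g., an unreliable NVIS link) limits the trust that can be placed in the delivered telemetry regardless of how well the other layers perform.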