
    The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms

    Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst- or best-case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for automatic synthesis of worst- and best-case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. The algorithms used in our method utilize implicit backward search with branch-and-bound techniques, starting from given target events, which reduces the search complexity drastically. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way of synthesizing worst-case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average-case analyses. We hope our method will serve as a model for applying systematic scenario generation to other multicast protocols. Comment: 24 pages, 10 figures; to appear in IEEE/ACM Transactions on Networking (ToN)
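
    The core of the method is a backward search over a global FSM, pruned with branch and bound. The sketch below illustrates that search pattern on a generic inverted transition relation; the cost model, state representation, and function names are illustrative assumptions, not the paper's actual FOTG algorithm.

from heapq import heappush, heappop
from itertools import count

def backward_search(target, predecessors, initial_states, bound):
    """Search backward from a target event toward an initial state.

    predecessors(state) yields (prev_state, step_cost) pairs, i.e. the
    inverted transition relation of the global FSM.  Branches whose
    accumulated cost (e.g. message overhead) exceeds `bound` are pruned,
    and the bound tightens whenever a complete scenario is found.
    Returns (cost, scenario) for the cheapest scenario found, or None.
    """
    tie = count()                          # heap tie-breaker for equal costs
    frontier = [(0, next(tie), [target])]  # (cost so far, tie, reversed path)
    best = None
    while frontier:
        cost, _, path = heappop(frontier)
        if cost > bound:                   # branch-and-bound pruning
            continue
        state = path[-1]
        if state in initial_states:        # a startable stress scenario
            best, bound = (cost, list(reversed(path))), cost
            continue
        for prev, step in predecessors(state):
            if prev not in path:           # no revisits along one branch
                heappush(frontier, (cost + step, next(tie), path + [prev]))
    return best

    For a real protocol model, predecessors would invert the FSM's transition relation, and step costs would encode the performance criterion under study, such as response-message overhead or response time.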

    Congestion mitigation in LTE base stations using radio resource allocation techniques with TCP end to end transport

    As of 2019, Long Term Evolution (LTE) is the chosen standard for most mobile and fixed wireless data communication. The next generation of standards, known as 5G, will encompass the Internet of Things (IoT), which will add more wireless devices to the network. Due to an exponential increase in the number of wireless subscriptions, an exponential increase in data traffic is also expected in the next few years. Most of these devices will use the Transmission Control Protocol (TCP), a network protocol for delivering internet data to users. Owing to its reliable payload delivery and congestion management, TCP is the most commonly used transport protocol. However, TCP's ability to combat network congestion has certain limitations, especially in a wireless network, because wireless networks are less reliable than fixed-line networks for data delivery owing to the last-mile radio interface. LTE uses various error-correction techniques for reliable data delivery over the air interface. These cause other issues, such as excessive latency and queuing in the base station, leading to degraded throughput for users and congestion in the network. Traditional methods of dealing with congestion, such as tail drop, can be inefficient and cumbersome, so adequate congestion mitigation mechanisms are required. The LTE standard pre-empts network congestion with a mechanism known as the Discard Timer, and other algorithms, such as Random Early Detection (RED), are also used for congestion mitigation. However, these mechanisms rely on configured parameters and only work well within certain regions of operation; if the parameters are not set correctly, the TCP links can experience congestion collapse. In this thesis, the limitations of existing LTE congestion mitigation mechanisms such as the Discard Timer and RED are explored. A different mechanism for analysing the effects of using control theory for congestion mitigation is developed. Finally, congestion mitigation in LTE networks is addressed using radio resource allocation techniques, with non-cooperative game theory as the underlying mathematical framework. In doing so, two key end-to-end performance measurements for the game-theoretic models were identified: the total end-to-end delay and the overall throughput of each individual TCP link. An end-to-end wireless simulator model, with a radio access network using LTE and a TCP-based backbone to the end server, was developed in MATLAB and used as a baseline for testing each of the congestion mitigation mechanisms. The thesis also provides a comparison and performance evaluation of the congestion mitigation models developed using existing techniques (such as the Discard Timer and RED), control theory and game theory.
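
    Of the mechanisms named above, RED is compact enough to sketch. The fragment below shows the classic drop decision (an EWMA of the queue length mapped to a drop probability between two thresholds); the parameter values are illustrative assumptions, and real RED also ages the average across idle periods, which is omitted here.

import random

class RedQueue:
    """Classic RED drop decision; thresholds and weights are illustrative."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w = max_p, w
        self.avg = 0.0        # EWMA of the instantaneous queue length
        self.count = 0        # packets enqueued since the last drop
        self.queue = []

    def enqueue(self, pkt):
        """Return True if pkt was queued, False if RED dropped it."""
        self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
        if self.avg >= self.max_th:
            self.count = 0
            return False                       # forced drop
        if self.avg >= self.min_th:
            p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            p_a = p_b / max(1e-9, 1 - self.count * p_b)  # spreads drops out
            self.count += 1
            if random.random() < p_a:
                self.count = 0
                return False                   # probabilistic early drop
        else:
            self.count = 0
        self.queue.append(pkt)
        return True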

    A survey of performance enhancement of transmission control protocol (TCP) in wireless ad hoc networks

    Transmission control protocol (TCP), which provides reliable end-to-end data delivery, performs well in traditional wired network environments but not in wireless ad hoc networks. Compared to wired networks, wireless ad hoc networks have specific characteristics such as node mobility and a shared medium. Owing to these characteristics, TCP faces particular problems with, for example, route failure, channel contention and high bit error rates; these factors are responsible for the performance degradation of TCP in wireless ad hoc networks. The research community has produced a wide range of proposals to improve the performance of TCP in wireless ad hoc networks, and this article presents a survey of these proposals. A classification of TCP improvement proposals for wireless ad hoc networks is presented, which makes it easy to compare proposals falling under the same category, and tables summarizing the approaches are provided for quick overview. Possible directions for further improvements in this area are suggested in the conclusions. The aim of the article is to enable the reader to quickly acquire an overview of the state of TCP in wireless ad hoc networks. This study is partly funded by Kohat University of Science & Technology (KUST), Pakistan, and the Higher Education Commission, Pakistan.
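
    The survey's starting point, that TCP reads every loss as congestion, can be made concrete with the well-known Mathis et al. approximation, which bounds steady-state TCP throughput by the loss rate p whether or not those losses signal real congestion. The sketch below is a back-of-the-envelope illustration with assumed parameter values, not a result from the article.

from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, p):
    """Approximate steady-state TCP throughput in bit/s:
    (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(p)

# Loss rates in the range a noisy wireless channel can produce:
for p in (1e-4, 1e-3, 1e-2):
    print(f"p={p:g}: ~{mathis_throughput(1460, 0.1, p) / 1e6:.2f} Mbit/s")

    A hundredfold increase in random loss thus costs roughly a factor of ten in throughput even when the bottleneck queue is empty, which is why non-congestion losses on ad hoc paths degrade TCP so sharply.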

    Kompics: a message-passing component model for building distributed systems

    The Kompics component model and programming framework was designed to simplify the development of increasingly complex distributed systems. Systems built with Kompics leverage multi-core machines out of the box and can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic debugging and reproducible performance evaluation of unmodified Kompics distributed systems. We describe the component model and show how to program and compose event-based distributed systems. We present the architectural patterns and abstractions that Kompics facilitates and highlight a case study of a complex distributed middleware that we have built with Kompics. We show how our approach enables systematic development and evaluation of large-scale and dynamic distributed systems.
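
    Kompics itself is a Java framework; the Python sketch below only mimics its central idea of components that exchange typed events through ports, with handlers subscribed per event type. Every name in it is invented for illustration rather than taken from the Kompics API, and real Kompics runs handlers on a multi-core scheduler, whereas here they execute synchronously.

from collections import defaultdict

class Port:
    """A channel through which components exchange events."""
    def __init__(self):
        self.handlers = defaultdict(list)  # event type -> subscribed handlers

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def trigger(self, event):
        for handler in self.handlers[type(event)]:
            handler(event)                 # synchronous; Kompics schedules these

class Ping: pass
class Pong: pass

class Ponger:
    """Replies to every Ping with a Pong on the same port."""
    def __init__(self, port):
        self.port = port
        port.subscribe(Ping, self.on_ping)

    def on_ping(self, _event):
        self.port.trigger(Pong())

class Pinger:
    """Sends a Ping and reports the Pong it gets back."""
    def __init__(self, port):
        port.subscribe(Pong, lambda _e: print("Pinger: received Pong"))
        port.trigger(Ping())

channel = Port()
Ponger(channel)
Pinger(channel)                            # prints "Pinger: received Pong"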

    Towards sender-based TFRC

    Pervasive communications are increasingly sent over mobile devices and personal digital assistants. This trend was observed during the last football World Cup, when cellular phone service providers measured a significant increase in multimedia traffic. To better carry multimedia traffic, the IETF standardized a new TCP Friendly Rate Control (TFRC) protocol. However, the current receiver-based TFRC design is not well suited to resource-limited end systems. We propose a scheme to shift resource allocation and computation to the sender. This sender-based approach led us to develop a new algorithm for loss notification and loss rate computation. We demonstrate the gain obtained in terms of memory requirements and CPU processing compared to the current design. Moreover, this shift resolves security issues raised by classical TFRC implementations. We have implemented this new sender-based TFRC, named TFRC_light, and conducted measurements under real-world conditions.
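
    Whichever endpoint computes it, TFRC's allowed sending rate comes from the TCP throughput equation of RFC 5348; in a sender-based design the sender evaluates it locally once it has derived the loss event rate from the receiver's loss notifications. The sketch below evaluates that equation with the RFC's recommended t_RTO = 4R simplification; the example numbers are assumptions.

from math import sqrt

def tfrc_rate(s, R, p, b=1):
    """TCP throughput equation (RFC 5348): allowed rate in bytes/s for
    segment size s (bytes), round-trip time R (s), loss event rate p,
    and b packets acknowledged per ACK."""
    t_rto = 4 * R                          # RFC 5348 simplification
    denom = (R * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# e.g. 1460-byte segments, 100 ms RTT, 1% loss events:
print(f"{tfrc_rate(1460, 0.1, 0.01) / 1e3:.0f} kB/s")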

    Probabilistic Routing Protocol for Intermittently Connected Networks

    This document is a product of the Delay Tolerant Networking Research Group and has been reviewed by that group. No objections to its publication as an RFC were raised. This document defines PRoPHET, a Probabilistic Routing Protocol using History of Encounters and Transitivity. PRoPHET is a variant of the epidemic routing protocol for intermittently connected networks that operates by pruning the epidemic distribution tree to minimize resource usage while still attempting to achieve the best-case routing capabilities of epidemic routing. It is intended for use in sparse mesh networks where there is no guarantee that a fully connected path between the source and destination exists at any time, rendering traditional routing protocols unable to deliver messages between hosts. These networks are examples of networks where there is a disparity between the latency requirements of applications and the capabilities of the underlying network (networks often referred to as delay and disruption tolerant). The document presents an architectural overview followed by the protocol specification
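
    PRoPHET's routing state is a per-destination delivery predictability, maintained with three update rules: a direct update on each encounter, aging over time, and a transitive update through the encountered peer's own table. The sketch below implements those three rules with the commonly cited default constants; the surrounding class and method names are invented for illustration.

import time

P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25   # defaults from the PRoPHET paper

class Prophet:
    def __init__(self):
        self.pred = {}                   # destination id -> delivery predictability
        self.last_aged = time.time()

    def age(self, unit=1.0):
        """P(a,b) = P(a,b)_old * gamma^K after K elapsed time units."""
        k = (time.time() - self.last_aged) / unit
        for peer in self.pred:
            self.pred[peer] *= GAMMA ** k
        self.last_aged = time.time()

    def encounter(self, b, b_preds):
        """Apply the direct rule for peer b, then the transitive rule
        using b's own predictability table (exchanged on contact)."""
        self.age()
        p_ab = self.pred.get(b, 0.0)
        self.pred[b] = p_ab + (1 - p_ab) * P_INIT
        for c, p_bc in b_preds.items():
            if c == b:
                continue
            p_ac = self.pred.get(c, 0.0)
            self.pred[c] = p_ac + (1 - p_ac) * self.pred[b] * p_bc * BETA

    The forwarding decision then compares tables: a bundle for destination d is handed to an encountered peer only if the peer's predictability for d exceeds the current custodian's, which is what prunes the epidemic distribution tree.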

    Full TCP/IP for 8-Bit architectures

    We describe two small and portable TCP/IP implementations fulfilling the subset of RFC 1122 requirements needed for full host-to-host interoperability. Our TCP/IP implementations do not sacrifice any of TCP's mechanisms, such as urgent data or congestion control. They support IP fragment reassembly, and the number of simultaneous connections is limited only by the available RAM. Despite being small and simple, our implementations do not require their peers to have complex, full-size stacks, but can communicate with peers running a similarly light-weight stack. The code size is on the order of 10 kilobytes, and RAM usage can be configured to be as low as a few hundred bytes.
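
    The claim that RAM usage is a compile-time knob can be read as a sizing exercise: a shared packet buffer plus a fixed-size control block per connection. The sketch below is only a back-of-the-envelope illustration under assumed sizes, not the configuration interface of the stacks described.

def max_connections(ram_budget, bufsize=200, per_conn=30, overhead=100):
    """Rough count of simultaneous TCP connections that fit in a RAM
    budget, assuming one shared packet buffer, a fixed global overhead,
    and a fixed-size control block per connection (all sizes assumed)."""
    usable = ram_budget - bufsize - overhead
    return max(0, usable // per_conn)

for ram in (500, 2_000, 8_000):            # bytes of RAM
    print(f"{ram:>5} B RAM -> ~{max_connections(ram)} connections")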