
    CLIFT: a Cross-Layer InFormation Tool for Latency Analysis Based on Real Satellite Physical Traces

    New generations of mobile technology achieve high goodput, which results in diverse application profiles exploiting various resource providers (Wi-Fi, 4G, 5G, . . . ). Badly set parameters on one network component may severely increase the transmission delay and reduce the quality of experience. Cross-layer effects should therefore be investigated to identify the origin of latency. To run cross-layer simulations (from the physical layer up to the application layer), two approaches are possible: (1) use physical layer models, which may not be exhaustive enough to drive consistent analysis, or (2) use real physical traces. Driving realistic measurements by using real physical (MAC/PHY) traces inside network simulations is a complex task. We address this problem by introducing the Cross-Layer InFormation Tool (CLIFT), which translates real physical events from a given trace so that they can be used inside a network simulator such as ns-2. Our proposal enables accurate analysis of the impact of link layer reliability schemes (derived from real physical traces) on transport layer performance and on latency. Such an approach enables a better understanding of the interactions between the layers. The main objective of CLIFT is to let us study the protocols introduced at each layer of the OSI model and their interactions. We detail the internal mechanisms and the benefits of this software with a running example on 4G satellite communication scenarios.
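
    As a sketch of the kind of translation CLIFT performs, the snippet below maps per-frame physical-layer outcomes from a trace onto link-layer packet events that a simulator could replay. The trace format, field names, and function are illustrative assumptions, not CLIFT's actual interface.

```python
# Hypothetical sketch of the trace-translation idea behind CLIFT: map
# per-frame physical-layer outcomes onto link-layer packet events that a
# network simulator can replay. The trace format and field names are
# illustrative assumptions, not CLIFT's actual file layout.

def phy_trace_to_link_events(trace_lines, frames_per_packet=1):
    """Yield (timestamp, delivered) events, one per link-layer packet."""
    frames = []
    for line in trace_lines:
        ts_str, status = line.split()        # e.g. "0.002 OK" or "0.002 ERR"
        frames.append((float(ts_str), status == "OK"))
        if len(frames) == frames_per_packet:
            ts = frames[-1][0]               # packet completes with its last frame
            yield ts, all(ok for _, ok in frames)
            frames.clear()

# Example: a short synthetic trace in which one frame is corrupted.
trace = ["0.001 OK", "0.002 ERR", "0.003 OK", "0.004 OK"]
for ts, delivered in phy_trace_to_link_events(trace, frames_per_packet=2):
    print(f"t={ts:.3f}s: packet {'delivered' if delivered else 'lost'}")
```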

    On the impact of link layer retransmission schemes on TCP over 4G satellite links

    We study the impact of reliability mechanisms introduced at the link layer on the performance of transport protocols in the context of 4G satellite links. Specifically, we design a software module that performs realistic analysis of network performance by utilizing real physical layer traces of a 4G satellite service. Based on these traces, our software module produces equivalent link layer traces as a function of the chosen link layer reliability mechanism. We further utilize the link layer traces within the ns-2 network simulator to evaluate the impact of link layer schemes on the performance of selected Transmission Control Protocol (TCP) variants. We consider erasure coding, selective-repeat automatic repeat request (ARQ), and hybrid-ARQ link layer mechanisms, and TCP Cubic, Compound, Hybla, New Reno, and Westwood. We show that, for all target TCP variants, when the throughput of the transport protocol is close to the channel capacity, using the ARQ mechanism is most beneficial for TCP performance. In conditions where the physical channel error rate is high, hybrid-ARQ yields the best performance for all TCP variants considered, with up to 22% improvement compared to other schemes.
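
    To see why retransmission schemes trade residual loss against delay on a long-delay satellite link, consider the following back-of-envelope model; it is an illustrative assumption, not the simulator used in the paper. With per-transmission frame error rate p and at most R retransmissions, selective-repeat ARQ leaves a residual loss of p^(R+1), while each extra attempt costs roughly one more link round trip.

```python
# Back-of-envelope ARQ model (an illustrative assumption, not the paper's
# simulator). With frame error rate p and at most R retransmissions,
# selective-repeat ARQ leaves residual loss p**(R + 1); every further
# attempt costs roughly one more link round trip.

def arq_stats(p, max_retx, link_rtt):
    residual_loss = p ** (max_retx + 1)
    # Mean number of attempts for a frame that is eventually delivered.
    attempts = sum(k * (1 - p) * p ** (k - 1) for k in range(1, max_retx + 2))
    attempts /= 1 - residual_loss
    return residual_loss, attempts * link_rtt

for p in (0.01, 0.1, 0.3):
    loss, delay = arq_stats(p, max_retx=3, link_rtt=0.25)  # ~250 ms GEO-like RTT
    print(f"p={p:.2f}: residual loss={loss:.1e}, mean delivery delay={delay:.2f}s")
```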

    Reducing Internet Latency: A Survey of Techniques and Their Merits

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    Congestion mitigation in LTE base stations using radio resource allocation techniques with TCP end to end transport

    As of 2019, Long Term Evolution (LTE) is the chosen standard for most mobile and fixed wireless data communication. The next generation of standards, known as 5G, will encompass the Internet of Things (IoT), which will add more wireless devices to the network. Due to an exponential increase in the number of wireless subscriptions, an exponential increase in data traffic is also expected over the next few years. Most of these devices will use the Transmission Control Protocol (TCP), a network protocol for delivering Internet data to users. Due to its reliability in delivering data payloads to users and its congestion management, TCP is the most common network protocol in use. However, TCP's ability to combat network congestion has certain limitations, especially in a wireless network, because wireless networks are not as reliable as fixed-line networks for data delivery owing to the last-mile radio interface. LTE uses various error correction techniques for reliable data delivery over the air interface. These cause other issues, such as excessive latency and queuing in the base station, leading to degraded throughput for users and congestion in the network. Traditional methods of dealing with congestion, such as tail-drop, can be inefficient and cumbersome; adequate congestion mitigation mechanisms are therefore required. The LTE standard pre-empts network congestion through a mechanism known as the Discard Timer. Additionally, other algorithms, such as Random Early Detection (RED), are also used for network congestion mitigation. However, these mechanisms rely on configured parameters and only work well within certain regions of operation. If the parameters are not set correctly, the TCP links can experience congestion collapse. In this thesis, the limitations of existing LTE congestion mitigation mechanisms such as the Discard Timer and RED are explored. A different mechanism to analyse the effects of using control theory for congestion mitigation has been developed. Finally, congestion mitigation in LTE networks has been addressed using radio resource allocation techniques with non-cooperative game theory as the underlying mathematical framework. In doing so, two key end-to-end performance measurements for measuring congestion in the game-theoretic models were identified: the total end-to-end delay and the overall throughput of each individual TCP link. An end-to-end wireless simulator model, with the radio access network using LTE and a TCP-based backbone to the end server, was developed in MATLAB and used as a baseline for testing each of the congestion mitigation mechanisms. This thesis also provides a comparison and performance evaluation of the congestion mitigation models developed using existing techniques (such as the Discard Timer and RED), control theory, and game theory.
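
    The RED mechanism referenced above can be summarized compactly: the router tracks an exponentially weighted moving average of its queue length and drops arriving packets with a probability that rises linearly between a minimum and a maximum threshold, signalling congestion to TCP before the buffer overflows. The sketch below is a minimal illustration of that classic algorithm; the parameter values are assumptions, and refinements such as the inter-drop count are omitted.

```python
# Minimal sketch of classic Random Early Detection (RED); the thresholds,
# maximum drop probability, and queue weight below are illustrative
# assumptions, and refinements such as the inter-drop count are omitted.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.queue = []

    def enqueue(self, pkt):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop_p = 0.0
        elif self.avg >= self.max_th:
            drop_p = 1.0
        else:
            drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < drop_p:
            return False        # early drop: congestion signal to the TCP sender
        self.queue.append(pkt)
        return True
```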

    Cross-layer latency-aware and -predictable data communication

    Cyber-physical systems are making their way into more aspects of everyday life. These systems are increasingly distributed and hence require networked communication to cooperatively fulfil control tasks. Providing this in a robust and resilient manner demands latency-awareness and latency-predictability at all layers of the communication and computation stack. This thesis addresses how these two latency-related properties can be implemented at the transport layer to serve control applications in ways that traditional approaches such as TCP or RTP cannot. To this end, the Predictably Reliable Real-time Transport (PRRT) protocol is presented, including its unique features (e.g. partially reliable, ordered, in-time delivery, and latency-avoiding congestion control) and unconventional APIs. This protocol has been intensively evaluated using the X-Lap toolkit, which was specifically developed to support protocol designers in improving the latency, timing, and energy characteristics of protocols in a cross-layer, intra-host fashion. PRRT effectively circumvents latency-inducing bufferbloat using X-Pace, an implementation of the cross-layer pacing approach presented in this thesis. This is shown using experimental evaluations on real Internet paths. Apart from PRRT, this thesis presents means to make TCP-based transport aware of individual link latencies and to increase the predictability of end-to-end delays using Transparent Transmission Segmentation.
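
    The general idea behind latency-avoiding pacing, as employed by approaches such as X-Pace, can be sketched briefly: instead of sending a window of packets back to back and filling bottleneck buffers, the sender spaces transmissions at the estimated bottleneck rate. The snippet below is a generic illustration of sender-side pacing, not PRRT's implementation.

```python
# Generic sketch of sender-side pacing (an illustration of the idea, not
# PRRT's implementation): packets are spaced at the estimated bottleneck
# rate instead of being sent back to back into the bottleneck buffer.
import time

def paced_send(packets, rate_bytes_per_s, send_fn):
    next_slot = time.monotonic()
    for pkt in packets:
        delay = next_slot - time.monotonic()
        if delay > 0:
            time.sleep(delay)                 # wait for this packet's slot
        send_fn(pkt)
        next_slot += len(pkt) / rate_bytes_per_s  # one serialization time later

# Example: pace five 1200-byte packets at 1 MB/s, i.e. 1.2 ms apart.
paced_send([b"x" * 1200] * 5, 1_000_000,
           lambda p: print(f"sent {len(p)} B at t={time.monotonic():.4f}"))
```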

    The application of forward error correction techniques in wireless ATM

    The possibility of providing wireless access to an ATM network promises nomadic users a communication tool of unparalleled power and flexibility. Unfortunately, the physical realization of a wireless ATM system is fraught with technical difficulties, not the least of which is the problem of supporting a traditional ATM protocol over a non-benign wireless link. The objective of this thesis, titled "The Application of Forward Error Correction Techniques in Wireless ATM", is to examine the feasibility of using forward error correction techniques to improve the perceived channel characteristics to the extent that the channel becomes transparent to the higher layers and allows the use of an unmodified ATM protocol over the channel. In the course of the investigation that this dissertation describes, three possible error control strategies were suggested for implementation in a generic wireless channel. These schemes used a combination of forward error correction coding, automatic repeat request schemes, and interleavers to combat the impact of bit errors on the performance of the link. The following error control strategies were considered:

    1. A stand-alone fixed-rate Reed-Solomon encoder/decoder with automatic repeat request.
    2. A concatenated Reed-Solomon/convolutional encoder/decoder with automatic repeat request and convolutional interleaving for the convolutional codec.
    3. A dynamic-rate encoder/decoder using either a concatenated Reed-Solomon/convolutional scheme or a Reed-Solomon-only scheme with variable-length Reed-Solomon words.
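
    The arithmetic behind such schemes is compact: a Reed-Solomon code RS(n, k) can correct up to t = (n - k) / 2 symbol errors per codeword, and interleaving to depth d spreads a burst of B consecutive errored symbols across d codewords, leaving roughly ceil(B / d) per codeword. The sketch below works through this; the parameters are illustrative assumptions, not the configurations evaluated in the thesis.

```python
# Worked sketch of the correction-capability arithmetic: RS(n, k) corrects
# up to t = (n - k) // 2 symbol errors per codeword; interleaving to depth d
# spreads a burst of B errored symbols over d codewords. Parameters are
# illustrative assumptions, not the thesis's evaluated configurations.
from math import ceil

def burst_correctable(n, k, depth, burst_symbols):
    t = (n - k) // 2                       # per-codeword correction capability
    return ceil(burst_symbols / depth) <= t

# RS(255, 223) corrects t = 16 symbols; a 100-symbol burst needs depth >= 7.
for d in (4, 7):
    print(f"depth {d}: 100-symbol burst correctable -> "
          f"{burst_correctable(255, 223, d, 100)}")
```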

    Information transport directly over optical communication systems (Transporte de informação directamente sobre sistemas de comunicação ópticos)

    Master's in Electronic Engineering. In the near future, information (audio, video, and data) may be transmitted between several users directly over optical networks. Several emerging optical network technologies, which allow the integration of IP (Internet Protocol) in the optical domain, have already been widely studied and analyzed. With this scenario in mind, the goal of this work was to study the mechanisms for transporting information over optical communication systems. Special attention was given to the multichannel optical technologies currently in use, Wavelength Division Multiplexing (WDM) and Multiprotocol Label Switching (MPLS). The Bit Error Rate (BER) is used as a measure of the negative effects of all physical impairments on the fibre, and is usually a comprehensive criterion for evaluating signal transmission quality. Accordingly, the physical layer was characterized in terms of BER and/or packet error rate (PER). This study comprised several stages: the characterization of the transmission medium, the fibre, in terms of BER and Q-factor; the analysis of the impact of binary errors on the data link, network, and transport layers, expressed as the probability of errors in bit sequences; and the study and analysis of the error detection and correction schemes used in the several protocol layers. Finally, the network behaviour was analysed and characterized as a function of the physical characteristics and constraints of the transmission channel.
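
    The standard relation between bit and packet error rates used in such characterizations is easy to state: assuming independent bit errors and no correction, a packet of L bits survives only if every bit does, so PER = 1 - (1 - BER)^L. The snippet below works through this formula; the packet size and BER values are illustrative assumptions.

```python
# Worked example of the BER-to-PER relation: with independent bit errors and
# no correction, a packet of L bits is intact only if all L bits survive,
# so PER = 1 - (1 - BER)**L. Packet size and BER values are illustrative.
def packet_error_rate(ber, packet_bytes):
    bits = 8 * packet_bytes
    return 1 - (1 - ber) ** bits

for ber in (1e-9, 1e-6, 1e-4):
    print(f"BER={ber:.0e}: PER={packet_error_rate(ber, 1500):.4%} "
          f"for 1500-byte packets")
```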

    Medium access control, error control and routing in underwater acoustic networks: a discussion on protocol design and implementation

    The journey of underwater communication, which began in Leonardo's era, took four and a half centuries to find practical applications, for military purposes during World War II. Over the last three decades, however, underwater acoustic communications have witnessed massive development due to advances in the design of underwater communication peripherals and their supporting protocols. This has opened the door to a wide range of applications in the underwater environment, such as oceanography, pollution monitoring, offshore exploration, disaster prevention, navigation assistance, monitoring, coastal patrol, and surveillance. Different applications may have different characteristics and hence may require different network architectures. For instance, routing protocols designed for unpartitioned multi-hop networks are not suitable for Delay-Tolerant Networks, and single-hop networks do not need routing protocols at all. Therefore, before developing a protocol, one must study the network architecture properly and design the protocol accordingly. Several other factors should also be considered alongside the network architecture when designing an efficient protocol for underwater networks, such as long propagation delay, limited bandwidth, limited battery power, the high bit error rate of the channel, and other adverse channel properties such as multipath, fading, and refractive behaviour. Moreover, the environment also affects the performance of protocols designed for underwater networks; even temperature changes within a single day have an impact. A good protocol designed for any such network should consider some or all of these characteristics to achieve better performance.

    In this thesis, we first discuss the impact of the environment on the performance of MAC and routing protocols. From our investigation, we find that even temperature changes within a day may affect the sound speed profile; hence the channel changes and the protocol performance varies. We then discuss several protocols specifically designed for underwater acoustic networks, serving different purposes and different network architectures. Underwater Selective Repeat (USR) is an error control protocol designed to ensure reliable data transmission at the MAC layer. One may suspect that employing an error control technique over a channel that already suffers from long propagation delays is a burden; however, USR exploits the long propagation delay by transmitting multiple packets in a single RTT using an interlacing technique. After USR, a routing protocol for surveillance networks is discussed, in which sensors are laid down at the bottom of the sea and sinks are placed outside the area. If a sensor detects an asset within its detection range, it announces the presence of intruders by transmitting packets to the sinks. The discovered asset may be an enemy ship or submarine that creates noise to jam the network; therefore, in surveillance networks, the protocols must have jamming-resistance capabilities. Moreover, since the network supports multiple sinks with a similar anycast address, we propose a jamming-resistant multi-path Multi-Sink Routing Protocol (MSRP) using a source routing technique.

    However, source routing suffers from large overhead with respect to other routing techniques (every packet includes the whole path information), and also from the unidirectional link problem. Therefore, another routing protocol based on a distance vector technique, called Multi-path Routing with Limited Cross-Path Interference (L-CROP), is proposed; it employs a neighbour-aware multi-path discovery algorithm to support low-interference multiple paths between each source-destination pair. Following that, another routing protocol is discussed for next-generation coastal patrol and surveillance networks, called Underwater Delay-Tolerant Network (UDTN) routing, in which AUVs carry out the patrolling of a given area and report to a shore-based control centre. Since the area to be patrolled is large, AUVs experience intermittent connectivity. In our proposed protocol, two nodes that detect they are in contact with each other calculate and divide their contact duration equally, so that every node gets a fair share of the contact duration to exchange data. Moreover, a probabilistic spray technique is employed to restrict the number of packet transmissions, and a modified version of USR is employed for error correction. In the appendix, we discuss DESERT Underwater (short for DEsign, Simulate, Emulate and Realize Test-beds for Underwater network protocols), a framework designed by our research group to realize underwater communication through simulation, which is used for most of the simulations in this thesis. It is an underwater extension of the NS-Miracle simulator that supports the design and implementation of underwater network protocols; it allows researchers to reuse the code written for the simulator on actual hardware devices and to test it in real underwater scenarios.
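
    The interlacing idea behind USR can be illustrated with simple numbers: with one-way acoustic propagation delay D and per-packet transmission time T, a sender that keeps transmitting while waiting for acknowledgements fits about 2*D / T packets into each round trip instead of one. The sketch below is a back-of-envelope illustration, not USR's actual algorithm.

```python
# Back-of-envelope sketch of the interlacing idea (an illustration, not
# USR's actual algorithm): with one-way acoustic propagation delay D and
# per-packet transmission time T, a sender that keeps transmitting while
# awaiting acknowledgements fits about 2*D / T packets into one round trip.
def packets_per_rtt(distance_m, rate_bps, packet_bits, sound_speed_mps=1500.0):
    one_way = distance_m / sound_speed_mps   # acoustic propagation delay (s)
    tx_time = packet_bits / rate_bps         # packet serialization time (s)
    return max(1, int(2 * one_way // tx_time))

# A 2 km link at 5 kbit/s with 1000-bit packets: ~13 packets per RTT,
# versus a single packet for naive stop-and-wait.
print(packets_per_rtt(2000, 5000, 1000))
```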