Experimental performance of DCCP over live satellite and long range wireless links
We present experimental results for the performance over satellite and long-range wireless (WiMAX) links of the new TCP-Friendly Rate Control (TFRC) congestion control mechanism from the Datagram Congestion Control Protocol (DCCP), proposed for use with real-time traffic. We evaluate the performance of the standard DCCP/CCID3 algorithm and identify two problem areas: the measured DCCP/CCID3 rate is inferior to the rate achievable with standard TCP, and a significant rate oscillation occurs continuously, making the resulting rate variable even in the short term. We analyse the links and identify the likely causes, namely long and variable delay and link errors. As a second contribution, we propose a change to the DCCP/CCID3 algorithm in which the number of feedback messages is increased beyond the current standard of at least one per round-trip time. Although the additional control traffic may reduce overall efficiency, we demonstrate that the change yields higher data rates, closer to what TCP achieves on those networks, and that the overhead introduced remains acceptable.
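The rate oscillation described above can be traced to the TCP throughput equation that TFRC/CCID3 uses to set its sending rate (RFC 5348): the allowed rate is inversely tied to the round-trip time, so long satellite delays and noisy loss estimates translate directly into low, jittery rates. A minimal sketch of that equation (variable names are ours):

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None):
    """TCP throughput equation used by TFRC (RFC 5348).

    s: segment size in bytes, rtt: round-trip time in seconds,
    p: loss event rate, t_rto: retransmission timeout (RFC 5348
    recommends 4 * rtt as a simple choice).
    Returns the allowed sending rate in bytes/second.
    """
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * sqrt(2 * p / 3)
             + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# A long GEO-satellite RTT sharply reduces the allowed rate even at
# the same loss event rate:
rate_lan = tfrc_rate(s=1460, rtt=0.02, p=0.01)   # ~20 ms terrestrial RTT
rate_sat = tfrc_rate(s=1460, rtt=0.60, p=0.01)   # ~600 ms satellite RTT
```

More frequent feedback, as the paper proposes, refreshes `rtt` and `p` more often, which damps the oscillation of the computed rate.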
CloudJet4BigData: Streamlining Big Data via an Accelerated Socket Interface
Big data needs to feed users with fresh processing results, and cloud platforms can be used to speed up big data applications. This paper describes a new data communication protocol (CloudJet) for long-distance, large-volume big data access operations, designed to alleviate the large latencies encountered when sharing big data resources in the cloud. It encapsulates a dynamic multi-stream/multi-path engine at the socket level, which conforms to the Portable Operating System Interface (POSIX) and can thereby accelerate any POSIX-compatible application across IP-based networks. In real-world tests, CloudJet accelerated typical big data applications such as very large databases (VLDB), data mining, media streaming, and office applications by up to tenfold.
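The abstract does not expose CloudJet's API, but the multi-stream idea it describes can be illustrated with a toy sketch: stripe one send buffer round-robin across several parallel connections so that no single stream's congestion window caps long-distance throughput. All names and the chunking policy below are ours, not CloudJet's:

```python
# Hypothetical illustration of socket-level multi-streaming: split one
# buffer into chunks and assign them round-robin to n parallel streams.
# A real engine would write each sub-list to its own TCP connection and
# let the receiver reassemble chunks in order by offset.

def stripe(data: bytes, n_streams: int, chunk: int = 64 * 1024):
    """Split data into fixed-size chunks, assigned round-robin to streams."""
    assignments = [[] for _ in range(n_streams)]
    for i in range(0, len(data), chunk):
        assignments[(i // chunk) % n_streams].append(data[i:i + chunk])
    return assignments

parts = stripe(b"x" * 300_000, n_streams=4)
```

Exposing this behind the standard POSIX socket calls, as the paper describes, is what lets unmodified applications benefit.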
Towards sender-based TFRC
Pervasive communications are increasingly sent over mobile devices and personal digital assistants. This trend was observed during the last football World Cup, when cellular phone service providers measured a significant increase in multimedia traffic. To better carry multimedia traffic, the IETF standardized a new TCP Friendly Rate Control (TFRC) protocol. However, the current receiver-based TFRC design is not well suited to resource-limited end systems. We propose a scheme to shift resource allocation and computation to the sender. This sender-based approach led us to develop a new algorithm for loss notification and loss-rate computation. We demonstrate the gain obtained in terms of memory requirements and CPU processing compared to the current design. Moreover, this shift solves security issues raised by classical TFRC implementations. We have implemented this new sender-based TFRC, named TFRC_light, and conducted measurements under real-world conditions.
NexGen D-TCP: Next generation dynamic TCP congestion control algorithm
With the advancement of wireless access networks and mmWave New Radio (NR), new applications have emerged that require a high data rate. The random packet loss due to mobility and channel conditions in a wireless network is not negligible, and it significantly degrades the performance of the Transmission Control Protocol (TCP). TCP has been extensively deployed for congestion control in communication networks over the last two decades, and different variants have been proposed to improve its performance in various scenarios, specifically in lossy and high bandwidth-delay product (high-BDP) networks. Implementing a new TCP congestion control algorithm whose performance holds over a broad range of network conditions is still a challenge. In this article, we introduce and analyze a Dynamic TCP (D-TCP) congestion control algorithm over mmWave NR and LTE-A networks. The proposed D-TCP algorithm copes with mmWave channel fluctuations by estimating the available channel bandwidth. The estimated bandwidth is used to derive the congestion control factor N, and the congestion window is increased or decreased adaptively based on this factor. We evaluated the performance of D-TCP in terms of congestion window growth, goodput, and fairness, and compared it with legacy and existing TCP algorithms. We performed simulations of mmWave NR during LOS <-> NLOS transitions and showed that D-TCP curtails the impact of under-utilization during mobility. The simulation results and live-air experiments show that D-TCP achieves a 32.9% gain in goodput compared to TCP-Reno and a 118.9% gain in throughput compared to TCP-Cubic.
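The abstract gives the shape of the mechanism (a bandwidth-derived factor N steering window growth) without its exact formula. A hedged sketch of that idea, with our own simplified derivation of N from the estimated bandwidth-delay product:

```python
def dtcp_cwnd_update(cwnd, est_bw_mbps, rtt_s, mss=1460, loss=False):
    """Sketch of a D-TCP-style adaptive update. The congestion control
    factor N is derived here from the estimated BDP relative to the
    current window; the paper's actual derivation of N differs, and
    this only illustrates the adaptive increase/decrease idea.

    cwnd: congestion window in packets, est_bw_mbps: estimated channel
    bandwidth in Mbit/s, rtt_s: RTT in seconds, mss: segment size in bytes.
    """
    bdp_pkts = (est_bw_mbps * 1e6 / 8) * rtt_s / mss   # estimated BDP in packets
    n = max(1.0, bdp_pkts / max(cwnd, 1))              # aggressiveness factor N
    if loss:
        return max(1.0, cwnd / 2)                      # multiplicative decrease
    return cwnd + n                                    # grow by N packets per RTT
                                                       # (Reno would grow by 1)
```

When the estimated bandwidth jumps after an NLOS-to-LOS transition, N grows, so the window ramps back up faster than a fixed additive increase would.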
Towards a sender-based TCP friendly rate control (TFRC) protocol
Pervasive communications are increasingly sent over mobile devices and personal digital assistants. This trend is currently observed by mobile phone service providers, which have measured a significant increase in multimedia traffic. To better carry multimedia traffic, the IETF standardized a new TCP Friendly Rate Control (TFRC) protocol. However, the current receiver-based TFRC design is not well suited to resource-limited end systems. In this paper, we propose a scheme to shift resource allocation and computation to the sender. This sender-based approach led us to develop a new algorithm for loss notification and loss-rate computation. We detail the complete implementation of a user-level prototype and demonstrate the gain obtained in terms of memory requirements and CPU processing compared to the current design. We also evaluate the performance obtained in terms of throughput smoothness and fairness with TCP, and we note that this shift solves security issues raised by classical TFRC implementations.
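The computation being shifted to the sender is, in standard TFRC, the weighted average loss interval of RFC 5348, from which the loss event rate p is obtained. A minimal sketch of that calculation (the standard eight weights; in the sender-based design the intervals are reconstructed at the sender from loss notifications rather than computed at the receiver):

```python
# RFC 5348 weighted average loss interval. Receiver-based TFRC computes
# this at the receiver; a sender-based design performs it at the sender.
WEIGHTS = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def loss_event_rate(intervals):
    """intervals: most-recent-first list of loss interval lengths,
    in packets (up to eight are used). Returns p, the loss event
    rate fed into the TFRC throughput equation."""
    w = WEIGHTS[:len(intervals)]
    avg = sum(i * wi for i, wi in zip(intervals, w)) / sum(w)
    return 1.0 / avg

# Roughly one loss event per 100 packets -> p close to 0.01:
p = loss_event_rate([100, 120, 90, 110, 80, 100, 95, 105])
```

Keeping only this short interval history at the sender is what makes the memory footprint small on the resource-limited receiver.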
Deep learning TCP for mitigating NLoS impairments in 5G mmWave
5G and beyond-5G networks are bringing new features and capabilities to cellular and ubiquitous networks. The new millimeter-wave frequency band can provide high data rates for the new generations of mobile networks, but it suffers from NLoS conditions caused by obstacles, which produce packet drops that mislead TCP: the protocol cannot distinguish the root cause of a drop and takes it for granted that all losses are due to congestion. This paper presents a new TCP based on deep learning that outperforms other common TCP variants in terms of throughput, RTT, and congestion-window fluctuation. The primary contribution of deep learning is the ability to distinguish the various conditions in the network. The simulation results show that the proposed protocol outperforms conventional TCP variants such as Cubic, NewReno, Highspeed, and BBR. This research was funded in part by the Spanish MCIN/AEI/10.13039/501100011033 through project PID2019-106808RA-I00, and by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya under grant 2021 SGR 00330.
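The core idea, distinguishing congestion loss from NLoS loss, can be illustrated with a toy stand-in for the paper's model: congestion loss is usually preceded by queue build-up (rising RTTs), while blockage loss is not. A real system would learn this boundary with a neural network; the threshold rule below is only our illustration:

```python
# Toy stand-in for learned loss-cause classification. Feature: the RTT
# trend in the samples preceding a loss. Rising RTTs suggest a filling
# queue (congestion); a flat trend suggests a wireless cause such as
# NLoS blockage. The 1.0 ms/sample threshold is an arbitrary choice
# for illustration, not a value from the paper.

def classify_loss(rtt_samples_ms):
    """Classify the cause of a packet loss from preceding RTT samples."""
    n = len(rtt_samples_ms)
    slope = (rtt_samples_ms[-1] - rtt_samples_ms[0]) / max(n - 1, 1)
    return "congestion" if slope > 1.0 else "wireless"

verdict_rising = classify_loss([20, 25, 33, 44, 58])  # queue building up
verdict_flat = classify_loss([21, 20, 22, 20, 21])    # RTT flat before loss
```

Only on a "congestion" verdict does the sender need to back off its window; reacting to "wireless" losses with a full backoff is exactly the behavior the paper aims to avoid.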
Agile-SD: A Linux-based TCP Congestion Control Algorithm for Supporting High-speed and Short-distance Networks
High-speed, short-distance networks are now widely deployed and their importance is growing rapidly. Such networks are used in several settings, such as Local Area Networks (LANs) and Data Center Networks (DCNs), where they commonly connect computing and storage elements in order to provide rapid services. The overall performance of such networks is significantly influenced by the Congestion Control Algorithm (CCA), which suffers from bandwidth under-utilization, especially when the applied buffer regime is very small. In this paper, a novel loss-based CCA tailored for high-speed, Short-Distance (SD) networks, named Agile-SD, is proposed. The main contribution of the proposed CCA is its agility-factor mechanism. Intensive simulation experiments were carried out to evaluate the performance of Agile-SD against Compound and Cubic, the default CCAs of the most commonly used operating systems. The results show that the proposed CCA outperforms the compared CCAs in terms of average throughput, loss ratio, and fairness, especially when a small buffer is applied. Moreover, Agile-SD shows lower sensitivity to buffer-size changes and packet-error-rate variation, which increases its efficiency.
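The abstract names the agility-factor mechanism without giving its formula. One plausible reading, sketched below under our own assumptions (not the published Agile-SD equations), is that per-ACK growth is scaled by how far the window still sits below its pre-loss value, so recovery is fast after a small-buffer loss and gentle once recovered:

```python
def agile_increase(cwnd, cwnd_at_loss, beta=0.5):
    """Sketch of an agility-factor-style increase. Growth per ACK is
    scaled by the remaining distance to the pre-loss window, so the
    window climbs back quickly after a loss in a shallow buffer.
    This is our simplified reading, not the published Agile-SD formula.

    cwnd: current window (packets), cwnd_at_loss: window when the last
    loss occurred, beta: multiplicative-decrease factor.
    """
    if cwnd >= cwnd_at_loss:
        agility = 0.1                       # cruise slowly once recovered
    else:
        agility = (cwnd_at_loss - cwnd) / (cwnd_at_loss * (1 - beta))
    return cwnd + agility / cwnd            # additive growth per ACK
```

Just after a halving, `agility` is at its maximum and the window recovers aggressively; as it approaches the old loss point, growth tapers off, which is consistent with the low buffer-size sensitivity the abstract reports.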