
    Investigations on making TCP robust against spurious retransmissions

    Master's thesis (Master of Science).

    Transport congestion events detection (TCED): towards decorrelating congestion detection from TCP

    TCP (Transmission Control Protocol) uses a loss-based algorithm to estimate whether the network is congested. The main difficulty for this algorithm is to distinguish spurious from real network congestion events. Other studies have proposed to enhance the reliability of this congestion estimation by modifying the internal TCP algorithm. In this paper, we propose an original congestion event detection algorithm implemented independently of the TCP source code. We propose a modular architecture for implementing a congestion event detection algorithm that copes with the increasing complexity of the TCP code, and we use it to understand why some spurious congestion events might not be detected in certain complex cases. We show that our proposal increases the reliability of the TCP NewReno congestion detection algorithm, which may help in the design of detection criteria independent of the TCP code. We find that solutions based only on RTT (Round-Trip Time) estimation are not accurate enough to cover all existing cases. Furthermore, we evaluate our algorithm with and without network reordering, where other, previously unidentified inaccuracies occur.
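
    A minimal sketch of the kind of RTT-only criterion the abstract argues is insufficient on its own: a detector that runs outside the TCP stack and classifies a retransmission purely from RTT estimates. The class, parameters, and threshold below are illustrative assumptions, not the paper's TCED implementation.

        class RttOnlyDetector:
            """Flags a retransmission as likely spurious when the ACK covering the
            retransmitted segment returns much faster than the smoothed RTT, which
            suggests the original segment (or its ACK) was merely delayed."""

            def __init__(self, alpha=0.125, spurious_factor=0.5):
                self.srtt = None                        # smoothed RTT estimate (seconds)
                self.alpha = alpha                      # EWMA gain, as in RFC 6298
                self.spurious_factor = spurious_factor  # assumed threshold, not from the paper

            def on_rtt_sample(self, rtt):
                if self.srtt is None:
                    self.srtt = rtt
                else:
                    self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt

            def classify_retransmission(self, ack_delay):
                """ack_delay: time between sending the retransmission and the ACK covering it."""
                if self.srtt is None:
                    return "unknown"
                if ack_delay < self.spurious_factor * self.srtt:
                    return "spurious"    # ACK arrived too fast to be a reply to the retransmission
                return "congestion"

        detector = RttOnlyDetector()
        detector.on_rtt_sample(0.2)
        print(detector.classify_retransmission(0.05))   # "spurious"

    As the abstract notes, such an RTT-only rule misses complex cases (e.g., reordering), which is precisely why the paper's modular architecture combines additional criteria.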

    TCP over CDMA2000 Networks: A Cross-Layer Measurement Study

    Modern cellular channels in 3G networks incorporate sophisticated power control and dynamic rate adaptation, which can have a significant impact on adaptive transport-layer protocols such as TCP. Although studies exist that have evaluated the performance of TCP over such networks, they are based solely on observations at the transport layer and hence have no visibility into the impact of lower-layer dynamics, which are a key characteristic of these networks. In this work, we present a detailed characterization of TCP behavior based on cross-layer measurement of transport-layer as well as RF and MAC-layer parameters. In particular, through a series of active TCP/UDP experiments and measurement of the relevant variables at all three layers, we characterize both the wireless scheduler and the radio link protocol in a commercial CDMA2000 network and assess their impact on TCP dynamics. Somewhat surprisingly, our findings indicate that the wireless scheduler is mostly insensitive to channel quality and sector load over short timescales and is mainly affected by the transport-layer data rate. Furthermore, with the help of a robust correlation measure, Normalized Mutual Information, we were able to quantify the impact of the wireless scheduler and the radio link protocol on various TCP parameters such as the round-trip time, throughput, and packet loss rate.
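
    A short sketch of the kind of analysis described above: computing Normalized Mutual Information between a lower-layer variable (e.g., scheduler rate) and a TCP parameter (e.g., RTT). The binning, normalization by sqrt(H(X)H(Y)), and the synthetic data are assumptions for illustration, not the authors' measurement code.

        import numpy as np

        def normalized_mutual_information(x, y, bins=16):
            """NMI(X;Y) = I(X;Y) / sqrt(H(X) * H(Y)) over histogram-discretized series."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1)
            py = pxy.sum(axis=0)

            def entropy(p):
                p = p[p > 0]                      # drop zero bins so 0*log(0) terms vanish
                return -(p * np.log(p)).sum()

            hx, hy = entropy(px), entropy(py)
            nz = pxy > 0
            mi = (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()
            return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

        # Synthetic example: a per-interval scheduler rate correlated with measured RTTs.
        rng = np.random.default_rng(0)
        sched_rate = rng.normal(500, 100, size=1000)                      # kbps, synthetic
        rtt = 0.2 + 0.0005 * sched_rate + rng.normal(0, 0.02, size=1000)  # seconds, synthetic
        print(round(normalized_mutual_information(sched_rate, rtt), 3))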

    On detection algorithms for spurious retransmissions in TCP

    In TCP, a spurious packet retransmission can be caused by either a spurious timeout (STO) or a spurious fast retransmit (SFR). The "lost" packets are unnecessarily retransmitted, and the congestion control process they evoke causes network underutilization. In this paper, we focus on spurious retransmission detection. We first present a survey of important spurious retransmission detection algorithms. Based on the insights obtained, we propose a novel yet simple detection algorithm called split-and-retransmit (SnR). SnR requires only a minor modification to the TCP sender while leaving the receiver intact. The key idea is to split the retransmitted packet into two smaller ones before retransmitting them. As the packet sizes differ, the ACKs they trigger will carry different ACK numbers. This allows the sender to easily distinguish between the original transmission and the retransmission of a packet without relying on, e.g., TCP options. We then compare SnR with STODER, F-RTO, and NewReno under both loss-free and lossy network environments. We show that SnR is resilient to packet loss and yields good performance under various simulation settings. ©2010 IEEE. The 2010 IEEE Wireless Communications and Networking Conference (WCNC), Sydney, Australia, 18-21 April 2010. In Proceedings of WCNC, 2010, p. 1-
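
    A hedged sketch of the split-and-retransmit idea as stated in the abstract: the retransmission is sent as two smaller segments, so a cumulative ACK ending exactly at the split point can only have been triggered by the retransmission. The Segment type, the half-and-half split, and the simplified classification below are assumptions; the paper's full detection logic is more involved.

        from dataclasses import dataclass

        @dataclass
        class Segment:
            seq: int      # sequence number of the first byte carried
            length: int   # payload length in bytes

        def split_and_retransmit(original: Segment):
            """Return the two segments SnR would send instead of one retransmission."""
            first_len = max(1, original.length // 2)          # split roughly in half
            first = Segment(original.seq, first_len)
            second = Segment(original.seq + first_len, original.length - first_len)
            return first, second

        def classify_ack(ack_no: int, original: Segment) -> str:
            """Attribute an incoming cumulative ACK to the original or the retransmission."""
            split_point = original.seq + max(1, original.length // 2)
            if ack_no == split_point:
                return "retransmission"      # only the split first half can end here
            if ack_no >= original.seq + original.length:
                return "original-or-both"    # ambiguous in this sketch; SnR uses extra state
            return "neither"

        orig = Segment(seq=1000, length=1460)
        print(split_and_retransmit(orig))    # halves starting at seq 1000 and 1730
        print(classify_ack(1730, orig))      # "retransmission"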

    Enhancing wireless TCP: a serialized-timer approach

    In wireless networks, TCP performs unsatisfactorily because packet reordering and random losses may be falsely interpreted as congestive losses. This causes TCP to trigger fast retransmission and fast recovery spuriously, leading to under-utilization of available network resources. In this paper, we propose a novel TCP variant, known as TCP for noncongestive loss (TCP-NCL), to adapt TCP to wireless networks by using more reliable signals of packet loss and network overload for activating packet retransmission and congestion response, separately. TCP-NCL can thus serve as a unified solution for effective congestion control, sequencing control, and loss recovery. Unlike existing unified solutions, the modifications involved in the proposed variant are limited to sender-side TCP only, thereby facilitating possible future wide deployment. The two signals employed are the expirations of two serialized timers. A smart TCP sender model has been developed for optimizing the timer expiration periods. Our simulation studies reveal that TCP-NCL is robust against packet reordering as well as random packet loss while maintaining responsiveness in situations with purely congestive loss. ©2010 IEEE. IEEE INFOCOM Proceedings, 2010, p. 1-5.
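
    A simplified sketch of the serialized-timer idea as described in the abstract: a first timer only triggers retransmission (loss recovery), and a second timer, armed when the first expires, is what finally triggers the congestion response. Timer values, callbacks, and class names are assumptions, not the paper's sender model.

        import threading

        class SerializedTimers:
            def __init__(self, retransmit_cb, congestion_cb, rd_timeout=0.5, cd_timeout=1.0):
                self.retransmit_cb = retransmit_cb    # resend the packet, no cwnd reduction
                self.congestion_cb = congestion_cb    # reduce cwnd / enter congestion response
                self.rd_timeout = rd_timeout          # retransmission-decision period (assumed)
                self.cd_timeout = cd_timeout          # congestion-decision period (assumed)
                self._timer = None

            def packet_sent(self):
                self._start(self.rd_timeout, self._on_rd_expiry)

            def ack_received(self):
                if self._timer:
                    self._timer.cancel()              # loss/overload suspicion withdrawn

            def _on_rd_expiry(self):
                self.retransmit_cb()                  # treat as likely non-congestive loss: just resend
                self._start(self.cd_timeout, self.congestion_cb)  # only now arm the congestion timer

            def _start(self, timeout, cb):
                if self._timer:
                    self._timer.cancel()
                self._timer = threading.Timer(timeout, cb)
                self._timer.daemon = True
                self._timer.start()

        # Usage: SerializedTimers(lambda: print("retransmit"), lambda: print("cut cwnd")).packet_sent()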

    Making TCP More Robust to Long Connectivity Disruptions (TCP-LCD)

    Disruptions in end-to-end path connectivity, which last longer than one retransmission timeout, cause suboptimal TCP performance. The reason for this performance degradation is that TCP interprets segment loss induced by long connectivity disruptions as a sign of congestion, resulting in repeated retransmission timer backoffs. This, in turn, leads to a delayed detection of the re-establishment of the connection, since TCP waits for the next retransmission timeout before it attempts a retransmission. This document proposes an algorithm to make TCP more robust to long connectivity disruptions (TCP-LCD). It describes how standard ICMP messages can be exploited during timeout-based loss recovery to disambiguate true congestion loss from non-congestion loss caused by connectivity disruptions. Moreover, a reversion strategy for the retransmission timer is specified that enables more prompt detection of whether the connectivity to a previously disconnected peer node has been restored. TCP-LCD is a TCP sender-only modification that effectively improves TCP performance in the case of connectivity disruptions. This document defines an Experimental Protocol for the Internet community and is a product of the Internet Engineering Task Force (IETF).
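
    A rough sketch of the reaction described above: during timeout-based loss recovery, an ICMP unreachable message that matches a retransmitted segment is taken as evidence of a connectivity disruption rather than congestion, so one RTO backoff step is undone and the segment is retransmitted promptly. RFC 6069 specifies the exact conditions and state; the class and values below are illustrative assumptions.

        class TcpLcdSender:
            def __init__(self, base_rto=1.0, max_rto=60.0):
                self.base_rto = base_rto
                self.max_rto = max_rto
                self.rto = base_rto
                self.in_timeout_recovery = False

            def on_retransmission_timeout(self):
                self.in_timeout_recovery = True
                self.rto = min(self.rto * 2, self.max_rto)   # standard exponential backoff
                return "retransmit oldest unacked segment"

            def on_icmp_unreachable(self, matches_retransmitted_segment: bool):
                # Only react to ICMP errors corresponding to a segment retransmitted
                # during timeout-based recovery; everything else is ignored.
                if self.in_timeout_recovery and matches_retransmitted_segment:
                    self.rto = max(self.rto / 2, self.base_rto)  # undo one backoff step
                    return "retransmit immediately"              # probe for restored connectivity
                return "ignore"

            def on_ack_of_retransmission(self):
                self.in_timeout_recovery = False                 # recovery over; normal operation resumes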

    Spurious TCP Timeouts in 802.11 Networks

    In this paper, we investigate spurious TCP timeouts in 802.11 wireless networks. Although timeouts can be a problem for uploads from an 802.11 network, these timeouts are not spurious but are caused by a bottleneck at the access point. Once this bottleneck is removed, we find that spurious timeouts are rare, even in the face of large changes in the number of active stations or the PHY rate.

    MMPTCP: a multipath transport protocol for data centers

    Modern data centres provide large aggregate network capacity and multiple paths among servers. Traffic is very diverse: most of the data is produced by long, bandwidth-hungry flows, but the large majority of flows, which commonly come with strict deadlines on their completion time, are short. It has been shown that TCP is not efficient for either of these types of traffic in modern data centres. More recent protocols such as MultiPath TCP (MPTCP) are very efficient for long flows but are ill-suited for short flows. In this paper, we present MMPTCP, a novel transport protocol which, compared to TCP and MPTCP, reduces short flows' completion times while providing excellent goodput to long flows. MMPTCP runs in two phases: initially, it randomly scatters packets in the network under a single congestion window, exploiting all available paths, which is beneficial to latency-sensitive flows. After a specific amount of data is sent, MMPTCP switches to regular MPTCP mode. MMPTCP is incrementally deployable in existing data centres, as it does not require any modifications outside the transport layer, and it behaves well when competing with legacy TCP and MPTCP flows. Our extensive experimental evaluation shows that all design objectives for MMPTCP are met.
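
    A schematic sketch of the two-phase behaviour described in the abstract: packets are first scattered across paths under a single congestion window, and once a byte threshold is crossed the sender switches to regular MPTCP-style per-subflow operation. The threshold value and path selection via per-packet random source ports (for ECMP hashing) are illustrative assumptions.

        import random

        PHASE_SWITCH_BYTES = 100 * 1024          # assumed threshold separating short from long flows

        class MmptcpLikeSender:
            def __init__(self, subflow_ports):
                self.bytes_sent = 0
                self.packets_sent = 0
                self.subflow_ports = subflow_ports   # ports pinned to distinct ECMP paths
                self.phase = "packet-scatter"

            def next_source_port(self, payload_len):
                if self.phase == "packet-scatter":
                    # Phase 1: random path per packet, good for short, latency-sensitive flows.
                    port = random.randint(49152, 65535)
                else:
                    # Phase 2: MPTCP-like operation, each subflow stays on its own path.
                    port = self.subflow_ports[self.packets_sent % len(self.subflow_ports)]
                self.bytes_sent += payload_len
                self.packets_sent += 1
                if self.phase == "packet-scatter" and self.bytes_sent >= PHASE_SWITCH_BYTES:
                    self.phase = "mptcp"             # long flow: switch to regular MPTCP mode
                return port

        sender = MmptcpLikeSender(subflow_ports=[50001, 50002, 50003, 50004])
        ports = [sender.next_source_port(1460) for _ in range(100)]
        print(sender.phase, sender.bytes_sent)       # "mptcp" once ~100 KB have been sent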