
    TCP Non-Renegable Selective Acknowledgments (NR-SACKs) and benefits for space and satellite communications

    TCP is designed to tolerate reneging. This design has been challenged because (i) reneging rarely occurs in practice, and (ii) even when reneging does occur, it alone generally does not help the operating system resume normal operation when the system is starving for memory. We investigate how freeing out-of-order PDUs from the send buffer, once the receiver reports them with Non-Renegable Selective Acknowledgments (NR-SACKs), can improve end-to-end performance when send buffer blocking occurs in TCP. Preliminary results for TCP NR-SACKs show that (i) TCP data transfers with NR-SACKs never perform worse than those without, and (ii) NR-SACKs can improve end-to-end throughput when send buffer blocking occurs. Under certain circumstances, particularly over long-delay links such as GEO satellite links, we observe throughput gains from TCP NR-SACKs of as much as 15%. The tradeoff for this potential gain is a change to the semantics of the TCP send buffer, which must now manage non-contiguous PDUs. We investigate potential application performance gains when the TCP receiver implements NR-SACKs, and present empirical results from a real satellite link at the Centre National d’Études Spatiales (CNES), the French space agency.
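    The gain comes from the sender being allowed to release NR-SACKed data before it is cumulatively ACKed, rather than holding it in case the receiver reneges. A minimal sketch of that send-buffer rule, with invented names (seg_t, nr_sack_release) and TCP sequence-number wraparound ignored:

```c
/* Sketch: sender-side buffer handling with NR-SACKs.  With classic
 * SACK, a SACKed segment must be retained until cumulatively ACKed
 * because the receiver may renege; an NR-SACK promises no reneging,
 * so the covered segment can be freed at once, leaving a hole in a
 * now non-contiguous send buffer. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct seg {
    uint32_t    start, end;   /* sequence range [start, end) */
    struct seg *next;
} seg_t;

/* Free every buffered segment fully covered by the NR-SACKed range
 * [left, right); partially covered segments are kept for simplicity. */
static void nr_sack_release(seg_t **head, uint32_t left, uint32_t right)
{
    seg_t **pp = head;
    while (*pp) {
        seg_t *s = *pp;
        if (s->start >= left && s->end <= right) {
            *pp = s->next;    /* unlink and free immediately,   */
            free(s);          /* not at cumulative-ACK time     */
        } else {
            pp = &s->next;
        }
    }
}

int main(void)
{
    /* three in-flight segments: [0,100) [100,200) [200,300) */
    seg_t *head = NULL;
    for (uint32_t s = 300; s > 0; s -= 100) {
        seg_t *n = malloc(sizeof *n);
        n->start = s - 100; n->end = s; n->next = head; head = n;
    }
    nr_sack_release(&head, 100, 300);  /* NR-SACK covers the last two */
    for (seg_t *s = head; s; s = s->next)
        printf("still buffered: [%u,%u)\n",
               (unsigned)s->start, (unsigned)s->end);
    return 0;
}
```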

    Understanding CHOKe: throughput and spatial characteristics

    A recently proposed active queue management scheme, CHOKe, is stateless, simple to implement, yet surprisingly effective in protecting TCP from UDP flows. We present an equilibrium model of TCP/CHOKe. We prove that, provided the number of TCP flows is large, the UDP bandwidth share peaks at (e+1)^-1 ≈ 0.269 when the UDP input rate is slightly larger than the link capacity, and drops to zero as the UDP input rate tends to infinity. We clarify the spatial characteristics of the leaky buffer under CHOKe that produce this throughput behavior. Specifically, we prove that, as the UDP input rate increases, even though the total number of UDP packets in the queue increases, their spatial distribution becomes more and more concentrated near the tail of the queue, dropping rapidly to zero toward the head. In stark contrast to a non-leaky FIFO buffer, where the UDP bandwidth share would approach 1 as its input rate increases without bound, under CHOKe the UDP flow simultaneously maintains a large number of packets in the queue and receives a vanishingly small bandwidth share; this is the mechanism through which CHOKe protects TCP flows.
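    The per-arrival mechanism that produces the leaky buffer is simple enough to sketch: compare each arriving packet with one uniformly chosen queued packet and drop both on a flow-ID match, so a flow is penalized in proportion to its own queue occupancy. The shapes below (pkt_t, fifo_t) are illustrative, and the RED test that CHOKe runs on top of this check is omitted:

```c
/* Sketch of the CHOKe arrival test (RED component omitted). */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { int flow_id; int len; } pkt_t;

typedef struct {
    pkt_t *slot;              /* circular buffer of queued packets */
    int    head, cnt, cap;
} fifo_t;

/* Returns true if the arriving packet was admitted. */
static bool choke_enqueue(fifo_t *q, pkt_t arriving)
{
    if (q->cnt > 0) {
        int i = (q->head + rand() % q->cnt) % q->cap;  /* random victim */
        if (q->slot[i].flow_id == arriving.flow_id) {
            q->slot[i].len = 0;  /* tombstone the victim: the buffer leaks */
            return false;        /* and drop the arrival too */
        }
    }
    if (q->cnt == q->cap) return false;  /* plain tail drop when full */
    q->slot[(q->head + q->cnt++) % q->cap] = arriving;
    return true;
}

int main(void)
{
    pkt_t slots[64];
    fifo_t q = { slots, 0, 0, 64 };
    int admitted = 0;
    for (int i = 0; i < 100; i++) {              /* 3:1 UDP blast (flow 7) */
        pkt_t p = { (i % 4 == 0) ? 1 : 7, 1500 };  /* vs TCP (flow 1)     */
        if (choke_enqueue(&q, p)) admitted++;
    }
    printf("admitted %d of 100 arrivals\n", admitted);
    return 0;
}
```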

    Satellite ATM Network Architectural Considerations and TCP/IP Performance

    In this paper, we provide a summary of the design options in satellite-ATM technology. A satellite ATM network consists of a space segment of satellites connected by inter-satellite crosslinks, and a ground segment of the various ATM networks. A satellite-ATM interface module connects the satellite network to the ATM networks and performs various call and control functions. A network control center performs various network management and resource allocation functions. Several issues, such as the ATM service model, media access protocols, and traffic management, must be considered when designing a satellite ATM network to effectively transport Internet traffic. We present the buffer requirements for TCP/IP traffic over ATM-UBR at satellite latencies. Our results are based on TCP with selective acknowledgments and a per-VC buffer management policy at the switches. A buffer size of about 0.5*RTT to 1*RTT is sufficient to provide over 98% throughput to infinite TCP traffic for long-latency networks and a large number of sources; this buffer requirement is independent of the number of sources. Fairness is high for a large number of sources because of the per-VC buffer management performed at the switches and the nature of TCP traffic. Comment: Proceedings of the 3rd Ka Band Utilization Conference, Italy, 1997, pp481-48
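    The 0.5*RTT to 1*RTT rule converts directly into bytes once a line rate is fixed. A back-of-the-envelope check, assuming a GEO round-trip time and an OC-3 line rate for illustration (the paper's own parameters may differ):

```c
/* Back-of-the-envelope buffer sizing for the 0.5*RTT..1*RTT rule.
 * The RTT and line rate here are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double rtt_s    = 0.55;      /* ~GEO round-trip time   */
    const double rate_bps = 155.52e6;  /* OC-3 / STM-1 line rate */

    double bdp_bytes = rtt_s * rate_bps / 8.0;  /* bandwidth-delay product */
    printf("0.5*RTT buffer: %.1f MB\n", 0.5 * bdp_bytes / 1e6);
    printf("1.0*RTT buffer: %.1f MB\n", 1.0 * bdp_bytes / 1e6);
    return 0;
}
```

    For these assumed numbers the switch needs roughly 5 to 11 MB of buffering, which is why the finding that the requirement does not grow with the number of sources matters for satellite switch design.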

    Fair Resource Allocation in Hybrid Network Gateways with Per-Flow Queueing

    In this paper, we present an efficient resource allocation scheme for scheduling and buffer management in a bottleneck hybrid Internet gateway. We use Fair Queueing in conjunction with Probabilistic Fair Drop, a new buffer management policy, to allocate bandwidth and buffer space in the gateway to ensure that all TCP flows threading the gateway achieve high end-to-end throughput and fair service. We propose the use of buffer dimensioning to alleviate the inherent bias of the TCP algorithm towards connections with large Round Trip Times, and validate our scheme through simulations.
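    The abstract does not spell out the Probabilistic Fair Drop algorithm, so the sketch below is only one plausible reading, not the authors' definition: when a packet must be discarded, the victim flow is chosen with probability proportional to its share of the shared buffer, biasing drops toward heavy occupants without ever exempting a flow entirely:

```c
/* Hypothetical "probabilistic fair drop" victim selection: a flow
 * is picked with probability proportional to its buffer occupancy.
 * This is an assumed reading of the policy, not the paper's code. */
#include <stdio.h>
#include <stdlib.h>

#define NFLOWS 8

static unsigned buffered[NFLOWS];  /* bytes each flow holds in the buffer */

/* Return the flow that loses a packet, or -1 if the buffer is empty. */
static int pfd_pick_victim(void)
{
    unsigned total = 0;
    for (int f = 0; f < NFLOWS; f++) total += buffered[f];
    if (total == 0) return -1;

    unsigned r = (unsigned)(rand() % total);
    for (int f = 0; f < NFLOWS; f++) {
        if (r < buffered[f]) return f;
        r -= buffered[f];
    }
    return NFLOWS - 1;  /* not reached */
}

int main(void)
{
    buffered[0] = 900; buffered[1] = 50; buffered[2] = 50;
    int hits[NFLOWS] = {0};
    for (int i = 0; i < 10000; i++) hits[pfd_pick_victim()]++;
    printf("victims: flow0=%d flow1=%d flow2=%d\n",
           hits[0], hits[1], hits[2]);  /* ~9000 / ~500 / ~500 */
    return 0;
}
```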

    Full TCP/IP for 8-Bit architectures

    We describe two small and portable TCP/IP implementations fulfilling the subset of RFC 1122 requirements needed for full host-to-host interoperability. Our TCP/IP implementations do not sacrifice any of TCP's mechanisms, such as urgent data or congestion control. They support IP fragment reassembly, and the number of simultaneous connections is limited only by the available RAM. Despite being small and simple, our implementations do not require their peers to have complex, full-size stacks, and can also communicate with peers running a similarly lightweight stack. The code size is on the order of 10 kilobytes, and RAM usage can be configured to be as low as a few hundred bytes.
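    One way a stack keeps its RAM footprint in the hundreds of bytes is to size all connection state statically at compile time and share a single packet buffer. The macros and structure below are invented for illustration and are not the paper's actual configuration interface:

```c
/* Hypothetical compile-time sizing for a tiny TCP/IP stack; the
 * names are illustrative, not the paper's API. */
#include <stdint.h>
#include <stdio.h>

#define STACK_MAX_CONNS   4     /* each control block is a few tens of bytes */
#define STACK_PKT_BUFSIZE 200   /* one shared packet buffer for rx and tx    */

struct tcp_conn {               /* minimal per-connection state */
    uint32_t rcv_nxt, snd_nxt;  /* sequence numbers             */
    uint16_t lport, rport;      /* local / remote ports         */
    uint8_t  raddr[4];          /* remote IPv4 address          */
    uint8_t  state;             /* CLOSED, ESTABLISHED, ...     */
};

static struct tcp_conn conns[STACK_MAX_CONNS];
static uint8_t pkt_buf[STACK_PKT_BUFSIZE];

int main(void)
{
    /* Static RAM stays well under a kilobyte, in line with the
     * paper's "few hundred bytes" configuration. */
    printf("static RAM: %zu bytes\n", sizeof conns + sizeof pkt_buf);
    return 0;
}
```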

    Controlling Network Latency in Mixed Hadoop Clusters: Do We Need Active Queue Management?

    With the advent of big data, data center applications are processing vast amounts of unstructured and semi-structured data, in parallel on large clusters, across hundreds to thousands of nodes. The highest performance for these batch big data workloads is achieved using expensive network equipment with large buffers, which accommodate bursts in network traffic and allocate bandwidth fairly even when the network is congested. Throughput-sensitive big data applications are, however, often executed in the same data center as latency-sensitive workloads. For both workloads to be supported well, the network must provide both maximum throughput and low latency. Progress has been made in this direction, as modern network switches support Active Queue Management (AQM) and Explicit Congestion Notification (ECN), both mechanisms to control the level of queue occupancy and reduce total network latency. This paper is the first study of the effect of Active Queue Management on both throughput and latency in the context of Hadoop and the MapReduce programming model. We give a quantitative comparison of four different approaches for controlling buffer occupancy and latency: RED and CoDel, each standalone and also combined with ECN and the DCTCP network protocol, and identify the AQM configurations that preserve, to within 5%, the Hadoop execution time gains obtained from larger buffers, while reducing network packet latency caused by bufferbloat by up to 85%. Finally, we provide recommendations to administrators of Hadoop clusters on how to improve latency without degrading the throughput of batch big data workloads. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement number 610456 (Euroserver). The research was also supported by the Ministry of Economy and Competitiveness of Spain under the contracts TIN2012-34557 and TIN2015-65316-P, Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), the HiPEAC-3 Network of Excellence (ICT-287759), and the Severo Ochoa Program (SEV-2011-00067) of the Spanish Government.
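    Of the schemes compared, RED is the simplest to sketch: an exponentially weighted moving average of the queue length gates a probabilistic drop, and the ECN variant marks instead of dropping so the sender backs off without losing the packet. The thresholds and weight below are illustrative, not the tuned values from the study:

```c
/* Minimal RED decision with an ECN marking variant.  Thresholds
 * and EWMA weight are illustrative, not the study's tuned values. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define MIN_TH 30.0    /* packets: below this, always enqueue      */
#define MAX_TH 90.0    /* packets: above this, always drop         */
#define MAX_P   0.1    /* drop/mark probability at MAX_TH          */
#define W       0.002  /* EWMA weight for the average queue length */

typedef enum { RED_ENQUEUE, RED_MARK, RED_DROP } red_verdict;

static double avg;     /* smoothed queue length */

static red_verdict red_on_arrival(int qlen, bool ecn_capable)
{
    avg = (1.0 - W) * avg + W * qlen;   /* update the average */

    if (avg < MIN_TH)  return RED_ENQUEUE;
    if (avg >= MAX_TH) return RED_DROP;

    double p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH);
    if ((double)rand() / RAND_MAX < p)
        return ecn_capable ? RED_MARK : RED_DROP;  /* ECN marks instead */
    return RED_ENQUEUE;
}

int main(void)
{
    int marks = 0, drops = 0;
    for (int i = 0; i < 1000; i++) {    /* steady queue of 60 packets */
        red_verdict v = red_on_arrival(60, true);
        if (v == RED_MARK) marks++;
        if (v == RED_DROP) drops++;
    }
    printf("marks=%d drops=%d\n", marks, drops);
    return 0;
}
```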