A lighter UDP
With the advent of IP telephony and other real-time multimedia applications running atop new wireless networks, new problems arise for the traditional Internet stack of protocols. Network and transport protocol support for these new applications has not yet been fully developed and optimized. Traditional UDP/IP support, especially in IPv6, provides a service that can throw away whole frames of user data (i.e., whole voice packets) when single bit errors are detected. Wireless networks make this problem even worse, since single bit errors can occur frequently. This work presents a new protocol, UDP Lite, that provides better support for real-time multimedia mobile applications. UDP Lite offers a more flexible checksumming policy, so that the application decides what to do with user data (e.g., voice and video packets) that contains a small number of bit errors. UDP Lite allows the application to keep as much of the data as it likes while still protecting the sensitive header fields. Besides presenting the protocol, the thesis presents traffic studies, simulation results, and architectural requirements on other parts of the protocol stack needed to use wireless bandwidth efficiently for real-time communication.
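As a rough illustration of the checksum-coverage idea described above, the sketch below computes a standard Internet checksum over only the first `coverage` bytes of a datagram, so that a bit error beyond the covered range leaves the packet acceptable. It is not the UDP Lite wire format or socket API; the 12-byte "sensitive" prefix and the payload are made up for the example.

```python
# Minimal sketch of the UDP-Lite partial-checksum idea (illustrative only,
# not the on-the-wire header layout). The checksum covers only the first
# `coverage` bytes, so bit errors beyond that range do not invalidate the packet.

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement sum as used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return total

def partial_checksum(datagram: bytes, coverage: int) -> int:
    """Checksum over only the first `coverage` bytes (headers + sensitive data)."""
    return (~ones_complement_sum16(datagram[:coverage])) & 0xFFFF

# Sender protects only an assumed 12-byte sensitive prefix; a bit flipped later
# in the payload leaves the checksum valid, so the receiver can still hand the
# damaged but useful voice frame to the application.
packet = bytearray(b"\x01" * 12 + b"voice-frame-payload")
csum = partial_checksum(bytes(packet), coverage=12)
packet[20] ^= 0x04                                # bit error outside the covered range
assert partial_checksum(bytes(packet), coverage=12) == csum
```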
A Bandwidth Study of a DHT in a Heterogeneous Environment
We present an NS-2 implementation of a distributed hash table (DHT) modeled after Bamboo. NS-2 is used to evaluate the bandwidth costs involved in using a DHT in heterogeneous environments. Networks are modeled as mixed networks of desktop machines and 3G cellphones. We also document the modifications to NS-2 that were needed to simulate churn in large networks.
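The abstract does not describe Bamboo's routing in detail; the sketch below only illustrates the basic DHT primitive such a simulation exercises, namely mapping keys and nodes into one identifier space and finding the node responsible for a key. The SHA-1 ring and the eight node names are assumptions for the example, not the simulated overlay.

```python
# Illustrative sketch of the basic DHT primitive: map a key to the node whose
# identifier follows it on a circular ID space (consistent hashing). Bamboo's
# actual routing uses prefix-based hops; this only shows how keys and nodes
# share one identifier space.

import hashlib
from bisect import bisect_right

def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def responsible_node(key: str, ring: list) -> int:
    """Return the ID of the node clockwise-closest to the key's hash."""
    k = node_id(key)
    idx = bisect_right(ring, k) % len(ring)       # wrap around the ring
    return ring[idx]

nodes = sorted(node_id(f"node-{i}") for i in range(8))   # hypothetical 8-node overlay
print(hex(responsible_node("some-stored-value", nodes)))
```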
High-performance longest prefix matching supporting high-speed incremental updates and guaranteed compression
Longest prefix matching is frequently used for IP forwarding in the Internet. The data structures used must be not only efficient, but also robust against pathological entries caused by an adversary or misconfiguration. In this paper, we attack the longest prefix matching problem by presenting a new algorithm supporting high lookup performance, fast incremental updates, and a guaranteed compression ratio. High lookup performance is achieved by using only four memory accesses. The guaranteed compression ratio is achieved by combining direct indexing with an implicit tree structure and carefully choosing which construct to use when updating the forwarding table. Fast incremental updates are achieved by a new memory management technique featuring fast variable-size allocation and deallocation while maintaining zero fragmentation. An IPv4 forwarding table data structure can be implemented in software or hardware within 2.7 Mb of memory to represent 2^18 routing entries. Incremental updates require only 752 memory accesses in the worst case for the current guaranteed compression ratio. For a hardware implementation, we can use 300 MHz SRAM organized in four memory banks and four pipeline stages to achieve a guaranteed performance of 300 million lookups per second, corresponding to approximately 100 Gbit/s wire-speed forwarding, and 400,000 incremental updates per second. In measurements performed on a 3.0 GHz Pentium 4 machine using a routing table with more than 2^17 entries, we can forward over 27 million IPv4 packets per second, which is equivalent to wire speeds exceeding 10 Gbit/s. On the same machine and with the same routing table, we can perform over 230,000 incremental updates per second.
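For readers unfamiliar with the underlying lookup problem, here is a baseline longest prefix match over a plain binary trie. It is deliberately not the paper's direct-indexing/implicit-tree structure (which bounds lookups to four memory accesses); the routing entries and interface names are hypothetical.

```python
# Baseline sketch of IPv4 longest prefix matching with a binary trie. This is
# not the compressed structure from the paper; it only illustrates the lookup
# problem that structure solves in four memory accesses.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None                      # set if a prefix ends here

def insert(root, prefix: int, length: int, next_hop: str) -> None:
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr: int):
    node, best = root, root.next_hop
    for i in range(32):
        node = node.children[(addr >> (31 - i)) & 1]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop                  # remember the longest match so far
    return best

def ip(s: str) -> int:
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

root = TrieNode()
insert(root, ip("10.0.0.0"), 8, "if0")            # hypothetical routing entries
insert(root, ip("10.1.0.0"), 16, "if1")
assert lookup(root, ip("10.1.2.3")) == "if1"      # longer prefix wins
assert lookup(root, ip("10.9.9.9")) == "if0"
```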
Reducing the TCP acknowledgment frequency
Delayed acknowledgments were introduced to conserve network and host resources. Further reduction of the acknowledgment frequency can be motivated in the same way. However, reducing the dependency on frequent acknowledgments in TCP is difficult because acknowledgments support reliable delivery and loss recovery, clock out new segments, and serve as input when determining an appropriate sending rate. Our results show that in scenarios where there are no obvious advantages of reducing the acknowledgment frequency, performance can be maintained even though fewer acknowledgments are sent. Hence, there is a potential for reducing the acknowledgment frequency more than is done through delayed acknowledgments today. Advancements in TCP loss recovery are one of the key reasons that the dependence on frequent acknowledgments has decreased. We propose and evaluate an end-to-end solution, where four acknowledgments per send window are sent. The sender compensates for the reduced acknowledgment frequency using a form of Appropriate Byte Counting. The proposal also includes a modification of fast loss recovery to avoid frequent timeouts.
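A minimal sketch of the byte-counting idea mentioned above, assuming a 1460-byte MSS: the congestion window grows with the number of bytes newly acknowledged rather than with the number of ACK packets, so one ACK covering four segments advances the window as much as four per-segment ACKs would. The constants and the unbounded per-ACK increase are simplifications, not the paper's exact scheme.

```python
# Sketch of the byte-counting idea a sender can use to compensate for fewer
# ACKs: congestion-window growth is driven by the number of bytes newly
# acknowledged, not by the number of ACK packets received.

MSS = 1460                                        # assumed segment size in bytes

def on_ack(cwnd: float, ssthresh: float, bytes_acked: int) -> float:
    """Return the new congestion window (in bytes) after one ACK."""
    if cwnd < ssthresh:                           # slow start: grow by bytes acked
        return cwnd + bytes_acked
    # congestion avoidance: about one MSS per window's worth of acked bytes
    return cwnd + MSS * bytes_acked / cwnd

# One ACK covering four segments advances cwnd as much as four per-segment ACKs.
cwnd = 10 * MSS
print(on_ack(cwnd, ssthresh=8 * MSS, bytes_acked=4 * MSS))
```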
Revisiting wireless link layers and in-order delivery
Wireless link layers, which perform retransmissions to hide transmission errors from upper layers, enforce in-order delivery to avoid triggering TCP's congestion control mechanisms. However, new reordering-robust TCP flavors make it possible to revisit the design of these link layers. In this paper, we study the effects of the link layer configuration (in-order vs. out-of-order delivery) on network layer buffering and transport layer smoothness in a WWAN scenario through simulations. We use a standards-compliant TCP, TCP-Aix, and TCP-NCR. The results show that smoothness is improved and the buffer requirement is reduced when out-of-order delivery is allowed.
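To make the in-order versus out-of-order distinction concrete, the sketch below contrasts a link-layer receive buffer that holds frames behind a gap until the retransmission fills it with one that hands every received frame up immediately. The frame numbers are invented; the actual link layer and TCP variants studied in the paper are of course far more involved.

```python
# Sketch contrasting the two link-layer delivery modes: with in-order delivery,
# frames received behind a gap are buffered until the retransmission arrives;
# with out-of-order delivery, they are passed up to IP immediately.

def deliver(received_frames, in_order: bool):
    """Return (frames passed to IP now, frames still held in the link buffer)."""
    if not in_order:
        return sorted(received_frames), []        # hand everything up right away
    passed, held, expected = [], [], 0
    for seq in sorted(received_frames):
        if seq == expected:                       # contiguous: release it
            passed.append(seq)
            expected += 1
        else:                                     # gap ahead of it: hold it back
            held.append(seq)
    return passed, held

# Frame 2 was lost on the radio link and is being retransmitted.
print(deliver([0, 1, 3, 4, 5], in_order=True))    # ([0, 1], [3, 4, 5])
print(deliver([0, 1, 3, 4, 5], in_order=False))   # ([0, 1, 3, 4, 5], [])
```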
Congestion control in a high-speed radio environment
This paper explores interactions between congestion control mechanisms at the transport layer and scheduling algorithms at the physical layer in the High-Speed Downlink Packet Access extension to WCDMA. Two different approaches to congestion control, TCP SACK and TFRC, are studied. We find that TCP SACK and TFRC in most respects perform the same way. SIR scheduling gives a higher system throughput for both protocols than RR scheduling, but introduces delay variations that lead to spurious timeouts. The no-feedback timeout of TFRC was shown to exhibit a similar sensitivity to delay spikes as the retransmit timeout in TCP SACK.
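The two scheduler families compared above can be caricatured in a few lines: round robin serves users in a fixed order, while SIR scheduling always grants the transmission interval to the user with the best reported signal-to-interference ratio. The user names and dB values below are made up; the point is only that SIR scheduling trades fairness and delay jitter for throughput.

```python
# Sketch of the two downlink schedulers compared in the paper: round robin
# serves users in turn, while SIR scheduling always picks the user with the
# currently best signal-to-interference ratio (values below are invented).

from itertools import cycle

users = ["A", "B", "C"]
rr = cycle(users)

def rr_schedule() -> str:
    return next(rr)                               # one user per TTI, in fixed order

def sir_schedule(sir_db: dict) -> str:
    return max(sir_db, key=sir_db.get)            # best channel wins the TTI

reports = [{"A": 3.0, "B": 7.5, "C": 1.0},        # per-TTI SIR reports in dB (assumed)
           {"A": 6.0, "B": 2.0, "C": 1.5}]
print([rr_schedule() for _ in reports])           # ['A', 'B']
print([sir_schedule(r) for r in reports])         # ['B', 'A']; C waits, adding delay jitter
```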
Buffer management for TCP over HS-DSCH
In this paper we investigate the influence of buffer management for TCP on the performance of the High-Speed Downlink Shared Channel (HS-DSCH) introduced in WCDMA release 5. HS-DSCH is a shared channel, but user data is buffered individually prior to the wireless link. Three queue management principles, i.e., passive queuing, the Packet Discard Prevention Counter (PDPC) method, and the Random Early Detection (RED) algorithm, were evaluated for a number of buffer sizes and scenarios. A buffer large enough to prevent packets from being lost was also included for reference. With round robin (RR) scheduling of radio blocks, PDPC and the passive approach, which both manage to keep the buffer short, gave the best system goodput as well as the shortest average transfer times, together with the excessively large buffer. With signal-to-interference ratio (SIR) scheduling, the strategy of avoiding all packet losses resulted in a lower system goodput than for the short buffers. As illustrated in this article, peak transfer rates may not be achieved with very small buffers, but buffers of 10-15 IP packets seem to represent a good trade-off between transfer rates, delay, and system goodput. We would like to investigate how to use system parameters, such as the total amount of data currently offered to HS-DSCH, to regulate individual buffer sizes.
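Of the three queue management principles, RED has a compact textbook formulation; the sketch below shows its standard drop decision (an EWMA of the queue length, with the drop probability rising linearly between two thresholds). The thresholds and weight are illustrative defaults, not the parameters used in the evaluation, and PDPC is not shown.

```python
# Sketch of the standard RED drop decision used as one of the evaluated queue
# management schemes. The average queue length is an exponentially weighted
# moving average, and the drop probability grows linearly between min_th and max_th.

import random

def red_drop(avg_q: float, q_len: int, w: float = 0.002,
             min_th: int = 5, max_th: int = 15, max_p: float = 0.1):
    """Return (new average queue length, True if the arriving packet is dropped)."""
    avg_q = (1 - w) * avg_q + w * q_len           # exponentially weighted average
    if avg_q < min_th:
        return avg_q, False                       # queue is short: never drop
    if avg_q >= max_th:
        return avg_q, True                        # queue is long: always drop
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    return avg_q, random.random() < p             # drop with probability p

avg = 0.0
for q_len in [2, 8, 12, 14, 16]:                  # hypothetical instantaneous queue sizes
    avg, drop = red_drop(avg, q_len)
    print(f"avg={avg:.2f} drop={drop}")
```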