Satellite ATM Network Architectural Considerations and TCP/IP Performance
In this paper, we have provided a summary of the design options in
Satellite-ATM technology. A satellite ATM network consists of a space segment
of satellites connected by inter-satellite crosslinks, and a ground segment of
the various ATM networks. A satellite-ATM interface module connects the
satellite network to the ATM networks and performs various call and control
functions. A network control center performs various network management and
resource allocation functions. Several issues such as the ATM service model,
media access protocols, and traffic management issues must be considered when
designing a satellite ATM network to effectively transport Internet traffic. We
have presented the buffer requirements for TCP/IP traffic over ATM-UBR for
satellite latencies. Our results are based on TCP with selective
acknowledgments and a per-VC buffer management policy at the switches. A buffer
size of about 0.5 * RTT to 1 * RTT is sufficient to provide over 98% throughput
to infinite TCP traffic for long latency networks and a large number of
sources. This buffer requirement is independent of the number of sources. The
fairness is high for a large number of sources because of the per-VC buffer
management performed at the switches and the nature of TCP traffic.
Comment: Proceedings of the 3rd Ka Band Utilization Conference, Italy, 1997,
pp. 481-48
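The sizing rule above can be made concrete. Below is a minimal sketch assuming an OC-3 (155 Mbps) bottleneck and a GEO round-trip time of about 550 ms; both values are illustrative assumptions, not the paper's simulation parameters:

```python
# Hedged sketch: the paper's rule that a per-switch buffer of 0.5*RTT to
# 1*RTT of data at the link rate suffices for >98% TCP throughput.

def buffer_bytes(link_rate_bps: float, rtt_s: float, fraction: float = 1.0) -> int:
    """Buffer needed to hold `fraction` of one round-trip's worth of data."""
    return int(link_rate_bps * rtt_s * fraction / 8)

# GEO hop: one-way propagation ~275 ms, so RTT ~550 ms (assumption).
rtt = 0.550
rate = 155e6  # OC-3 trunk rate, a common choice in such studies (assumption)

low = buffer_bytes(rate, rtt, 0.5)
high = buffer_bytes(rate, rtt, 1.0)
print(f"0.5*RTT buffer: {low / 1e6:.1f} MB, 1*RTT buffer: {high / 1e6:.1f} MB")
```

Note that the rule is stated in units of time (fractions of RTT), which is what makes the requirement independent of the number of sources.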
Congestion control for transmission control protocol (TCP) over asynchronous transfer mode (ATM) networks
Performance of Transmission Control Protocol (TCP) connections in high-speed Asynchronous Transfer Mode (ATM) networks is of great importance due to the widespread use of the TCP/IP protocol for data transfers and the increasing deployment of ATM networks. When TCP runs on top of an ATM network, the TCP window-based and ATM rate-based congestion control mechanisms interact with each other, and TCP performance may be degraded by the mismatch between the two mechanisms. We study TCP performance over ATM networks with the Unspecified Bit Rate (UBR) and Available Bit Rate (ABR) services under various congestion control mechanisms using simulation techniques, and propose a novel congestion control algorithm, "Fair Intelligent Congestion Control", which significantly enhances congestion control efficiency and improves TCP performance over ATM networks.
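The abstract does not detail the Fair Intelligent Congestion Control algorithm itself. As a generic illustration of the fairness goal such schemes pursue, here is a max-min fair allocation computed by progressive filling; the function name and demand values are illustrative, not taken from the paper:

```python
# Hedged sketch: max-min fair bandwidth allocation by progressive filling.
# Sources with small demands are fully satisfied; the leftover capacity is
# split equally among the rest.

def max_min_fair(demands, capacity):
    """Return per-source allocations that are max-min fair."""
    alloc = {}
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)       # equal split of what is left
        i = remaining[0]                   # smallest unsatisfied demand
        if demands[i] <= share:
            alloc[i] = demands[i]          # fully satisfy the small demand
            cap -= demands[i]
            remaining.pop(0)
        else:
            for i in remaining:            # everyone left gets the equal share
                alloc[i] = share
            break
    return [alloc[i] for i in range(len(demands))]

print(max_min_fair([10, 40, 60], 100))  # -> [10, 40, 50.0]
```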
TCP performance in ATM networks: ABR parameter tuning and ABR/UBR comparisons
This paper explores two issues in TCP performance over ATM networks: ABR parameter tuning and performance comparison of binary mode ABR with enhanced UBR services. Of the fifteen parameters defined for ABR, two parameters dominate binary mode ABR performance: Rate Increase Factor (RIF) and Rate Decrease Factor (RDF). Using simulations, we study the effects of these two parameters on TCP over ABR performance. We compare TCP performance with different ABR parameter settings in terms of throughputs and fairness. The effects of different buffer sizes and LAN/WAN distances are also examined. We then compare TCP performance with the best ABR parameter setting with corresponding UBR service enhanced with Early Packet Discard and also with a fair buffer allocation scheme. The results show that TCP performance over binary mode ABR is very sensitive to parameter value settings, and that a poor choice of parameters can result in ABR performance worse than that of the much less expensive UBR-EPD scheme.
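In binary-mode ABR, as specified by the ATM Forum, the source's Allowed Cell Rate (ACR) increases additively by RIF x PCR when returned Resource Management cells carry no congestion indication, and decreases multiplicatively by RDF x ACR when they do. The sketch below illustrates that update rule only; it is not the paper's simulator, and the parameter values are illustrative:

```python
# Hedged sketch of the binary-mode ABR source rate update that RIF and RDF
# govern: additive increase toward PCR, multiplicative decrease on congestion,
# with the result clamped to the [MCR, PCR] range.

def update_acr(acr, ci, *, pcr, mcr, rif, rdf):
    """One ACR update; ci is the Congestion Indication bit of an RM cell."""
    if ci:
        acr -= rdf * acr               # multiplicative decrease
    else:
        acr += rif * pcr               # additive increase
    return min(max(acr, mcr), pcr)     # clamp to [MCR, PCR]

# Illustrative parameters: PCR 1000 cells/s, MCR 10, RIF = RDF = 1/16.
acr = 500.0
acr = update_acr(acr, False, pcr=1000, mcr=10, rif=1 / 16, rdf=1 / 16)
print(acr)  # 562.5 after one uncongested round
```

The sensitivity the paper reports follows from this rule: a large RIF overshoots the bottleneck between feedback rounds, while a large RDF collapses the rate after a single congestion indication.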
The OSU Scheme for Congestion Avoidance in ATM Networks: Lessons Learnt and Extensions
The OSU scheme is a rate-based congestion avoidance scheme for ATM networks
using explicit rate indication. This work was one of the first attempts to
define explicit rate switch mechanisms and the Resource Management (RM) cell
format in Asynchronous Transfer Mode (ATM) networks. The key features of the
scheme include explicit rate feedback, congestion avoidance, fair operation
while maintaining high utilization, use of input rate as a congestion metric,
and O(1) complexity. This paper presents an overview of the scheme, highlights
those features that have since become common in other switch algorithms, and
discusses three extensions of the scheme.
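The use of input rate as a congestion metric can be sketched roughly as follows: the switch measures its aggregate input rate over an interval, forms an overload factor against a target rate, and feeds back to each source its current rate scaled down by that factor. This is a simplified illustration of the idea, not the OSU scheme's full algorithm, and the names are illustrative:

```python
# Hedged sketch: explicit-rate feedback driven by measured input rate.
# If aggregate input exceeds the target, every source is asked (via the RM
# cell) to scale its rate by the same factor, so the computation per cell
# stays O(1).

def explicit_rate(source_rate: float, input_rate: float, target_rate: float) -> float:
    """Rate fed back to one source so aggregate load converges to target."""
    z = input_rate / target_rate   # overload factor (input rate as the metric)
    return source_rate / z         # uniform scaling, O(1) per RM cell

# If the link is 25% overloaded, every source is told to shed 20% of its rate:
print(explicit_rate(40.0, 125.0, 100.0))  # -> 32.0
```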
Design and analysis of flow control algorithms for data networks
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (leaves 110-112). By Paolo L. Narváez Guarnieri. M.S.
Resource management for multimedia traffic over ATM broadband satellite networks
PhD thesis. Abstract not available.
Congestion Control by Bandwidth-Delay Tradeoff in Very High-Speed Networks: The Case of Window-Based Control
The increasing bandwidth-delay product of high-speed wide-area networks is well known to make conventional dynamic traffic control schemes sluggish. Still, most existing schemes employ dynamic control, among which TCP and the ATM Forum's rate-based flow control are prominent examples. So far, little has been investigated as to how the existing schemes will scale as bandwidth further increases to gigabit speed and beyond. Our investigation in this paper is the first to show that dynamic control has a severe scalability problem with bandwidth increase, and to propose an entirely new approach to traffic control that overcomes this problem. The essence of our approach is to exercise control in the bandwidth domain rather than the time domain, in order to avoid time delay in control. This requires more bandwidth than the timed counterpart, but achieves much faster control. Furthermore, the bandwidth requirement is not excessively large, because bandwidth is traded for smaller control delay; accordingly, we call our approach Bandwidth-Latency Tradeoff (BLT). While the control in existing schemes is bound to delay, BLT is bound to bandwidth. As a fallout, BLT scales with bandwidth increase, rather than increasingly deteriorating as conventional schemes do. Surprisingly, our approach begins to pay off much earlier than expected, even at a point where the bandwidth-delay product is not so large. For instance, in a roughly AURORA-sized network, BLT far outperforms TCP on a shared 150 Mbps link, where the bandwidth-delay product is around 60 KB. In the other extreme, where the bandwidth-delay product is large, BLT outperforms TCP by as much as twenty times in terms of network power in a gigabit nationwide network. More importantly, BLT is designed to continue to scale with bandwidth increase, and the performance gap is expected to widen further.
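The quoted figures can be sanity-checked from the bandwidth-delay product itself. A minimal sketch; the RTT values are back-derived assumptions, not parameters taken from the paper:

```python
# Hedged sketch: bandwidth-delay product, the quantity driving the abstract's
# scalability argument. It is the amount of data "in flight" that any
# feedback-based controller is blind to for one round trip.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes."""
    return rate_bps * rtt_s / 8

# A 150 Mbps shared link with ~3.2 ms RTT gives roughly the quoted 60 KB:
print(f"{bdp_bytes(150e6, 0.0032) / 1e3:.0f} KB")
# At gigabit speed with a nationwide ~60 ms RTT the product balloons:
print(f"{bdp_bytes(1e9, 0.060) / 1e6:.1f} MB")
```

The two orders of magnitude between these cases illustrate why a scheme bound to delay deteriorates with bandwidth while one bound to bandwidth does not.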