
    TCP/IP traffic over ATM network with ABR flow and congestion control

    Most traffic over existing ATM networks is generated by applications running over the TCP/IP protocol stack. In the near future, the success of ATM technology will depend largely on how well it supports the huge legacy of existing TCP/IP applications. In this thesis, we study and compare the performance of TCP/IP traffic running over different rate-based ABR flow control algorithms, such as EFCI, ERICA, and FMMRA, through extensive simulations. Infinite source-end traffic behavior is chosen to represent an FTP application running over TCP/IP. Background VBR traffic with different ON-OFF frequencies is introduced to produce transient network states as well as congestion. The simulations yield many insights on issues such as ABR queue length in a congested ATM switch, source-end ACR (Allowed Cell Rate), link utilization at the congestion point, effective end-to-end TCP throughput, TCP congestion window size, and TCP round-trip time. Based on the simulation results, the zero-cell-loss switch buffer requirements of the three algorithms are compared, and the fairness of ABR bandwidth allocation among TCP connections is analyzed. The interaction between the TCP layer and the ATM layer flow and congestion control mechanisms is also analyzed. Our simulation results show that, in order to obtain good TCP throughput with an affordable switch buffer requirement, some kind of switch queue length monitoring and control mechanism is necessary in the ABR congestion control algorithm.
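
    The abstract describes the algorithms only at this level, but its closing point about queue-length monitoring can be illustrated with a minimal Python sketch of an ERICA-style explicit-rate computation that reserves part of the link to drain the switch queue. The target utilization, drain time, and traffic figures below are illustrative assumptions, not values from the thesis.

    def explicit_rate(link_cells_per_s, measured_input_rate, queue_cells,
                      vc_current_rate, n_active_vcs,
                      target_utilization=0.9, drain_time_s=0.05):
        """Return an explicit rate (cells/s) for one VC at a congested switch."""
        # Aim below full capacity and reserve enough of the link to drain the
        # standing queue within drain_time_s -- the queue-length monitoring
        # term the thesis argues is necessary.
        target = max(target_utilization * link_cells_per_s
                     - queue_cells / drain_time_s, 0.0)
        if target == 0.0:
            return 0.0
        overload = measured_input_rate / target     # > 1 means the link is congested
        fair_share = target / max(n_active_vcs, 1)
        # Give the VC at least the equal fair share; otherwise scale its current
        # rate down (or up) by the overload factor, as ERICA-style schemes do.
        return max(fair_share, vc_current_rate / overload) if overload > 0 else fair_share

    if __name__ == "__main__":
        # One of ten VCs on a ~155 Mb/s link (~365,000 cells/s) with a
        # 2,000-cell standing queue; all figures are illustrative.
        print(explicit_rate(365_000, measured_input_rate=400_000,
                            queue_cells=2_000, vc_current_rate=60_000,
                            n_active_vcs=10))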

    Explicit congestion control algorithms for available bit rate services in asynchronous transfer mode networks

    Congestion control of available bit rate (ABR) services in asynchronous transfer mode (ATM) networks has been a recent focus of the ATM Forum. The focus of this dissertation is to study the impact of queueing disciplines on ABR service congestion control and to develop an explicit rate control algorithm. Two queueing disciplines, namely First-In-First-Out (FIFO) and per-VC (virtual connection) queueing, are examined. Performance in terms of fairness, throughput, cell loss rate, buffer size, and network utilization is benchmarked via extensive simulations. The implementation complexity and trade-offs associated with each queueing discipline are addressed. Contrary to common belief, our investigation demonstrates that per-VC queueing, which is costlier and more complex, does not necessarily provide any significant improvement over simple FIFO queueing. A new ATM switch algorithm is proposed to complement the ABR congestion control standard. The algorithm is designed to work with the rate-based congestion control framework recently recommended by the ATM Forum for ABR services. The algorithm's primary merits are fast convergence, high throughput, high link utilization, and small buffer requirements. Mathematical analysis shows that the algorithm converges to the max-min fair allocation rates in finite time, and that the convergence time is proportional to the number of distinct fair allocations and the round-trip delays in the network. At steady state, the algorithm operates without causing any oscillations in rates. The algorithm does not require any parameter tuning and proves to be very robust in a large ATM network. The impact of ATM switching and ATM layer congestion control on the performance of TCP/IP traffic is studied and the results are presented. The study shows that ATM layer congestion control improves the performance of TCP/IP traffic over ATM, and that implementing the proposed switch algorithm drastically reduces the required switch buffer size. To validate these claims, many benchmark ATM networks are simulated, and the performance of the switch is evaluated in terms of fairness, link utilization, response time, and buffer size requirements. In terms of performance and complexity, the algorithm proposed here offers many advantages over other algorithms proposed in the literature.
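
    As a rough illustration of the max-min fair allocation the proposed algorithm is shown to converge to, here is a minimal single-link Python sketch using progressive filling over sorted demands; the capacity and demand values are illustrative assumptions, not taken from the dissertation.

    def max_min_fair(capacity, demands):
        """Single-link max-min fair shares for the given per-connection demands."""
        allocation = [0.0] * len(demands)
        remaining = sorted(range(len(demands)), key=lambda i: demands[i])
        cap = capacity
        while remaining:
            share = cap / len(remaining)        # equal split of what is left
            i = remaining[0]                    # smallest outstanding demand
            if demands[i] <= share:
                # Satisfied connections release their unused share to the rest.
                allocation[i] = demands[i]
                cap -= demands[i]
                remaining.pop(0)
            else:
                # Every still-unsatisfied connection gets the equal share.
                for j in remaining:
                    allocation[j] = share
                break
        return allocation

    if __name__ == "__main__":
        print(max_min_fair(100.0, [10.0, 40.0, 80.0]))   # -> [10.0, 40.0, 50.0]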

    Joint buffer management and scheduling for input queued switches

    Input queued (IQ) switches are highly scalable and have been the focus of many studies from academia and industry. Many scheduling algorithms have been proposed for IQ switches. However, they do not consider the buffer space requirement inside an IQ switch, which may render the scheduling algorithms inefficient in practical applications. In this dissertation, the Queue Length Proportional (QLP) algorithm is proposed for IQ switches. QLP considers both buffer management and the scheduling mechanism to obtain the optimal allocation region for both bandwidth and buffer space according to the real traffic load. In addition, this dissertation introduces the Queue Proportional Fairness (QPF) criterion, which employs the cell loss ratio as the fairness metric. The research in this dissertation shows that the utilization of network resources improves significantly with QPF. Furthermore, to support the diverse Quality of Service (QoS) requirements of heterogeneous and bursty traffic, the Weighted Minmax algorithm (WMinmax) is proposed to efficiently and dynamically allocate network resources. Lastly, to support traffic with multiple priorities and to handle the decoupling problem in practice, this dissertation introduces a multi-dimensional scheduling algorithm that aims to find the optimal scheduling region in a multi-dimensional Euclidean space.
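
    A minimal Python sketch of the queue-length-proportional idea behind QLP, dividing bandwidth and buffer space among queues in proportion to their current backlog; the resource totals and backlogs are illustrative assumptions, and the actual QLP algorithm in the dissertation is more elaborate.

    def qlp_allocate(total_bandwidth, total_buffer, queue_lengths):
        """Split bandwidth and buffer among queues in proportion to their backlog."""
        backlog = sum(queue_lengths)
        n = len(queue_lengths)
        if backlog == 0:
            # No backlog anywhere: fall back to an equal split.
            return [(total_bandwidth / n, total_buffer / n)] * n
        return [(total_bandwidth * q / backlog, total_buffer * q / backlog)
                for q in queue_lengths]

    if __name__ == "__main__":
        # Three virtual output queues with unequal backlogs (in cells).
        for bw, buf in qlp_allocate(10_000.0, 512.0, [30, 10, 60]):
            print(f"bandwidth={bw:.0f} cells/s, buffer={buf:.0f} cells")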

    Adaptive Multicast of Multi-Layered Video: Rate-Based and Credit-Based Approaches

    Network architectures that can efficiently transport high-quality, multicast video are rapidly becoming a basic requirement of emerging multimedia applications. The main problem complicating multicast video transport is variation in network bandwidth constraints. An attractive solution to this problem is to use an adaptive, multi-layered video encoding mechanism. In this paper, we consider two such mechanisms for the support of video multicast: one is a rate-based mechanism that relies on explicit rate congestion feedback from the network, and the other is a credit-based mechanism that relies on hop-by-hop congestion feedback. The responsiveness, bandwidth utilization, scalability, and fairness of the two mechanisms are evaluated through simulations. Results suggest that while the two mechanisms exhibit performance trade-offs, both are capable of providing a high-quality video service in the presence of varying bandwidth constraints.
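
    As a rough sketch of the rate-based approach, the following Python fragment shows a receiver (or branch point) forwarding as many cumulative video layers as the explicit-rate feedback permits; the layer rates and the helper layers_to_forward are illustrative assumptions, not the paper's mechanism in detail.

    def layers_to_forward(layer_rates_kbps, allowed_rate_kbps):
        """Return how many cumulative layers fit within the rate the network grants."""
        cumulative, count = 0.0, 0
        for rate in layer_rates_kbps:           # base layer first, then enhancements
            if cumulative + rate > allowed_rate_kbps:
                break
            cumulative += rate
            count += 1
        return count

    if __name__ == "__main__":
        layers = [128.0, 256.0, 512.0]          # base + two enhancement layers
        print(layers_to_forward(layers, allowed_rate_kbps=450.0))   # -> 2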

    Traffic Control in Asynchronous Transfer Mode Networks

    In the 90s, there is an increasing demand for new telecommunication services such as video conferencing, videophone, broadcast television, image transfer, and bulk file transfer. At the same time, transmission systems at bit rates of 2.5 Gb/s are now being installed, and the expected next generation of 10 Gb/s systems is emerging from the research laboratories. Coupled with this, the development and deployment of new technologies such as fiber optics and intelligent high-speed switches have made it possible to provide these services in future high-speed integrated services networks like Asynchronous Transfer Mode (ATM). However, because of their new characteristics, these new services pose great challenges not previously encountered in traditional circuit-switched or packet-switched networks. For example, features such as large propagation delay compared to transmission delay, diverse application demands, constraints on call processing capacity, and Quality-of-Service (QoS) support for different applications all present new challenges arising from the new technology and new applications. Thus, much research is needed not just to improve existing technologies, but to seek a fundamentally different approach toward network architectures and protocols. In particular, new bandwidth allocation and call admission control algorithms need to be studied to meet these new challenges. A VP bandwidth allocation problem is studied for services which require a guaranteed connection for a fixed duration of time, leading to extensive use of facilities such as reservation of transmission capacity in advance. In such a case, the network may offer discounts for users reserving capacity in advance, owing to the advantage of working with predetermined traffic loads. Similarly, charges may differ for customers wanting to book capacity for a specified time interval. Based on this scenario, various charge classes and booking policies are introduced. An effective bandwidth allocation scheme is proposed at the VP level with multiple nested charge classes, where these classes are allocated bandwidth optimally through booking policies. The scheme is also shown to be effective in maximizing network revenue. The best trade-off is also sought between the revenue gained through greater demand for discount bandwidth units and the revenue lost when full-charge booking requests must be turned away because of prior bookings of discount bandwidth units.
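
    A minimal Python sketch of a nested booking-limit policy of the kind described, in which discounted classes may only book up to nested limits so that capacity remains protected for full-charge requests; the limits, capacities, and the helper admit() are illustrative assumptions rather than the thesis' optimized policy.

    def admit(requested_units, cls, booked, capacity, nested_limits):
        """cls 0 is the full-charge class; higher class numbers are deeper discounts."""
        limit = capacity if cls == 0 else nested_limits[cls]
        # A class-c request is admitted only if the bookings of class c and all
        # cheaper classes stay within that class' nested limit, so capacity is
        # always protected for full-charge traffic.
        booked_at_or_below = sum(units for c, units in booked.items() if c >= cls)
        return booked_at_or_below + requested_units <= limit

    if __name__ == "__main__":
        capacity = 100                           # VP bandwidth units
        nested_limits = {1: 60, 2: 30}           # discount classes only
        booked = {0: 20, 1: 25, 2: 10}           # units already reserved per class
        print(admit(10, 2, booked, capacity, nested_limits))   # deep-discount request
        print(admit(10, 0, booked, capacity, nested_limits))   # full-charge request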

    Quality of service over ATM networks

    PhD thesis. Abstract not available.

    Design and analysis of a scalable terabit multicast packet switch : architecture and scheduling algorithms

    Internet growth and success not only open a primary route of information exchange for millions of people around the world, but also create unprecedented demand for core network capacity. Existing switches/routers, due to bottlenecks in either the switch architecture or the arbitration complexity, can reach capacities on the order of gigabits per second, but few of them are scalable to large capacities of terabits per second. In this dissertation, we propose three novel switch architectures with cooperating scheduling algorithms to design a terabit backbone switch/router that is able to deliver large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from the unique features of a modular switch architecture and a distributed resource allocation scheme. Switch I is a unique and modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric; hence, the switch architecture is scalable to any large size. We propose a distributed round robin (RR) scheduling algorithm which provides fairness and has very low arbitration complexity. Switch I achieves good performance under uniform traffic, but it does not perform well under non-uniform traffic. Switch II, a modified design, employs link sharing as well as a token ring to overcome the drawback of Switch I. We propose a round robin prioritized link reservation (RR+POLR) algorithm which results in improved performance, especially under non-uniform traffic. However, the RR+POLR algorithm is not flexible enough to adapt to the input traffic, and in Switch II the link reservation rate has a great impact on switch performance. Finally, Switch III is proposed as an enhanced design using link sharing and dual round robin rings. Packet forwarding is based on link reservation. We propose a queue occupancy based dynamic link reservation (QOBDLR) algorithm which adapts to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource allocation scheme in the sense that dynamic link reservation is carried out according to locally available information, and its arbitration complexity is very low. Compared to the output queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch, but also overcomes the speedup problem that seriously limits the scalability of the OQ switch. Hence, Switch III is a good choice for high-performance, scalable, large-capacity core switches.
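
    As a rough illustration of the queue-occupancy-based dynamic link reservation idea behind QOBDLR, the following Python sketch raises or lowers an input module's reservation on a shared output link according to its local queue occupancy; the thresholds, step size, and rate bounds are illustrative assumptions, not the design's actual parameters.

    def update_reservation(current_reservation, queue_occupancy,
                           high_threshold=0.75, low_threshold=0.25,
                           step=1, min_rate=1, max_rate=16):
        """Return the new reservation (cell slots per frame) for one input module."""
        if queue_occupancy > high_threshold:
            # Local queue is filling: reserve more slots on the shared link.
            return min(current_reservation + step, max_rate)
        if queue_occupancy < low_threshold:
            # Local queue is nearly empty: release slots for other modules.
            return max(current_reservation - step, min_rate)
        return current_reservation

    if __name__ == "__main__":
        reservation = 4
        for occupancy in (0.9, 0.8, 0.5, 0.1):
            reservation = update_reservation(reservation, occupancy)
            print(f"occupancy={occupancy:.1f} -> reservation={reservation}")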

    Dynamic bandwidth allocation in ATM networks

    Includes bibliographical references. This thesis investigates bandwidth allocation methodologies for transporting new, emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the inevitable congestion that results from the bursty traffic of these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it to traditional static bandwidth allocation schemes.
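
    A minimal Python sketch contrasting the two approaches compared in the thesis: a static allocation fixed at the peak rate versus a dynamic scheme that periodically re-sizes the VP around the measured load plus a headroom margin; the rate samples and margin are illustrative assumptions.

    def static_allocation(peak_rate_mbps):
        # Provisioned once for the worst case, regardless of the actual load.
        return peak_rate_mbps

    def dynamic_allocation(recent_rates_mbps, margin=1.3, floor_mbps=1.0):
        # Re-sized periodically around the measured mean plus a burst margin.
        mean = sum(recent_rates_mbps) / len(recent_rates_mbps)
        return max(mean * margin, floor_mbps)

    if __name__ == "__main__":
        samples = [12.0, 3.0, 4.0, 2.0, 15.0, 5.0]   # bursty rate samples, Mb/s
        print("static :", static_allocation(max(samples)))
        print("dynamic:", dynamic_allocation(samples))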

    Applications of satellite technology to broadband ISDN networks

    Two satellite architectures for delivering broadband integrated services digital network (B-ISDN) services are evaluated. The first is assumed to be integral to an existing terrestrial network, and provides complementary services such as interconnects to remote nodes as well as high-rate multicast and broadcast service. The interconnects operate at 155 Mb/s and are shown to be met with a nonregenerative multibeam satellite having ten 1.5-degree spot beams. The second satellite architecture focuses on providing private B-ISDN networks as well as acting as a gateway to the public network. This is conceived as being provided by a regenerative multibeam satellite with an on-board ATM (asynchronous transfer mode) processing payload. With up to 800 Mb/s offered, higher satellite EIRP is required; this is accomplished with twelve 0.4-degree hopping beams covering a total of 110 dwell positions. It is estimated that the space segment capital cost for the first architecture would be about $190M, whereas the second architecture would cost about $250M. The net user cost is given for a variety of scenarios, but the cost for 155 Mb/s service is shown to be about $15-22/minute at 25 percent system utilization.
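
    Purely as a hypothetical illustration of how a per-minute user cost can follow from space-segment capital cost and utilization, the following Python sketch amortizes the quoted $190M over an assumed lifetime and channel count; only the capital cost, the 25 percent utilization, and the resulting $15-22/minute range come from the abstract, while the amortization period and channel count are invented for the example.

    def cost_per_minute(capital_cost_usd, n_channels, utilization,
                        amortization_years=10):
        """Capital cost spread over the channel-minutes actually sold."""
        total_minutes = amortization_years * 365 * 24 * 60
        usable_channel_minutes = total_minutes * n_channels * utilization
        return capital_cost_usd / usable_channel_minutes

    if __name__ == "__main__":
        # Hypothetical: ~8 simultaneous 155 Mb/s channels, 25% utilization,
        # 10-year amortization of the $190M space segment -> roughly $18/minute,
        # consistent with the $15-22/minute range quoted in the abstract.
        print(round(cost_per_minute(190e6, n_channels=8, utilization=0.25), 2))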