91 research outputs found

    Design and analysis of a scalable terabit multicast packet switch : architecture and scheduling algorithms

    The growth and success of the Internet have not only opened a primary route of information exchange for millions of people around the world, but also created unprecedented demand for core network capacity. Existing switches/routers, limited by bottlenecks in either the switch architecture or the arbitration complexity, can reach capacities on the order of gigabits per second, but few of them scale to terabits per second. In this dissertation, we propose three novel switch architectures with accompanying scheduling algorithms to design a terabit backbone switch/router that delivers large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from a modular switch architecture and a distributed resource allocation scheme. Switch I is a modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric; hence, the architecture scales to very large sizes. We propose a distributed round robin (RR) scheduling algorithm that provides fairness and has very low arbitration complexity. Switch I achieves good performance under uniform traffic but does not perform well under non-uniform traffic. Switch II, a modified design, employs link sharing together with a token ring to overcome this drawback of Switch I. We propose a round robin prioritized link reservation (RR+POLR) algorithm that improves performance, especially under non-uniform traffic. However, RR+POLR is not flexible enough to adapt to the input traffic, and in Switch II the link reservation rate has a great impact on switch performance. Finally, Switch III is proposed as an enhanced design using link sharing and dual round robin rings, with packet forwarding based on link reservation. We propose a queue occupancy based dynamic link reservation (QOBDLR) algorithm that adapts to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource allocation scheme in the sense that dynamic link reservation is carried out according to locally available information, and its arbitration complexity is very low. Compared with the output queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch but also avoids the speedup problem that prevents the OQ switch from scaling. Hence, Switch III is a good choice for high-performance, scalable, large-capacity core switches.
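
    The abstract does not spell out the QOBDLR update rule, so the following is only a minimal sketch of the general idea it describes: each input adjusts its reserved share of an output link using nothing but its locally observed queue occupancy. The thresholds, step size, and bounds below are illustrative assumptions, not values from the dissertation.

    ```python
    # Hypothetical sketch of a queue-occupancy-based dynamic link reservation
    # (QOBDLR-style) update. Thresholds, step sizes, and bounds are illustrative
    # assumptions, not the dissertation's actual parameters.

    def update_reservation(reserved_rate, queue_len,
                           high_mark=64, low_mark=8,
                           step=0.05, min_rate=0.05, max_rate=1.0):
        """Adjust this input's reserved share of an output link using only
        locally available information (its own queue occupancy)."""
        if queue_len > high_mark:
            # Backlog is building: claim a larger share of the link.
            reserved_rate = min(max_rate, reserved_rate + step)
        elif queue_len < low_mark:
            # Queue is nearly empty: release capacity for other inputs.
            reserved_rate = max(min_rate, reserved_rate - step)
        return reserved_rate

    # Example: one scheduling round across the virtual output queues of an input.
    reservations = {out: 0.25 for out in range(4)}   # initial equal shares
    occupancy = {0: 120, 1: 3, 2: 40, 3: 0}          # locally observed queue lengths
    for out in reservations:
        reservations[out] = update_reservation(reservations[out], occupancy[out])
    print(reservations)
    ```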

    Performance study of multirate circuit switching in quantized Clos network.

    by Vincent Wing-Shing Tse. Thesis submitted in: December 1997. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 62-[64]). Abstract also in Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Principles of Multirate Circuit Switching in Quantized Clos Network --- p.10
    Chapter 2.1 --- Formulation of Multirate Circuit Switching --- p.11
    Chapter 2.2 --- Call Level Routing in Quantized Clos Network --- p.12
    Chapter 2.3 --- Cell Level Routing in Quantized Clos Network --- p.16
    Chapter 2.3.1 --- Traffic Behavior in ATM Network --- p.17
    Chapter 2.3.2 --- Time Division Multiplexing in Multirate Circuit Switching and Cell-level Switching in ATM Network --- p.19
    Chapter 2.3.3 --- Cell Transmission Scheduling --- p.20
    Chapter 2.3.4 --- Capacity Allocation and Route Assignment at Cell-level --- p.29
    Chapter 3 --- Performance Evaluation of Different Implementation Schemes --- p.31
    Chapter 3.1 --- Global Control and Distributed Switching --- p.32
    Chapter 3.2 --- Implementation Schemes of Quantized Clos Network --- p.33
    Chapter 3.2.1 --- Classification of Switch Modules --- p.33
    Chapter 3.2.2 --- Bufferless Switch Modules Construction Scheme --- p.38
    Chapter 3.2.3 --- Buffered Switch Modules Construction Scheme --- p.42
    Chapter 3.3 --- Complexity Comparison --- p.44
    Chapter 3.4 --- Delay Performance of The Two Implementation Schemes --- p.47
    Chapter 3.4.1 --- Assumption --- p.47
    Chapter 3.4.2 --- Simulation Result --- p.50
    Chapter 4 --- Conclusions --- p.59
    Bibliography --- p.6

    Performance analysis of virtual path over large-scale ATM switches.

    by Tang Oo. Thesis submitted in: December 1997. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 68-[75]). Abstract also in Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Background --- p.1
    Chapter 1.2 --- The Concept of Cross-Path Switching --- p.8
    Chapter 1.3 --- Contribution and Organization of Thesis --- p.12
    Chapter 2 --- The Virtual Path Scheduling Scheme --- p.14
    Chapter 2.1 --- The Trade-off Between Throughput and Concentration Loss --- p.14
    Chapter 2.2 --- Partition of Virtual Paths --- p.19
    Chapter 2.3 --- The Capacity and Route Assignment of Virtual Paths --- p.21
    Chapter 3 --- Performance Analysis and Simulation Results --- p.28
    Chapter 3.1 --- The Improvement of Concentration Loss --- p.28
    Chapter 3.2 --- The Throughput with Look-ahead Scheme --- p.30
    Chapter 3.3 --- The Throughput with Input Smoothing Scheme --- p.34
    Chapter 3.4 --- The Throughput with Bursty Source --- p.37
    Chapter 3.5 --- Buffer Dimensioning and The Cell Loss Probability Due to Buffer Overflow --- p.38
    Chapter 4 --- Capacity Assignment and Evaluation of Multiplexing Gain --- p.47
    Chapter 4.1 --- Principle of Capacity Assignment --- p.47
    Chapter 4.2 --- The Model of Virtual Path --- p.49
    Chapter 4.3 --- Capacity Assignment for CBR Service --- p.51
    Chapter 4.4 --- Capacity Assignment for Real-time VBR Service --- p.53
    Chapter 4.5 --- Capacity Assignment for Non Real-time VBR Service --- p.55
    Chapter 4.6 --- Capacity Matrix --- p.56
    Chapter 4.7 --- The Evaluation of Multiplexing Gain of Input Stage --- p.58
    Chapter 5 --- Discussions and Conclusions --- p.64
    Bibliography --- p.6

    Scheduling algorithms for high-speed switches

    The virtual output queued (VOQ) switching architecture has been adopted for high-speed switch implementation owing to its scalability and high throughput. An ideal VOQ algorithm should provide Quality of Service (QoS) with low complexity; however, none of the existing algorithms meets these requirements. This dissertation introduces several algorithms for VOQ switches that improve upon existing algorithms in terms of implementation or QoS features. First, the earliest due date first matching (EDDFM) algorithm, which is stable for both uniform and non-uniform traffic patterns, is proposed. EDDFM has a lower probability of cells becoming overdue than other existing maximum weight matching algorithms. Next, the shadow departure time algorithm (SDTA) and iterative SDTA (ISDTA) are introduced; their QoS features are better than those of other existing algorithms of the same computational complexity. Simulations show that the performance of a VOQ switch using ISDTA with a speedup of 1.5 is similar to that of an output queued (OQ) switch in terms of cell delay and throughput. Then, the enhanced Birkhoff-von Neumann decomposition (EBVND) algorithm, which builds on the Birkhoff-von Neumann decomposition (BVND) algorithm and can provide rate and cell delay guarantees, is introduced. Theoretical analysis shows that EBVND outperforms BVND in terms of throughput and cell delay. Finally, the maximum credit first (MCF), enhanced MCF (EMCF), and iterative MCF (IMCF) algorithms are presented. These new algorithms perform similarly to BVND yet are easier to implement in practice.
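
    The BVND scheduler that EBVND builds on is the classical Birkhoff-von Neumann approach: a doubly stochastic rate matrix is decomposed into a weighted set of permutation matrices, and each permutation is used as a crossbar configuration for a fraction of time equal to its weight. The sketch below shows only this textbook decomposition (via repeated perfect matchings on the positive support of the matrix), not the EBVND, MCF, or SDTA refinements proposed in the dissertation.

    ```python
    # Sketch of the classical Birkhoff-von Neumann decomposition behind
    # BVND-style schedulers.

    EPS = 1e-9

    def find_perfect_matching(rate):
        """Find a permutation inside the positive support of `rate`
        (input i -> output perm[i]) using simple augmenting paths."""
        n = len(rate)
        match = [-1] * n            # match[out] = input assigned to that output

        def augment(i, seen):
            for j in range(n):
                if rate[i][j] > EPS and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            if not augment(i, [False] * n):
                return None          # support has no perfect matching
        perm = [0] * n
        for j, i in enumerate(match):
            perm[i] = j              # perm[input] = output
        return perm

    def bvn_decompose(rate):
        """Return (weight, permutation) pairs whose weighted sum reproduces
        the doubly stochastic matrix `rate`."""
        rate = [row[:] for row in rate]      # work on a copy
        schedule = []
        while True:
            perm = find_perfect_matching(rate)
            if perm is None:
                break
            weight = min(rate[i][perm[i]] for i in range(len(rate)))
            schedule.append((weight, perm))
            for i, j in enumerate(perm):
                rate[i][j] -= weight
        return schedule

    # Example: a 3x3 doubly stochastic rate matrix.
    R = [[0.5, 0.3, 0.2],
         [0.2, 0.5, 0.3],
         [0.3, 0.2, 0.5]]
    for w, p in bvn_decompose(R):
        print(f"serve configuration {p} for fraction {w:.2f} of the time")
    ```

    By Birkhoff's theorem the loop terminates for a doubly stochastic matrix with the weights summing to one, which is what allows the resulting schedule to meet the reserved rates.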

    On packet switch design


    Time-Synchronized Optical Burst Switching

    Optical Burst Switching was recently introduced as a protocol for next-generation optical Wavelength Division Multiplexing (WDM) networks. Legacy Optical Circuit Switching over WDM networks cannot achieve the highest bandwidth utilization, while Optical Packet Switching is difficult to implement because of its physical complexity and many technical obstacles, notably the lack of optical buffers and the inefficiency of optical processing. Optical Burst Switching (OBS) was introduced as a compromise between Optical Circuit Switching and Optical Packet Switching; it is designed to address these problems and to support the unique characteristics of an optical network. Since OBS relies on all-optical switching techniques, two major challenges must be taken into consideration when designing an effective OBS system: the cost and complexity of the implementation, and the performance of the system in terms of blocking probability. This research proposes a variation of Optical Burst Switching called Time-Synchronized Optical Burst Switching, which employs a synchronized timeslot-based mechanism that allows a less complex physical switching fabric to be implemented and provides an opportunity to achieve better resource utilization in the network than traditional Optical Burst Switching.
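
    The abstract does not describe the timeslot mechanism in detail, so the following is a purely illustrative sketch of how a synchronized, slotted reservation at a core node might look: a control packet asks for a run of consecutive slots on some wavelength of an output port, and the burst is blocked if no wavelength has the whole window free. The class name, slot-table layout, and first-fit policy are assumptions for illustration only, not the thesis's actual design.

    ```python
    # Hypothetical sketch of slotted burst reservation at a core node in a
    # synchronized, timeslot-based OBS fabric.

    class SlottedPort:
        def __init__(self, num_wavelengths, num_slots):
            # busy[w][t] is True when wavelength w is reserved in timeslot t
            self.busy = [[False] * num_slots for _ in range(num_wavelengths)]

        def reserve(self, start_slot, length):
            """Try to reserve `length` consecutive slots for a burst starting
            at `start_slot`; return the wavelength index or None if blocked."""
            for w, row in enumerate(self.busy):
                window = row[start_slot:start_slot + length]
                if len(window) == length and not any(window):
                    for t in range(start_slot, start_slot + length):
                        row[t] = True
                    return w
            return None   # burst is blocked on this output port

    # Example: a control packet announces a 3-slot burst arriving at slot 5.
    port = SlottedPort(num_wavelengths=4, num_slots=100)
    wavelength = port.reserve(start_slot=5, length=3)
    print("blocked" if wavelength is None else f"scheduled on wavelength {wavelength}")
    ```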

    Switching techniques for broadband ISDN

    The properties of switching techniques suitable for use in broadband networks have been investigated, and methods for evaluating the performance of such switches have been reviewed. A notation has been introduced to describe a class of binary self-routing networks, and from it a technique has been developed for determining the nature of the equivalence between two networks drawn from this class. The necessary and sufficient condition for two packets not to collide in a binary self-routing network has been obtained, and it has been used to prove the non-blocking property of the Batcher-banyan switch. A condition for a three-stage network with channel grouping and link speed-up to be non-blocking has been obtained, of which previous conditions are special cases. A new three-stage switch architecture has been proposed, based upon a novel cell-level algorithm for path allocation in the intermediate stage of the switch. The algorithm is suited to hardware implementation, using parallelism to achieve a very short execution time. An array of processors is required to implement the algorithm; each processor has been shown to be of simple design and must be initialised with a count representing the number of cells requesting a given output module. A fast method has been described for performing the request counting using a non-blocking binary self-routing network. Hardware is also required to forward routing tags from the processors to the appropriate data cells once they have been allocated a path through the intermediate stage; a method of distributing these routing tags by means of a non-blocking copy network has been presented. The performance of the new path allocation algorithm has been determined by simulation. The rate of cell loss can increase substantially in a three-stage switch when the output modules are non-uniformly loaded, and it has been shown that the appropriate use of channel grouping in the intermediate stage of the switch can reduce the effect of non-uniform loading on performance.
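
    For concreteness, the sketch below simulates binary self-routing through one member of the network class the thesis studies, an omega-style shuffle-exchange network: at each stage the line index is shuffled and its least significant bit is set from the next bit of the destination address, and a collision means two cells request the same output link of a 2x2 element. It also illustrates the Batcher-banyan idea that a sorted, concentrated set of distinct destinations passes without collision. The simulation model is a standard textbook one, not the thesis's own notation or collision condition.

    ```python
    # Simulation of binary self-routing through an omega (banyan-class) network.

    def route_omega(cells, n_bits):
        """Self-route cells through an n_bits-stage omega network.
        `cells` maps input port -> destination port."""
        size = 1 << n_bits
        lines = dict(cells)                  # current line index of each cell
        for stage in range(n_bits):
            next_lines = {}
            for src, dest in cells.items():
                bit = (dest >> (n_bits - 1 - stage)) & 1       # routing-tag bit
                pos = ((lines[src] << 1) | bit) & (size - 1)   # shuffle, then switch
                if pos in next_lines.values():
                    # Two cells want the same output link of the same 2x2 element.
                    return f"collision at stage {stage} on link {pos}"
                next_lines[src] = pos
            lines = next_lines
        return lines   # each cell now sits on its destination line

    # Sorted (Batcher-style), concentrated, distinct destinations pass cleanly;
    # an arbitrary pattern may collide.
    print(route_omega({0: 2, 1: 3, 2: 5, 3: 6}, n_bits=3))
    print(route_omega({0: 6, 1: 1, 2: 6, 3: 3}, n_bits=3))  # duplicate destination
    ```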