
    On-board B-ISDN fast packet switching architectures. Phase 2: Development. Proof-of-concept architecture definition report

    For a next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-multipoint services such as video distribution, together with the large demand for video conferencing, distributed data processing, and network management, makes the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast capability gives the satellite network an advantage over terrestrial networks in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in Phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures that have become commercially available as large-scale integration (LSI) devices.
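    As an illustration of the multicast function the report calls for, the sketch below shows cell replication at the output queues of a hypothetical fast packet switch: a multicast group table (assumed here, not taken from the report) maps a group identifier to the set of downlink beams that should receive a copy of each cell.

```python
from collections import defaultdict, deque

# Hypothetical multicast group table: group ID -> set of downlink output ports.
# This illustrates the generic cell-replication idea behind a multicast FPS,
# not the candidate architecture selected in the report.
MULTICAST_GROUPS = {
    7: {0, 2, 5},   # e.g. a video distribution group spanning three downlink beams
}

output_queues = defaultdict(deque)  # downlink port -> FIFO of cells awaiting transmission

def switch_cell(cell):
    """Enqueue a unicast cell at its destination port, or replicate a multicast
    cell onto every port in its group."""
    group = cell.get("mcast_group")
    if group is not None:
        for port in MULTICAST_GROUPS[group]:
            output_queues[port].append(cell)
    else:
        output_queues[cell["dest_port"]].append(cell)

switch_cell({"payload": b"...", "mcast_group": 7})                      # replicated to ports 0, 2, 5
switch_cell({"payload": b"...", "dest_port": 3, "mcast_group": None})   # unicast to port 3
```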

    Implementation aspects of ATM switches


    Design and analysis of a scalable terabit multicast packet switch: architecture and scheduling algorithms

    The growth and success of the Internet have not only opened a primary route of information exchange for millions of people around the world, but also created unprecedented demand for core network capacity. Existing switches/routers, limited by either switch architecture or arbitration complexity, can reach capacities on the order of gigabits per second, but few of them scale to terabits per second. In this dissertation, we propose three novel switch architectures with accompanying scheduling algorithms to design a terabit backbone switch/router that delivers large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from the unique features of a modular switch architecture and a distributed resource allocation scheme. Switch I is a unique and modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric; hence the architecture is scalable to any large size. We propose a distributed round robin (RR) scheduling algorithm which provides fairness and has very low arbitration complexity. Switch I achieves good performance under uniform traffic, but it does not perform well under non-uniform traffic. Switch II, a modified design, employs link sharing as well as a token ring to overcome this drawback of Switch I. We propose a round robin prioritized link reservation (RR+POLR) algorithm which improves performance, especially under non-uniform traffic. However, the RR+POLR algorithm is not flexible enough to adapt to the input traffic, and in Switch II the link reservation rate has a great impact on switch performance. Finally, Switch III is proposed as an enhanced design using link sharing and dual round robin rings. Packet forwarding is based on link reservation. We propose a queue occupancy based dynamic link reservation (QOBDLR) algorithm which adapts to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource allocation scheme in the sense that dynamic link reservation is carried out according to locally available information, and its arbitration complexity is very low. Compared to the output queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch but also avoids the speedup problem that seriously limits the scalability of the OQ switch. Hence, Switch III would be a good choice for high-performance, scalable, large-capacity core switches.
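    A minimal sketch of the round robin idea underlying the proposed schedulers is given below (an illustration only, assuming a simple request matrix; it is not the dissertation's RR, RR+POLR or QOBDLR algorithm). Each output keeps its own pointer and independently grants the next requesting input, which is what keeps the arbitration distributed and cheap.

```python
# requests[i][j] is True when input i has a cell for output j. Each output grants the
# first requesting input at or after its own round-robin pointer, then advances the
# pointer past the granted input so every input eventually gets served (fairness).
# In a full input-queued scheduler a second "accept" phase would resolve the case
# where two outputs grant the same input; link sharing in the dissertation's designs
# addresses that contention differently.

def round_robin_grants(requests, pointers):
    n = len(requests)
    grants = {}                                  # output j -> granted input i
    for j in range(n):
        for offset in range(n):
            i = (pointers[j] + offset) % n
            if requests[i][j]:
                grants[j] = i
                pointers[j] = (i + 1) % n        # rotate priority for the next cell slot
                break
    return grants

requests = [[True, False, True],
            [True, True, False],
            [False, True, True]]
pointers = [0, 0, 0]
print(round_robin_grants(requests, pointers))    # {0: 0, 1: 1, 2: 0}
```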

    Saturation routing for asynchronous transfer mode (ATM) networks

    The main objective of this thesis is to show that saturation routing, often considered inefficient in the past, can in fact be a viable approach for many important applications and services over an Asynchronous Transfer Mode (ATM) network. For other applications and services, a hybrid approach (one that partially uses saturation routing) is presented. First, the minimal overhead of saturation routing is demonstrated by showing that the ratio, defined as f, of routing overhead cells to information cells is small even for large networks. Second, modeling and simulation and M/D/1 queuing analysis techniques are used to show that the overall effect of saturation routing on performance over ATM networks is not significant. An ATM implementation of saturation routing is then provided, with important extensions to services such as multicast routing. Next, an analytical comparison in terms of routing overhead is made between saturation routing and the Private Network-Network Interface (PNNI) procedure for ATM routing currently proposed by the ATM Forum. This comparison is made for networks of different sizes (343-node and 2401-node networks) and different numbers of hierarchical levels (3 and 4 levels of hierarchy). The results show that the higher the number of levels of hierarchy and the farther (in terms of hierarchical levels) the source and destination nodes are from each other, the more advantageous saturation routing becomes. Finally, a set of measures of performance for use by saturation routing (or any routing algorithm), as metrics for routing path selection, is proposed. Among these measures, an innovative new measure of the quality of service provided to Constant Bit Rate (CBR) users (e.g., voice and video users), called the Burst Voice Arrival Lag (BVAL), is described and derived.
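    The M/D/1 analysis mentioned above rests on the standard fixed-service-time queueing formulas; the sketch below evaluates them for an OC-3c ATM link with a hypothetical overhead ratio f, purely to illustrate the kind of calculation involved (the numbers are not results from the thesis).

```python
# Standard M/D/1 mean-delay formulas for a deterministic (one cell time) service:
# W_q = rho / (2 * mu * (1 - rho)); the total sojourn time adds one service time.

def md1_delay(arrival_rate, service_time):
    """Return (utilization, mean waiting time, mean sojourn time) for an M/D/1 queue."""
    rho = arrival_rate * service_time
    assert rho < 1.0, "queue is unstable"
    wait = rho * service_time / (2.0 * (1.0 - rho))
    return rho, wait, wait + service_time

cell_time = 53 * 8 / 155.52e6   # time to send one 53-byte ATM cell on an OC-3c link, in seconds
f = 0.05                        # hypothetical ratio of routing overhead cells to information cells
for load in (0.5, 0.8):
    rho, wq, t = md1_delay(load * (1 + f) / cell_time, cell_time)
    print(f"offered load {load:.1f} plus overhead: rho={rho:.3f}, wait={wq*1e6:.2f} us, total={t*1e6:.2f} us")
```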

    Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon

    The switching capacity of an Internet router is often dictated by the memory bandwidth required to buffer arriving packets. With the demand for greater capacity and improved service provisioning, inherent memory bandwidth limitations render input queued (IQ) switches and combined input and output queued (CIOQ) architectures more practical. Output-queued (OQ) switches, on the other hand, offer several highly desirable performance characteristics, including minimal average packet delay, controllable Quality of Service (QoS) provisioning, and work-conservation under any admissible traffic conditions. However, the memory bandwidth requirement of such systems is O(NR), where N denotes the number of ports and R the data rate of each port. Clearly, for high port densities and data rates, this constraint dramatically limits the scalability of the switch. In an effort to retain the desirable attributes of output-queued switches while significantly reducing the memory bandwidth requirements, distributed shared memory architectures, such as the parallel shared memory (PSM) switch/router, have recently received much attention. The principal advantage of the PSM architecture derives from the use of slow-running memory units operating in parallel to distribute the memory bandwidth requirement. At the core of the PSM architecture is a memory management algorithm that determines, for each arriving packet, the memory unit in which it will be placed. However, to date, the computational complexity of this algorithm is O(N), thereby limiting the scalability of PSM switches. In an effort to overcome this scalability limitation, the goal of this dissertation is to extend existing shared-memory architecture results while introducing the notion of Fabric on a Chip (FoC). Taking advantage of recent advancements in integrated circuit technologies, FoC aims to consolidate as many packet switching functions as possible on a single chip. Accordingly, this dissertation introduces a novel pipelined memory management algorithm, which plays a key role in the context of on-chip output-queued switch emulation. We discuss in detail the fundamental properties of the proposed scheme, along with hardware-based implementation results that illustrate its scalability and performance attributes. To complement the main effort and further support the notion of FoC, we provide performance analysis of output queued cell switches with heterogeneous traffic. The result is a flexible tool for obtaining bounds on the memory requirements of output queued switches under a wide range of traffic scenarios. Additionally, we present a reconfigurable high-speed hardware architecture for real-time generation of packets for the various traffic scenarios. The work presented in this thesis aims to provide pragmatic foundations for designing next-generation, high-performance Internet switches and routers.
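    The placement constraint at the heart of a PSM memory-management algorithm can be sketched as follows (a greedy illustration of the general idea, not the pipelined algorithm proposed in the dissertation): an arriving packet, whose departure time is fixed by the output-queued schedule being emulated, must be written to a memory unit that is neither being written by another arrival in the same time slot nor already holding a packet with the same departure time.

```python
# Greedy sketch of parallel-shared-memory placement. Each slow memory unit is assumed
# to support one write and one read per time slot, so the conflicts to avoid are
# (a) two arrivals writing the same unit in one slot and
# (b) two packets with the same departure time stored in the same unit.

class PSM:
    def __init__(self, num_memories):
        self.departures = [set() for _ in range(num_memories)]   # departure times held per unit

    def place(self, departure_time, written_this_slot):
        for m, deps in enumerate(self.departures):
            if m not in written_this_slot and departure_time not in deps:
                deps.add(departure_time)
                written_this_slot.add(m)
                return m
        raise RuntimeError("no conflict-free memory unit; add more units")

psm = PSM(num_memories=8)
written = set()                                  # units claimed by arrivals in the current slot
print(psm.place(departure_time=42, written_this_slot=written))   # 0
print(psm.place(departure_time=42, written_this_slot=written))   # 1 (unit 0 already holds a t=42 packet)
```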

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet service provision. The expected challenges of multimedia communication, together with the increasing massive utilization of IP-based applications, urgently require networking solutions to be redesigned in terms of both new functionalities and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial topics/issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering Quality of Service (QoS) requirements for multi-service applications, exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking in internetworking IP and ATM. These technologies, architectures, models and implementations are reviewed in greater detail in addressing possible issues in integrating these architectures in a multi-service, enterprise network. The objective is to make recommendations as to the best means of interworking the two, exploiting the salient features of one another to provide a faster, reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM when a commercial worldwide ATM network is deployed is not addressed, as the details of such a network remain in too great a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.

    On packet switch design


    Switching considerations in storage networks.

    by Leung Yiu Tong. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 96-98). Abstracts in English and Chinese.
    Chapter 1. Introduction
        1.1 Motivation
        1.2 Thesis Organization
    Chapter 2. Storage Network Fundamentals
        2.1 Storage Network Topology
            2.1.1 Direct Attached Storage (DAS)
            2.1.2 Network Attached Storage (NAS)
            2.1.3 Storage Area Network (SAN)
                2.1.3.1 SAN and the Fibre Channel Protocol
            2.1.4 Summary on Storage Network Topology
        2.2 Storage Protocol
            2.2.1 Fibre Channel
                2.2.1.1 Fibre Channel over IP (FCIP)
                2.2.1.2 Internet Fibre Channel Protocol (iFCP)
            2.2.2 Internet SCSI (iSCSI)
            2.2.3 InfiniBand
            2.2.4 Review on Storage Network Protocol
        2.3 Standard Organization
        2.4 Summary
    Chapter 3. Switching Design for Storage Networks
        3.1 Shared Bus Design
        3.2 Time Division Switch
        3.3 Share Buffer Memory Switch
            3.3.1 Parallel Memory Array
            3.3.2 Distributive Storage
        3.4 Crossbar Switch
            3.4.1 Arbitrated Crossbar vs. Buffered Crossbar
                3.4.1.1 Arbitrated Crossbar Switch
                3.4.1.2 Buffered Crossbar Switch
            3.4.2 Switch Scheduling
                3.4.2.1 Bipartite Matching
                3.4.2.2 Token-based Distributive Scheduling
                3.4.2.3 Resource Counting using Semaphore
        3.5 Algebraic Switches
            3.5.1 Switching by Conditionally Nonblocking Properties
            3.5.2 Self-Routing Mechanism with Zero-Bit Buffering
            3.5.3 Multistage Interconnection of Self-routing Concentrators
        3.6 Summary
    Chapter 4. Investigating Switching Issue in Storage Networks
        4.1 Choosing a Suitable Switch
        4.2 Quality of Service (QoS)
        4.3 Multicasting
            4.3.1 Crossbar Switch
            4.3.2 Shared-Buffer Memory Switches
            4.3.3 Algebraic Switch
            4.3.4 Application on Multicast Transmission
        4.4 Load Balancing Mechanism
        4.5 Optimization on Storage Utilization
        4.6 Summary
    Chapter 5. Conclusion and Summary of Original Contributions

    Novel techniques in large scaleable ATM switches

    Bibliography: p. 172-178. This dissertation explores the research area of large scale ATM switches. The requirements for an ATM switch are determined by reviewing the ATM network architecture. These requirements lead to the discussion of an abstract ATM switch, which illustrates the components of an ATM switch that automatically scale with increasing switch size (the Input Modules and Output Modules) and those that do not (the Connection Admission Control and Switch Management systems as well as the Cell Switch Fabric). An architecture is suggested which may result in a scalable Switch Management and Connection Admission Control function; however, the main thrust of the dissertation is confined to the cell switch fabric. The fundamental mathematical limits of ATM switches and buffer placement are presented next, emphasising the desirability of output buffering. This is followed by an overview of the possible routing strategies in a multi-stage interconnection network. A variety of space division switches are then considered, which leads to a discussion of the hypercube fabric, a novel switching technique. The hypercube fabric achieves good performance with O(N·(log₂N)²) scaling. The output module, resequencing, cell scheduling and output buffering techniques are presented, leading to a complete description of the proposed ATM switch. Various traffic models are used to quantify the switch's performance. These include a simple exponential inter-arrival time model, a locality of reference model and a self-similar, bursty, multiplexed Variable Bit Rate (VBR) model. FIFO queueing is simple to implement in an ATM switch; however, more responsive queueing strategies can result in improved performance. An associative memory is presented which allows the separate queues in the ATM switch to be effectively combined into a single logical FIFO queue. The associative memory is described in detail and its feasibility is shown by laying out the integrated circuit masks and performing an analogue simulation of the IC's performance in SPICE3. Although optimisations were required to the original design, the feasibility of the approach is shown with a 15 ns write time and a 160 ns read time for a 32 row, 8 priority bit, 10 routing bit version of the memory. This is achieved with 2 µm technology; more advanced technologies may result in even better performance. The various traffic models and switch models are simulated in a number of runs. This shows the performance of the hypercube fabric, which outperforms a Clos network of equivalent technology and approaches the performance of an ideal reference fabric. The associative memory provides a significant performance advantage in the hypercube network and a modest advantage in the Clos network. The performance of the switches is shown to degrade with increasing traffic density, increasing locality of reference, increasing variance in the cell rate and increasing burst length. Interestingly, the fabrics show no real degradation in response to increasing self-similarity in the traffic. The appendices present suggestions on how redundancy, reliability and multicasting can be achieved in the hypercube fabric, an overview of integrated circuits, and a brief description of commercial ATM switching products. Lastly, a road map to the simulation code is provided in the form of descriptions of the functionality found in all of the files within the source tree. This is intended to provide a starting point for anyone wishing to modify or extend the simulation system developed for this thesis.
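    For reference, the sketch below shows plain dimension-order routing in a binary hypercube, the textbook mechanism such a fabric is built around (the thesis adds buffering, scheduling and resequencing on top, which this sketch does not attempt to reproduce).

```python
# Dimension-order routing in a binary hypercube: correct the differing address bits
# between source and destination one dimension at a time, traversing one link per bit.

def hypercube_path(src, dst, dimensions):
    """Return the sequence of node addresses visited from src to dst."""
    path, node = [src], src
    for d in range(dimensions):
        if (node ^ dst) & (1 << d):
            node ^= 1 << d            # hop along the link in dimension d
            path.append(node)
    return path

# 16-node (4-dimensional) example: route from node 0011 to node 1100.
print([format(n, "04b") for n in hypercube_path(0b0011, 0b1100, 4)])
# ['0011', '0010', '0000', '0100', '1100']
```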

    Performance evaluation of Fast Ethernet, ATM and Myrinet under PVM

    Congestion in network switches can limit the communication traffic between Parallel Virtual Machine (PVM) nodes in a parallel computation. This research introduces a new benchmark to evaluate the performance of PVM in various networking environments. The benchmark is used to achieve a better understanding of the performance limitations in parallel computing that are imposed by the choice of network. The networks considered here are Fast Ethernet, Asynchronous Transfer Mode (ATM) OC-3c (155 Mb/s) and Myrinet. Together, they represent an interesting range of alternatives for parallel cluster computing. A characterization of network delays and throughput and a comparison of the expected costs of the three environments are developed to provide a basis for an informed decision on the networking methods and topology for a parallel database that is being considered for the FBI's National DNA Indexing System (NDIS) [17]. This network is used for communications among the nodes of the parallel machine; thus the security requirements defined for the FBI's Criminal Justice Information Services Division Wide Area Network (CJIS-WAN) [12] are not a concern.
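    The core primitive of such a benchmark is a timed message exchange between two nodes. The sketch below shows a generic TCP ping-pong measurement of round-trip time and throughput (an illustration of the measurement style only; it is not the PVM benchmark developed in this work, and the host name and port are placeholders).

```python
import socket
import time

def pingpong_client(host, port, msg_size, rounds):
    """Bounce a fixed-size message off an echo server and report mean RTT and throughput."""
    payload = b"x" * msg_size
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # avoid Nagle delays for small messages
        start = time.perf_counter()
        for _ in range(rounds):
            s.sendall(payload)
            received = 0
            while received < msg_size:                            # the peer is assumed to echo every byte
                chunk = s.recv(msg_size - received)
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                received += len(chunk)
        elapsed = time.perf_counter() - start
    rtt = elapsed / rounds
    throughput = 2 * msg_size * rounds / elapsed                  # bytes/s, counting both directions
    return rtt, throughput

# Example (requires an echo server on the far node):
# rtt, bw = pingpong_client("node1", 5000, msg_size=65536, rounds=100)
```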