26 research outputs found

    Information Switching Processor (ISP) contention analysis and control

    Future satellite communications, as a viable means of communications and an alternative to terrestrial networks, demand flexibility and low end-user cost. On-board switching/processing satellites can potentially provide these features, allowing flexible interconnection among multiple spot beams, direct-to-user communications services using very small aperture terminals (VSATs), independent uplink and downlink access/transmission system designs optimized to users' traffic requirements, efficient TDM downlink transmission, and better link performance. A flexible switching system on the satellite, in conjunction with low-cost user terminals, will likely benefit future satellite network users.

    Architecture design and performance analysis of practical buffered-crossbar packet switches

    Combined input crosspoint buffered (CICB) packet switches were introduced to relax input-output arbitration timing and provide high throughput under admissible traffic. However, the amount of memory required in the crossbar of an N x N switch is N^2 x k x L, where k is the crosspoint buffer size in cells, which must be at least the round-trip time (RTT), and L is the packet size; the RTT is determined by the distance between the line cards and the switch fabric. When the switch size is large or the RTT is not negligible, the required memory makes buffered-crossbar implementations costly or infeasible. To reduce the required memory, a family of shared-memory combined-input crosspoint-buffered (SMCB) packet switches, in which the crosspoint buffers are shared among inputs, is introduced in this thesis. One of the proposed switches uses a memory speedup of m and dynamic memory allocation, and the other avoids speedup by arbitrating the access of inputs to the crosspoint buffers. These two switches reduce the required memory of the buffered crossbar by 50% or more and achieve equivalent throughput under independent and identically distributed uniform traffic when using random selections.

    The proposed mSMCB switch is extended to support differentiated services and long RTTs. To support P traffic classes with different priorities, CICB switches have been reported to use N^2 x k x L x P memory to avoid blocking of high-priority cells. The proposed SMCB switch with support for differentiated services requires 1/mP of the memory amount of the buffered crossbar and achieves throughput performance similar to that of a CICB switch with comparable priority management, while using no speedup in the shared memory. The throughput performance of the SMCB switch with crosspoint buffers shared by inputs (I-SMCB) is studied under multicast traffic. An output-based shared-memory crosspoint-buffered (O-SMCB) packet switch is also proposed, in which the crosspoint buffers are shared by two outputs and no speedup is used. The proposed O-SMCB switch provides high performance under admissible uniform and nonuniform multicast traffic models while using 50% of the memory used in CICB switches; furthermore, it provides higher throughput than the I-SMCB switch.

    As SMCB switches can efficiently support an RTT twice as long as that supported by CICB switches, and as their performance is bounded by a matching between inputs and crosspoint buffers, a new family of CICB switches with flexible access to crosspoint buffers is proposed to support longer RTTs than SMCB switches and to provide higher throughput under a wide variety of admissible traffic models. The CICB switches with flexible access allow an input to use any available crosspoint buffer at a given output. The proposed switches reduce the required crosspoint buffer size by a factor of N, keep the service of cells in sequence, and use no speedup. This new class of switches achieves higher throughput than CICB switches under a large variety of traffic models, while supporting long RTTs.

    Crosspoint-buffered switches implemented on a single chip have limited scalability. To support a large number of ports, memory-memory-memory (MMM) Clos-network switches are an alternative. An MMM switch that uses the minimum amount of memory at the central module is studied; although this switch provides moderate throughput, it may serve cells out of sequence. As keeping cells in sequence in an MMM switch may require that buffers be distributed per flow, an MMM switch with extended memory in the switch modules is studied. To solve the out-of-sequence problem in MMM switches, a queueing architecture is proposed, and in-sequence service of cells is analyzed.
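    As a rough, hedged illustration of the memory figures quoted above, the Python sketch below compares the N^2 x k x L (and N^2 x k x L x P) crosspoint memory of a plain CICB crossbar with a pool of crosspoint buffers shared between pairs of inputs; the function names, the port count, the cell size, and the sharing factor of 2 are illustrative assumptions, not values or code from the thesis.

        # Rough comparison of crosspoint buffer memory in a CICB crossbar versus a
        # shared-memory (SMCB-style) organization. The sharing factor is an
        # illustrative assumption; the thesis reports reductions of 50% or more.

        def cicb_memory(n_ports, k_cells, cell_bytes, classes=1):
            """Memory (bytes) of an N x N combined input-crosspoint buffered switch:
            one k-cell buffer per crosspoint, replicated per priority class."""
            return n_ports * n_ports * k_cells * cell_bytes * classes

        def shared_memory(n_ports, k_cells, cell_bytes, classes=1, sharing=2):
            """Memory when crosspoint buffers are pooled among `sharing` inputs
            (or outputs), in the spirit of the SMCB proposals described above."""
            return cicb_memory(n_ports, k_cells, cell_bytes, classes) // sharing

        if __name__ == "__main__":
            N = 64          # ports (assumed example value)
            RTT_CELLS = 16  # k must cover the line-card-to-fabric round-trip time
            CELL = 64       # bytes per cell (assumed)
            for p in (1, 4):
                full = cicb_memory(N, RTT_CELLS, CELL, p)
                shared = shared_memory(N, RTT_CELLS, CELL, p, sharing=2)
                print(f"P={p}: CICB {full / 2**20:.1f} MiB -> shared {shared / 2**20:.1f} MiB")

    With the assumed numbers, the shared organization needs half the crossbar memory, which is consistent with the 50%-or-more reduction claimed for the SMCB family.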

    Performance analysis of virtual path over large-scale ATM switches.

    by Tang Oo. Thesis submitted in: December 1997. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 68-[75]). Abstract also in Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Background --- p.1
    Chapter 1.2 --- The Concept of Cross-Path Switching --- p.8
    Chapter 1.3 --- Contribution and Organization of Thesis --- p.12
    Chapter 2 --- The Virtual Path Scheduling Scheme --- p.14
    Chapter 2.1 --- The Trade-off Between Throughput and Concentration Loss --- p.14
    Chapter 2.2 --- Partition of Virtual Paths --- p.19
    Chapter 2.3 --- The Capacity and Route Assignment of Virtual Paths --- p.21
    Chapter 3 --- Performance Analysis and Simulation Results --- p.28
    Chapter 3.1 --- The Improvement of Concentration Loss --- p.28
    Chapter 3.2 --- The Throughput with Look-ahead Scheme --- p.30
    Chapter 3.3 --- The Throughput with Input Smoothing Scheme --- p.34
    Chapter 3.4 --- The Throughput with Bursty Source --- p.37
    Chapter 3.5 --- Buffer Dimensioning and The Cell Loss Probability Due to Buffer Overflow --- p.38
    Chapter 4 --- Capacity Assignment and Evaluation of Multiplexing Gain --- p.47
    Chapter 4.1 --- Principle of Capacity Assignment --- p.47
    Chapter 4.2 --- The Model of Virtual Path --- p.49
    Chapter 4.3 --- Capacity Assignment for CBR Service --- p.51
    Chapter 4.4 --- Capacity Assignment for Real-time VBR Service --- p.53
    Chapter 4.5 --- Capacity Assignment for Non Real-time VBR Service --- p.55
    Chapter 4.6 --- Capacity Matrix --- p.56
    Chapter 4.7 --- The Evaluation of Multiplexing Gain of Input Stage --- p.58
    Chapter 5 --- Discussions and Conclusions --- p.64
    Bibliography --- p.6

    Switching techniques for broadband ISDN

    The properties of switching techniques suitable for use in broadband networks have been investigated. Methods for evaluating the performance of such switches have been reviewed. A notation has been introduced to describe a class of binary self-routing networks, and a technique has been developed for determining the nature of the equivalence between two networks drawn from this class. The necessary and sufficient condition for two packets not to collide in a binary self-routing network has been obtained and used to prove the non-blocking property of the Batcher-banyan switch. A condition for a three-stage network with channel grouping and link speed-up to be non-blocking has been obtained, of which previous conditions are special cases. A new three-stage switch architecture has been proposed, based upon a novel cell-level algorithm for path allocation in the intermediate stage of the switch. The algorithm is suited to hardware implementation using parallelism to achieve a very short execution time. An array of processors is required to implement the algorithm; the processor has been shown to be of simple design. It must be initialised with a count representing the number of cells requesting a given output module. A fast method has been described for performing the request counting using a non-blocking binary self-routing network. Hardware is also required to forward routing tags from the processors to the appropriate data cells once they have been allocated a path through the intermediate stage. A method of distributing these routing tags by means of a non-blocking copy network has been presented. The performance of the new path allocation algorithm has been determined by simulation. The rate of cell loss can increase substantially in a three-stage switch when the output modules are non-uniformly loaded. It has been shown that the appropriate use of channel grouping in the intermediate stage of the switch can reduce the effect of non-uniform loading on performance.
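    As a small, hedged illustration of the binary self-routing behaviour discussed above, the sketch below routes cells through an omega (shuffle-exchange) network, a standard representative of this class: each stage sets one destination-address bit, and two cells collide when they need the same output link of the same stage. The topology choice, the route() helper, and the example port count are illustrative assumptions rather than the notation or path-allocation algorithm of the thesis.

        # Minimal omega-network (binary self-routing) model: each of log2(N) stages
        # shuffles the line positions and then sets the lowest position bit to the
        # next bit of the destination address (most significant bit first). Two cells
        # collide if they leave the same switch element on the same output link,
        # i.e. occupy the same position at the same stage.

        from math import log2

        def route(cells, n_ports):
            """cells: dict {input_port: destination_port}. Returns final positions
            keyed by input port and a list of detected (stage, cell, cell) collisions."""
            stages = int(log2(n_ports))
            pos = {inp: inp for inp in cells}   # current line position of each cell
            dest = dict(cells)
            collisions = []
            for s in range(stages):
                taken = {}
                for inp in sorted(pos):
                    p = pos[inp]
                    p = ((p << 1) | (p >> (stages - 1))) & (n_ports - 1)  # perfect shuffle
                    bit = (dest[inp] >> (stages - 1 - s)) & 1             # destination bit
                    p = (p & ~1) | bit                                    # 2x2 exchange setting
                    if p in taken:
                        collisions.append((s, taken[p], inp))
                    taken[p] = inp
                    pos[inp] = p
            return pos, collisions

        if __name__ == "__main__":
            # Consecutive inputs with sorted destinations route without collisions.
            print(route({0: 3, 1: 5, 2: 6}, 8))
            # Inputs 0 and 4 headed for outputs 0 and 1 contend for a first-stage link.
            print(route({0: 0, 4: 1}, 8))

    The first example, with cells on consecutive inputs and monotonically increasing destinations, is collision-free, which is the intuition behind placing a Batcher sorting network in front of the banyan stage; the second example collides in the first stage.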

    On packet switch design


    A Performance evaluation of several ATM switching architectures

    The goal of this thesis is to evaluate the performance of three Asynchronous Transfer Mode (ATM) switching architectures. After examining many different ATM switching architectures in the literature, the three architectures chosen for study were the Knockout switch, the Sunshine switch, and the Helical switch. A discrete-time, event-driven system simulator, named ProModel, was used to model the switching behavior of these architectures. Each switching architecture was modeled and studied under at least two design configurations. The performance of the three architectures was then investigated under three traffic types representative of traffic found in B-ISDN: random, constant bit rate, and bursty. Several key performance parameters were measured and compared across the architectures. This thesis also explored the implementation complexities and fault tolerance of the three selected architectures.
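    As a hedged sketch of the three traffic types named above, the generators below produce per-slot cell arrivals for random (Bernoulli), constant bit rate, and bursty (two-state on/off) sources; the parameter values and the on/off burst model are common assumptions chosen for illustration and are not the ProModel configurations used in the thesis.

        # Illustrative per-time-slot cell arrival generators for the three traffic
        # types named above. A slotted ATM switch simulator would feed these
        # arrivals into its input ports slot by slot.

        import random

        def random_traffic(load):
            """Bernoulli arrivals: a cell arrives in a slot with probability `load`."""
            while True:
                yield random.random() < load

        def cbr_traffic(period):
            """Constant bit rate: exactly one cell every `period` slots."""
            slot = 0
            while True:
                yield slot % period == 0
                slot += 1

        def bursty_traffic(load, mean_burst):
            """Two-state on/off source: back-to-back cells while 'on', silence while
            'off'; burst and idle lengths are geometric, with the idle mean chosen
            so the offered load equals `load` (an assumed, commonly used model)."""
            p_end_burst = 1.0 / mean_burst
            mean_idle = mean_burst * (1.0 - load) / load
            p_end_idle = 1.0 / mean_idle
            on = False
            while True:
                if on:
                    yield True
                    if random.random() < p_end_burst:
                        on = False
                else:
                    yield False
                    if random.random() < p_end_idle:
                        on = True

        if __name__ == "__main__":
            gen = bursty_traffic(load=0.5, mean_burst=10)
            slots = [next(gen) for _ in range(40)]
            print("".join("1" if arrived else "." for arrived in slots))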

    Optical flow switched networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 253-279).
    In the four decades since optical fiber was introduced as a communications medium, optical networking has revolutionized the telecommunications landscape. It has enabled the Internet as we know it today, and is central to the realization of Network-Centric Warfare in the defense world. Sustained exponential growth in demand for communications bandwidth, however, requires that the nexus of innovation in optical networking continue, in order to ensure cost-effective communications in the future. In this thesis, we present Optical Flow Switching (OFS) as a key enabler of scalable future optical networks. The general idea behind OFS, namely agile, end-to-end, all-optical connections, is decades old, if not as old as the field of optical networking itself. However, owing to the absence of an application for it, OFS remained an underdeveloped idea, bereft of an account of how it could be implemented, how well it would perform, and how much it would cost relative to other architectures. The contributions of this thesis are in providing partial answers to these three broad questions. With respect to implementation, we address the physical-layer design of OFS in the metro area and access, and develop sensible scheduling algorithms for OFS communication. Our performance study comprises a comparative capacity analysis for the wide area, as well as an analytical approximation of the throughput-delay tradeoff offered by OFS for inter-MAN communication. Lastly, with regard to the economics of OFS, we employ an approximate capital expenditure model, which enables a throughput-cost comparison of OFS with other prominent candidate architectures. Our conclusions point to the fact that OFS offers a significant advantage over other architectures in economic scalability. In particular, for sufficiently heavy traffic, OFS handles large transactions at far lower cost than other optical network architectures. In light of the increasing importance of large transactions in both commercial and defense networks, we conclude that OFS may be crucial to the future viability of optical networking.
    by Guy E. Weichenberg. Ph.D.

    Optical Wireless Data Center Networks

    Bandwidth- and computation-intensive Big Data applications in disciplines like social media, bio- and nano-informatics, Internet-of-Things (IoT), and real-time analytics are pushing existing access and core (backbone) networks as well as Data Center Networks (DCNs) to their limits. Next-generation DCNs must support continuously increasing network traffic while satisfying minimum performance requirements of latency, reliability, flexibility and scalability. Therefore, a larger number of cables (i.e., copper cables and fiber optics) may be required in conventional wired DCNs. In addition to limiting the possible topologies, a large number of cables may result in design and development problems related to wire ducting and maintenance, heat dissipation, and power consumption. To address the cabling complexity in wired DCNs, we propose OWCells, a class of optical wireless cellular data center network architectures in which fixed line-of-sight (LOS) optical wireless communication (OWC) links are used to connect racks arranged in regular polygonal topologies. We present the OWCell DCN architecture, develop its theoretical underpinnings, and investigate routing protocols and OWC transceiver design. To realize a fully wireless DCN, servers in racks must also be connected using OWC links. It is difficult, however, to connect multiple adjacent network components, such as servers in a rack, using point-to-point LOS links. To overcome this problem, we propose and validate the feasibility of an FSO-Bus to connect multiple adjacent network components using NLOS point-to-point OWC links. Finally, to complete the design of the OWC transceiver, we develop a new class of strictly and rearrangeably non-blocking multicast optical switches in which multicast is performed efficiently at the physical optical (lower) layer rather than at upper layers (e.g., the application layer). Advisors: Jitender S. Deogun and Dennis R. Alexander
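    A minimal, hedged sketch of the polygonal rack arrangement described above: racks sit at the vertices of a regular polygon, fixed LOS links are modelled as graph edges (here, to each rack's two ring neighbours and to the rack directly across the cell), and hop counts are found by breadth-first search. The specific link pattern, rack count, and helper names are illustrative assumptions; the actual OWCell topologies and routing protocols are defined in the dissertation.

        # Toy model of an optical-wireless cell: n racks on the vertices of a regular
        # polygon, each with fixed LOS links to its two ring neighbours and to the
        # rack directly across the cell (an assumed link pattern for an even number
        # of racks, not the pattern defined in the dissertation).

        from collections import deque

        def owcell_links(n_racks):
            links = set()
            for r in range(n_racks):
                links.add(frozenset((r, (r + 1) % n_racks)))                 # ring neighbours
                links.add(frozenset((r, (r + n_racks // 2) % n_racks)))      # cross-cell LOS link
            return links

        def hop_count(links, src, dst):
            """Breadth-first search over the LOS links; returns the minimum hop count."""
            adj = {}
            for a, b in (tuple(link) for link in links):
                adj.setdefault(a, set()).add(b)
                adj.setdefault(b, set()).add(a)
            seen, queue = {src}, deque([(src, 0)])
            while queue:
                node, hops = queue.popleft()
                if node == dst:
                    return hops
                for nxt in adj[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, hops + 1))
            return None

        if __name__ == "__main__":
            links = owcell_links(8)                    # octagonal cell
            print(len(links), "LOS links")
            print(max(hop_count(links, 0, d) for d in range(1, 8)), "hop diameter")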