
    Control Plane Hardware Design for Optical Packet Switched Data Centre Networks

    Optical packet switching for intra-data centre networks is key to addressing traffic requirements. Photonic integration and wavelength division multiplexing (WDM) can overcome bandwidth limits in switching systems. A promising technology for building a nanosecond-reconfigurable, WDM-compatible photonic-integrated switch is the semiconductor optical amplifier (SOA). SOAs are typically used as gating elements in a broadcast-and-select (B&S) configuration to build an optical crossbar switch. For larger switch sizes, a three-stage Clos network based on crossbar nodes is a viable architecture. However, the design of the switch control plane is one of the barriers to packet switching: it must run on packet timescales, which becomes increasingly challenging as line rates grow. The scheduler, which allocates switch paths, limits the control clock speed. To this end, the research contribution was the design of highly parallel hardware schedulers for crossbar and Clos network switches. On a field-programmable gate array (FPGA), the minimum scheduler clock period achieved was 5.0 ns and 5.4 ns for a 32-port crossbar and Clos switch, respectively. By using parallel path allocation modules, one per Clos node, a minimum clock period of 7.0 ns was achieved for a 256-port switch. For scheduler application-specific integrated circuit (ASIC) synthesis, this reduces to 2.0 ns, a record result enabling scalable packet switching. Furthermore, the control plane was demonstrated experimentally. Moreover, a cycle-accurate network emulator was developed to evaluate switch performance. Results showed switch saturation throughput at a traffic load of 60% of capacity, with sub-microsecond packet latency, for a 256-port Clos switch, outperforming state-of-the-art optical packet switches.
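
    The scheduling step these clock-period figures refer to can be illustrated with a minimal software model. The sketch below shows one scheduling cycle of an N x N crossbar using a single-iteration round-robin request/grant/accept scheme (iSLIP-style); it is an illustration of parallel arbitration under stated assumptions, not the paper's actual FPGA/ASIC design, and all names in it are hypothetical.

        # Minimal software model of one crossbar scheduling cycle using a
        # single-iteration round-robin request/grant/accept (iSLIP-style) scheme.
        # In hardware, the per-output grant and per-input accept steps run in
        # parallel; here they are written as loops for clarity.

        N = 32  # switch radix (hypothetical)

        def schedule_cycle(requests, grant_ptr, accept_ptr):
            """requests[i]: set of output ports requested by input i.
            grant_ptr / accept_ptr: per-port round-robin pointers (lists of length N).
            Returns a dict mapping each matched input to its granted output."""
            # Grant phase: each output independently picks one requesting input,
            # searching round-robin from its grant pointer.
            grants = {}  # input -> list of outputs that granted it
            for out in range(N):
                requesting = [i for i in range(N) if out in requests[i]]
                for k in range(N):
                    cand = (grant_ptr[out] + k) % N
                    if cand in requesting:
                        grants.setdefault(cand, []).append(out)
                        break
            # Accept phase: each input independently accepts one granting output;
            # pointers advance only for accepted grants, as in iSLIP.
            matches = {}
            for inp, outs in grants.items():
                for k in range(N):
                    cand = (accept_ptr[inp] + k) % N
                    if cand in outs:
                        matches[inp] = cand
                        accept_ptr[inp] = (cand + 1) % N
                        grant_ptr[cand] = (inp + 1) % N
                        break
            return matches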

    Admission control in Flow-Aware Networking (FAN) architectures under GridFTP traffic

    This is the author's version of a work that was accepted for publication in Optical Switching and Networking; changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. A definitive version was subsequently published in Optical Switching and Networking, 6, 9 (2009), DOI: 10.1016/j.osn.2008.05.003 (selected papers from the First International Symposium on Advanced Networks and Telecommunication Systems, ANTS 2007).
    The virtualization of computing and networking resources is the main objective of Grid services. Such a concept is already used in the context of Web services on the Internet. In the next few years, a large number of applications belonging to various domains (biotechnology, banking, finance, car and aircraft manufacturing, nuclear energy, etc.) will also benefit from Grid services. Admission control is a key functionality for Quality of Service (QoS) provision in IP networks, and more specifically for Grid service provision. Service differentiation (DS) is a widely deployed technique on the Internet; it operates at the packet level in a best-effort mode. Flow-Aware Networking (FAN), which operates at the scale of IP flows, relies on implicit flow differentiation through priority fair queuing (PFQ) and may be seen as an alternative to DS. A Grid session may be seen as a succession of parallel TCP/IP flows characterized by data transfers of much larger volume than usual TCP/IP flows. In this paper, we propose an extension of FAN to the Grid environment called Grid over FAN (GoFAN). We compare, by means of computer simulations, the efficiency of Grid over DS (GoDS) and GoFAN. Two variants of the GoFAN architecture, based on different fair queuing algorithms, are considered. As a first step, we provide two short surveys on QoS for the Grid environment and on QoS in IP networks, respectively.
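
    FAN-style admission control is implicit: a new flow is admitted only while the link's measured congestion indicators are healthy, and flows already in progress are protected. The sketch below is a minimal model of that idea under stated assumptions; the fair-rate and priority-load estimators, thresholds, and names are hypothetical and are not the GoFAN parameters.

        # Minimal sketch of FAN-style implicit admission control: new flows are
        # blocked while the link is congested, flows in progress stay protected.
        # Thresholds and indicator names are illustrative assumptions only.

        class FanAdmissionControl:
            def __init__(self, min_fair_rate_bps=1e6, max_priority_load=0.7):
                self.min_fair_rate_bps = min_fair_rate_bps  # hypothetical threshold
                self.max_priority_load = max_priority_load  # hypothetical threshold
                self.protected = set()                      # flows already admitted

            def on_packet(self, flow_id, fair_rate_bps, priority_load):
                """Return True if the packet may enter the priority fair queue."""
                if flow_id in self.protected:
                    return True                             # protect flows in progress
                congested = (fair_rate_bps < self.min_fair_rate_bps
                             or priority_load > self.max_priority_load)
                if congested:
                    return False                            # implicitly reject new flows
                self.protected.add(flow_id)                 # admit and remember the flow
                return True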

    Parallel Desynchronized Block Matching: A Feasible Scheduling Algorithm for the Input-Buffered Wavelength-Routed Switch

    The input-buffered wavelength-routed (IBWR) switch is a promising switching architecture for slotted optical packet switching (OPS) networks. The benefits of the IBWR fabric are better scalability and lower hardware cost compared to output-buffered OPS proposals. A previous work characterized the scheduling problem of this architecture as a type of matching problem in bipartite graphs. This characterization establishes an interesting relation between IBWR scheduling and the scheduling of electronic virtual output queuing switches. In this paper, this relation is further explored for the design of IBWR scheduling algorithms that are feasible in terms of hardware implementation and execution time. As a result, the parallel desynchronized block matching (PDBM) algorithm is proposed. The evaluation results reveal that IBWR switch performance using the PDBM algorithm is close to the performance bound given by OPS output-buffered architectures. The performance gap is especially small for dense wavelength division multiplexing (DWDM) architectures. This research has been funded by the Spanish MCyT grant TEC2004-05622-C04-02/TCM (ARPaq). The authors would also like to thank the COST 291 action and the e-Photon/ONe+ European Network of Excellence.
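
    The bipartite-matching view of IBWR scheduling can be illustrated with a simple maximal-matching sketch. The code below only shows a sequential greedy maximal matching over a request table; the actual PDBM algorithm partitions the requests into desynchronized blocks processed in parallel, which is not modelled here, and the data structures are hypothetical.

        # Greedy maximal matching over a request table, as an illustration of the
        # bipartite-matching step underlying IBWR schedulers such as PDBM. PDBM's
        # parallel, desynchronized block processing is not reproduced here.

        def greedy_maximal_matching(requests):
            """requests: dict input_channel -> set of feasible output channels.
            Returns a dict input_channel -> chosen output, with no output reused."""
            taken = set()
            match = {}
            for inp in sorted(requests):      # a real scheduler desynchronizes this order
                for out in sorted(requests[inp]):
                    if out not in taken:
                        match[inp] = out
                        taken.add(out)
                        break
            return match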

    Performance Assessment of Schedulers in Optical Interconnection Networks

    With the ever-increasing demand for high-performance computing systems, interconnection networks, serving as the communication links in multicore architectures, have become a key element in guaranteeing system performance. Compared with bandwidth-limited, power-hungry electrical interconnection networks, optical integrated interconnection networks, also referred to as optical network-on-chip (ONoC) architectures, are emerging as a promising alternative to enable future computing performance. In ONoC architectures, scheduling algorithms are necessary to avoid packet collisions while achieving high throughput, low latency, and good fairness. Scheduling algorithms exist for non-blocking electrical NoCs; these can be applied to ONoCs while accounting for additional constraints arising from optical component limitations. In this thesis, various scheduling algorithms are simulated in C++, with the objective of comparing their latency and throughput for ONoCs with bus and ring topologies. An optimal scheduler based on a two-step scheduling (TSS) technique is proposed. TSS models the ONoC scheduling problem in two steps. The first step is the matching step: each node pair is represented in an input bipartite graph, and matching takes place between the input and output ports. The second step performs wavelength assignment for each matched node pair while avoiding collisions and respecting wavelength continuity. The two-step approach is considered with both the iSLIP and maximum weighted matching (MWM) algorithms. The proposed TSS scheduler is simulated and its performance is evaluated. The scheduler with the MWM policy, which is based on queue length, achieves better results than the iSLIP policy under any packet arrival process, for both bus and ring topologies. The main result is that the unidirectional ring topology outperforms the bus topology for any number of wavelengths less than or equal to the number of ONoC ports, even though the average path length is longer. The reason is that in the bus topology half of the wavelengths are allocated in each direction, limiting the maximum number of packets per direction; using two transceivers per node can compensate for this, reaching better performance than the ring.
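
    The two-step structure (port matching, then wavelength assignment under the continuity constraint) can be sketched as follows. The matching policy, path model, and data structures below are simplifying assumptions for illustration; they are not the thesis implementation.

        # Sketch of two-step scheduling (TSS): (1) match input ports to output ports,
        # (2) assign each matched pair a wavelength that is free on every link of its
        # path (wavelength continuity). Greedy matching and first-fit assignment are
        # used here purely for illustration; iSLIP or MWM could replace step 1.

        def two_step_schedule(requests, paths, num_wavelengths):
            """requests: dict src_port -> dst_port; paths: dict (src, dst) -> list of link ids."""
            # Step 1: port matching (greedy).
            matched, used_dst = [], set()
            for src, dst in requests.items():
                if dst not in used_dst:
                    matched.append((src, dst))
                    used_dst.add(dst)
            # Step 2: first-fit wavelength assignment with the continuity constraint.
            busy = {}       # link id -> set of wavelengths already in use
            schedule = {}   # (src, dst) -> assigned wavelength
            for src, dst in matched:
                links = paths[(src, dst)]
                for w in range(num_wavelengths):
                    if all(w not in busy.get(link, set()) for link in links):
                        for link in links:
                            busy.setdefault(link, set()).add(w)
                        schedule[(src, dst)] = w
                        break
            return schedule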

    Load-balanced optical switch for high-speed router design

    A hybrid electro-optic router is attractive: packet buffering and table lookup are carried out in the electrical domain, while switching is done optically. In this paper, we propose a load-balanced optical switch (LBOS) fabric for a hybrid router. LBOS comprises N linecards connected by an N-wavelength WDM fiber ring. Each linecard i is configured to receive on channel λi. To send a packet, a linecard selects and transmits on an idle channel based on where the packet goes. The packet remains in the optical domain all the way from an input linecard/port to an output linecard/port. Meanwhile, the load on the ring network is perfectly balanced by spreading packets for different destinations across different wavelengths, and packets for the same destination across different time slots. With the pipelined operation of LBOS, we show that LBOS is an optical counterpart of an efficient load-balanced electronic switch, and close-to-100% throughput can be obtained. To address the ring fairness problem under inadmissible traffic patterns, an efficient throughput-fair scheduler for LBOS is also devised. ©2010 IEEE.
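
    Because linecard j receives only on wavelength λj, the per-slot transmit decision at a source linecard reduces to finding a queued destination whose wavelength is idle in the current slot. The sketch below is a simplified, hypothetical model of that decision (one transmitter per linecard, round-robin over virtual output queues); it is not the paper's hardware design or its throughput-fair scheduler.

        # Per-slot transmit decision at one linecard of an N-wavelength WDM ring,
        # LBOS-style: a packet destined to linecard d must use wavelength d, so we
        # send the first queued packet whose wavelength is idle as the slot passes.
        # Single transmitter and round-robin destination scan are assumptions.

        N = 16  # linecards == wavelengths (hypothetical size)

        def try_transmit(slot_busy, vo_queues, rr_ptr):
            """slot_busy[w]: True if wavelength w is occupied in the current slot here.
            vo_queues[d]: list of packets destined to linecard d.
            Returns (packet, wavelength, new_rr_ptr); packet is None if nothing is sent."""
            for k in range(N):
                dst = (rr_ptr + k) % N
                if vo_queues[dst] and not slot_busy[dst]:
                    pkt = vo_queues[dst].pop(0)
                    slot_busy[dst] = True           # occupy this slot on wavelength dst
                    return pkt, dst, (dst + 1) % N  # advance the round-robin pointer
            return None, None, rr_ptr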

    Parallel Modular Scheduler Design for Clos Switches in Optical Data Center Networks

    As data centers enter the exascale computing era, the traffic exchanged between internal network nodes increases exponentially. Optical networking is an attractive solution to deliver the high capacity, low latency, and scalable interconnection needed. Among switching methods, packet switching is particularly interesting as it can be widely deployed in the network to handle rapidly changing traffic of arbitrary size. Nanosecond-reconfigurable photonic integrated switch fabrics, built as multi-stage architectures such as the Clos network, are key enablers of scalable packet switching. However, the accompanying control plane also needs to operate on packet timescales. Designing a central scheduler to control an optical packet switch in nanoseconds is challenging, especially as the switch size increases. To this end, we present a highly parallel, modular scheduler design for Clos switches, along with a proposed routing scheme, to enable scalable nanosecond scheduling. We synthesize our scheduler as an application-specific integrated circuit (ASIC) and demonstrate scaling to a 256 × 256 size with an ultra-low scheduling delay of only 6.0 ns. In a cycle-accurate rack-scale network emulation for this switch size, we show a minimum end-to-end latency of 30.8 ns and maintain nanosecond-scale average latency up to 80% of input traffic load. We achieve zero packet loss and short-tailed packet latency distributions for all traffic loads and switch sizes. Our work is compared to state-of-the-art optical switches in terms of scheduling delay, packet latency, and switch throughput.
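
    The routing part of Clos scheduling (choosing a middle-stage module for every granted input/output pair) can be illustrated with a simple first-fit model. The sketch below is a sequential illustration with hypothetical parameters; the paper's scheduler performs this allocation with parallel modules, one per Clos node, which is not modelled here.

        # First-fit middle-stage routing for a three-stage Clos(m, n, k) network:
        # each granted (input, output) pair needs a middle module with a free link
        # from the pair's input edge module and a free link to its output edge module.
        # The parameters and the sequential search are illustrative assumptions only.

        def route_clos(grants, n, m, k):
            """grants: list of (input_port, output_port) pairs; n ports per edge module,
            m middle modules, k edge modules per side. Returns (input, middle, output) triples."""
            in_free = [[True] * m for _ in range(k)]   # free links: input module -> middles
            out_free = [[True] * m for _ in range(k)]  # free links: middles -> output module
            routed = []
            for inp, out in grants:
                im, om = inp // n, out // n            # edge modules of this pair
                for mid in range(m):
                    if in_free[im][mid] and out_free[om][mid]:
                        in_free[im][mid] = False
                        out_free[om][mid] = False
                        routed.append((inp, mid, out))
                        break
            return routed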
