
    Multicast scheduling in feedback-based two-stage switch

    Proceedings of the IEEE Workshop on High Performance Switching and Routing, 2009, p. 28-33.
    Scalability is of paramount importance in high-speed switch design. Two limiting factors are the complexity of the switch fabric and the need for a sophisticated central scheduler. In this paper, we focus on designing a scalable multicast switch. Given that the majority of Internet traffic is unicast, a cost-effective solution is to adopt a unicast switch fabric for handling both unicast and multicast traffic. Unlike existing approaches, we base our multicast switch design on the load-balanced two-stage switch architecture because it does not require a central scheduler and its unicast switch fabric only needs to realize N switch configurations. Specifically, we adopt the feedback-based two-stage switch architecture [10] because it elegantly solves the notorious packet mis-sequencing problem yet delivers excellent throughput-delay performance. By slightly modifying the operation of the original feedback-based two-stage switch, we propose a simple distributed multicast scheduling algorithm. Simulation results show that with packet duplication at both the input ports and the middle-stage ports, the proposed multicast scheduling algorithm significantly reduces the average packet delay and the delay variation among different copies of the same multicast packet. Keywords: feedback-based two-stage switch, scalable multicast switch, load-balanced switch. © 2009 IEEE.
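    To make the duplication scheme concrete, here is a minimal Python sketch (an illustrative assumption, not the paper's algorithm) of how a multicast fan-out set could be split so that copies are made at both the input port and the middle-stage ports of a load-balanced two-stage switch; the round-robin partitioning and the port count are hypothetical.

```python
# Illustrative sketch only: a toy model of multicast copy fan-out in a
# load-balanced two-stage switch. The round-robin splitting policy is an
# assumption for illustration, not the scheduling algorithm of the paper.

N = 4  # ports per stage (hypothetical size)

def split_fanout(fanout, n_middle=N):
    """Partition a multicast fan-out set across middle-stage ports.

    The input port duplicates the packet once per non-empty partition;
    each middle-stage port then duplicates its copy to the output ports
    it was assigned, so no single port produces every copy.
    """
    buckets = [[] for _ in range(n_middle)]
    for i, out_port in enumerate(sorted(fanout)):
        buckets[i % n_middle].append(out_port)   # round-robin assignment
    return {m: outs for m, outs in enumerate(buckets) if outs}

# A packet destined for output ports {0, 1, 3} arriving at some input port.
assignment = split_fanout({0, 1, 3})
for middle_port, outputs in assignment.items():
    print(f"middle-stage port {middle_port} duplicates to outputs {outputs}")
```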

    Architecture, design, and modeling of the OPSnet asynchronous optical packet switching node

    An all-optical packet-switched network supporting multiple services represents a long-term goal for network operators and service providers alike. The EPSRC-funded OPSnet project partnership addresses this issue from device through to network architecture perspectives, with the key objective of the design, development, and demonstration of a fully operational asynchronous optical packet switch (OPS) suitable for 100 Gb/s dense-wavelength-division multiplexing (DWDM) operation. The OPS is built around a novel buffer and control architecture that has been shown to be highly flexible and to offer the promise of fair and consistent packet delivery under high load, with full support for quality of service (QoS) based on differentiated services over generalized multiprotocol label switching.
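    As a rough illustration of differentiated-services-style buffering in such a node, the following Python sketch shows per-class queues drained in strict priority order; this is an assumption for illustration only, not the OPSnet buffer and control architecture.

```python
# Illustrative sketch only: DiffServ-style per-class buffering with a
# strict-priority dequeue, one simple way a packet-switch buffer controller
# could honour QoS classes. Class names and the policy are assumptions.

from collections import deque

class ClassifiedBuffer:
    """One queue per traffic class; higher-priority classes drain first."""
    def __init__(self, classes=("EF", "AF", "BE")):
        self.order = classes                      # strict priority order
        self.queues = {c: deque() for c in classes}

    def enqueue(self, packet, traffic_class):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        for c in self.order:                      # serve the highest class first
            if self.queues[c]:
                return self.queues[c].popleft()
        return None                               # buffer empty

buf = ClassifiedBuffer()
buf.enqueue("best-effort packet", "BE")
buf.enqueue("expedited packet", "EF")
print(buf.dequeue())   # -> "expedited packet"
```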

    A three-stage ATM switch with cell-level path allocation

    A method is described for performing routing in three-stage asynchronous transfer mode (ATM) switches that feature multiple channels between the switch modules in adjacent stages. The method is suited to hardware implementation using parallelism to achieve a very short execution time. This allows cell-level routing to be performed, whereby routes are updated in each time slot. The algorithm allows a contention-free routing to be performed, so that buffering is not required in the intermediate stage. An algorithm with this property, which preserves the cell sequence, is referred to as a path allocation algorithm. A detailed description of the necessary hardware is presented. This hardware uses a novel circuit to count the number of cells requesting each output module, allocates a path through the intermediate stage of the switch to each cell, and generates a routing tag for each cell indicating the path assigned to it. The method of routing tag assignment described employs a nonblocking copy network. The use of highly parallel hardware reduces the clock rate required of the circuitry for a given switch size. The performance of ATM switches using this path allocation algorithm has been evaluated by simulation and is described.
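    The counting-and-assignment idea behind such a path allocation can be sketched sequentially in Python as below; the paper's contribution is a highly parallel hardware circuit, so this software model (with assumed module and channel counts) only illustrates the bookkeeping, not the implementation.

```python
# Illustrative sketch only: a sequential model of cell-level path allocation
# in a three-stage switch. The real design is a parallel hardware circuit;
# the module and channel counts here are assumptions for illustration.

N_MODULES = 4   # output modules
CHANNELS = 2    # paths available to each output module per time slot

def allocate_paths(requests):
    """requests: list of (input_module, output_module) cell requests.

    Counts the cells requesting each output module and assigns each cell a
    distinct intermediate-stage path while capacity remains, so the
    allocation is contention-free and no intermediate buffering is needed.
    """
    used = [0] * N_MODULES            # cells already routed to each output module
    tags = {}
    for cell_id, (_, out_mod) in enumerate(requests):
        if used[out_mod] < CHANNELS:  # a free path to that module remains
            tags[cell_id] = (out_mod, used[out_mod])   # routing tag: (module, path)
            used[out_mod] += 1
    return tags

cells = [(0, 2), (1, 2), (3, 2), (2, 0)]   # three cells contend for output module 2
print(allocate_paths(cells))               # the third request for module 2 gets no path this slot
```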

    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price, as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache overflow attacks. In this paper we argue that instead of enforcing the same universal flow-cache semantics on all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade, even with complex OpenFlow pipelines.
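    The contrast between a generic flow classifier and a dataplane specialized to the configured pipeline can be illustrated with the Python sketch below; ESwitch itself generates machine code for the fast path, so this is only an analogy, and all rule and field names are hypothetical.

```python
# Illustrative sketch only: specializing a matcher to the fields the
# configured rules actually use, versus classifying generically per packet.
# ESwitch compiles OpenFlow pipelines to machine code; this analogy and all
# names here are assumptions for illustration.

def generic_match(packet, rules):
    """Generic fast path: test every field of every rule for each packet."""
    for rule in rules:
        if all(packet.get(f) == v for f, v in rule["match"].items()):
            return rule["action"]
    return "drop"

def specialize(rules):
    """Build a matcher specialized to the fields used by these rules."""
    fields = sorted({f for r in rules for f in r["match"]})
    table = {tuple(r["match"].get(f) for f in fields): r["action"] for r in rules}
    def fast_path(packet):
        # One exact-match lookup on the relevant fields only.
        return table.get(tuple(packet.get(f) for f in fields), "drop")
    return fast_path

rules = [{"match": {"ip_dst": "10.0.0.1"}, "action": "port1"},
         {"match": {"ip_dst": "10.0.0.2"}, "action": "port2"}]
match = specialize(rules)
print(generic_match({"ip_dst": "10.0.0.2"}, rules))   # -> "port2" (generic path)
print(match({"ip_dst": "10.0.0.2"}))                  # -> "port2" (specialized path)
```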