
    Achieving 100% throughput for multicast traffic in input-queued switches

    A general approach to designing an input-queued multicast switch is to employ a multicast switch fabric, in which packets can be replicated inside the fabric. Compared with a unicast switch fabric, this enlarges the achievable traffic rate region of the switch, but the region remains smaller than the admissible traffic rate region; in other words, achieving 100% throughput for every admissible multicast traffic pattern is not possible. In this paper, we first revisit the fundamental problems faced by input-queued switches in supporting multicast traffic. We then argue that a multicast switch fabric is not necessary if a load-balanced approach is followed. Accordingly, an existing load-balanced two-stage switch architecture [12], consisting of unicast switch fabrics, can be adopted to provide 100% throughput for any admissible multicast traffic pattern. Since the two-stage switch requires no speedup in either the switch fabric or the packet buffers, we regard it as a two-stage input-queued switch, and its implementation complexity is much lower than that of conventional (single-stage) input-queued multicast switches. Compared with the work in [12], our approach is more systematic and we propose a more effective load-balancing mechanism. © 2011 IEEE. Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 2011), Houston, TX, USA, 5-9 December 2011.
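    To make the load-balanced idea concrete, the sketch below simulates a tiny two-stage switch built from two unicast crossbars that follow a deterministic round-robin connection pattern, with multicast replication happening only in the middle-stage buffers. The port count, queueing structure, and connection pattern are illustrative assumptions, not the exact mechanism of the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): a load-balanced
# two-stage switch built from two unicast crossbars. Stage 1 spreads
# arriving packets over the middle-stage ports with a deterministic
# round-robin pattern; multicast replication happens only in the
# middle-stage buffers, so neither crossbar needs multicast capability.
from collections import deque

N = 4                                        # switch size (assumption)
mid_voq = [[deque() for _ in range(N)] for _ in range(N)]   # middle-stage VOQs
inputs = [deque() for _ in range(N)]         # input FIFOs of (payload, fanout set)

def time_slot(t, outputs):
    # Stage 1: input i is connected to middle port (i + t) mod N.
    for i in range(N):
        if inputs[i]:
            payload, fanout = inputs[i].popleft()
            m = (i + t) % N
            for out in fanout:               # replicate into unicast VOQs
                mid_voq[m][out].append(payload)
    # Stage 2: middle port m is connected to output (m + t) mod N.
    for m in range(N):
        out = (m + t) % N
        if mid_voq[m][out]:
            outputs[out].append(mid_voq[m][out].popleft())

outputs = [[] for _ in range(N)]
inputs[0].append(("pkt-A", {1, 3}))          # a multicast packet with fanout {1, 3}
for t in range(2 * N):
    time_slot(t, outputs)
print(outputs)                               # pkt-A reaches outputs 1 and 3
```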

    Network Coding in a Multicast Switch

    We consider the problem of serving multicast flows in a crossbar switch. We show that linear network coding across packets of a flow can sustain traffic patterns that cannot be served if network coding is not allowed; thus, network coding leads to a larger rate region in a multicast crossbar switch. We demonstrate a traffic pattern that requires a switch speedup if coding is not allowed, whereas with coding the speedup requirement is eliminated completely. In addition to throughput benefits, coding simplifies the characterization of the rate region. We give a graph-theoretic characterization of the rate region with fanout splitting and intra-flow coding, in terms of the stable set polytope of the 'enhanced conflict graph' of the traffic pattern. No such formulation is known for the case of fanout splitting without coding. We show that computing the offline schedule (i.e., using prior knowledge of the flow arrival rates) can be reduced to certain graph coloring problems. Finally, we propose online algorithms (i.e., using only the current queue occupancy information) for multicast scheduling based on our graph-theoretic formulation. In particular, we show that a maximum weighted stable set algorithm stabilizes the queues for all rates within the rate region. (Submitted to IEEE INFOCOM.)
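    As a rough illustration of the scheduling idea in the last sentence, the sketch below builds a conflict graph over subflows (one vertex per output of each multicast flow), weights vertices by queue backlog, and serves a maximum-weight stable set by brute force. The flows, weights, and conflict rules are assumptions chosen to mirror the abstract's description, not the paper's exact construction.

```python
# Illustrative sketch of max-weight stable set scheduling on a conflict
# graph: one vertex per "subflow" (a multicast flow restricted to one
# output), conflicts between vertices that cannot be served in the same
# slot, vertex weights given by queue backlog. Brute force is used for
# clarity; it is exponential and only suitable for tiny examples.
from itertools import combinations

# Hypothetical flows: (input port, set of output ports)
flows = [(0, {0, 1}), (1, {1}), (1, {2})]
backlog = {0: 5, 1: 3, 2: 4}                 # queue occupancy per flow (assumed)

# Vertices of the conflict graph: one per subflow.
vertices = [(f, i, o) for f, (i, outs) in enumerate(flows) for o in outs]

def conflict(u, v):
    (fu, iu, ou), (fv, iv, ov) = u, v
    if ou == ov:                             # same output port: always a conflict
        return True
    return iu == iv and fu != fv             # same input, different flows

def weight(stable_set):
    return sum(backlog[f] for (f, _, _) in stable_set)

best, best_w = (), 0
for r in range(1, len(vertices) + 1):
    for cand in combinations(vertices, r):
        if all(not conflict(u, v) for u, v in combinations(cand, 2)) and weight(cand) > best_w:
            best, best_w = cand, weight(cand)

print("serve subflows:", best, "weight:", best_w)
```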

    Network Coding for Speedup in Switches

    We present a graph-theoretic upper bound on the speedup needed to achieve 100% throughput in a multicast switch using network coding. By bounding the speedup, we show the equivalence between network coding and speedup in multicast switches: network coding, which is usually implemented in software, can in many cases substitute for speedup, which is often achieved by adding extra switch fabrics. The bound is based on an approach to network coding problems called the 'enhanced conflict graph'. We show that the 'imperfection ratio' of the enhanced conflict graph gives an upper bound on the speedup. In particular, we apply this result to K-by-N switches with traffic patterns consisting of unicasts and broadcasts only, and obtain an upper bound of min{(2K-1)/K, 2N/(N+1)}. (IEEE ISIT.)
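    For a quick numeric feel for the quoted bound, the snippet below evaluates min{(2K-1)/K, 2N/(N+1)} for a few switch sizes; the sizes chosen are arbitrary and the snippet is illustrative only.

```python
# Quick numeric illustration of the speedup bound quoted in the abstract,
# min{(2K-1)/K, 2N/(N+1)}. The bound is always strictly less than 2 and
# approaches 2 as K and N grow.
def speedup_bound(K, N):
    return min((2 * K - 1) / K, 2 * N / (N + 1))

for K, N in [(2, 2), (2, 8), (4, 4), (8, 8), (16, 64)]:
    print(f"K={K:2d}, N={N:2d}: speedup <= {speedup_bound(K, N):.3f}")
```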

    FTMS: an efficient multicast scheduling algorithm for feedback-based two-stage switch

    Two major challenges in designing high-speed multicast switches are the expensive multicast switch fabric and the highly complicated central scheduler. While the recent load-balanced switch architecture uses a simple unicast switch fabric and does not require a central scheduler, it only handles unicast traffic well. In this paper, we extend an existing load-balanced switch, the feedback-based two-stage switch, to support multicast traffic. In particular, an efficient multicast scheduling algorithm, FTMS, is designed. With FTMS, head-of-line (HOL) packet blocking at each input port is eliminated by adopting 'pointer' queues. To cut down queueing delay, packet replication is carried out at the middle-stage ports. Simulation results show that FTMS consistently provides the highest throughput among the multicast scheduling algorithms compared. © 2012 IEEE.
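    The abstract does not spell out the FTMS data structures, but the hedged sketch below shows one way 'pointer' queues can remove HOL blocking at an input port: packets are stored once in a shared buffer and only references are enqueued per destination, so a blocked head packet does not block traffic to other ports. The class and policy names are hypothetical.

```python
# Hedged sketch (the paper's exact FTMS data structures are not given in
# the abstract): store each arriving packet once in a shared buffer and
# enqueue only references to it in per-destination pointer queues.
from collections import deque

class PointerQueuedInput:
    def __init__(self, num_ports):
        self.buffer = {}                        # packet id -> payload
        self.next_id = 0
        self.ptr_q = [deque() for _ in range(num_ports)]  # per-port pointer queues
        self.pending = {}                       # packet id -> remaining fanout count

    def enqueue(self, payload, fanout):
        pid = self.next_id
        self.next_id += 1
        self.buffer[pid] = payload
        self.pending[pid] = len(fanout)
        for port in fanout:
            self.ptr_q[port].append(pid)        # store a pointer, not a copy

    def serve(self, port):
        """Serve one packet toward `port` if that pointer queue is non-empty."""
        if not self.ptr_q[port]:
            return None
        pid = self.ptr_q[port].popleft()
        payload = self.buffer[pid]
        self.pending[pid] -= 1
        if self.pending[pid] == 0:              # free the packet once fully served
            del self.buffer[pid], self.pending[pid]
        return payload

inp = PointerQueuedInput(num_ports=4)
inp.enqueue("pkt-A", fanout=[0, 2])
inp.enqueue("pkt-B", fanout=[1])
print(inp.serve(1))                             # pkt-B is not blocked behind pkt-A
```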

    High Performance Queueing and Scheduling in Support of Multicasting in Input-Queued Switches

    Due to its modest requirements on switching-fabric bandwidth and internal memory, the input-queued architecture is a practical solution for today's very high-speed switches. One of the notoriously difficult problems in the design of input-queued switches with very high link rates is high-performance queueing and scheduling of multicast traffic. This dissertation focuses on proposing novel solutions for this problem. The design challenge stems from the nature of multicast traffic, i.e., a multicast packet typically has multiple destinations. On the one hand, this nature makes queueing and scheduling of multicast traffic much more difficult than that of unicast traffic. For example, virtual output queueing is widely used to completely avoid head-of-line blocking and achieve 100% throughput for unicast traffic; nevertheless, exhaustive multicast virtual output queueing is impractical and results in out-of-order delivery. On the other hand, in spite of extensive studies in the context of either pure unicast traffic or pure multicast traffic, the results from one context are not applicable to the other because of the differing natures of unicast and multicast traffic, and the design of integrated scheduling for both types of traffic remains an open issue. The main contribution of this dissertation is twofold: first, the performance of an interesting approach to efficiently mitigate head-of-line blocking for multicast traffic is theoretically analyzed; second, two novel algorithms are proposed to efficiently integrate unicast and multicast scheduling within one switching fabric. The research work presented in this dissertation concludes that (1) a small number of queues is sufficient to maximize the saturation throughput and delay performance of a large multicast switch with multiple first-in-first-out queues per input port; (2) the theoretical analysis results are indeed valid for practical large-sized switches; (3) for a large M × N multicast switch, the achievable saturation throughput decreases as the ratio M/N decreases; and (4) the two proposed integration algorithms exhibit promising performance in terms of saturation throughput, delay, and packet loss ratio under both uniform Bernoulli and uniform bursty traffic.
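    As a hedged illustration of the "small number of FIFO queues per input port" setting, the sketch below maps each arriving multicast packet to one of k FIFOs; the placement policy (hashing the fanout set) and class names are assumptions for illustration, not the dissertation's scheme.

```python
# Illustrative sketch: with k FIFO queues per input port instead of a
# single FIFO, an arriving multicast packet is mapped to one of the k
# queues, which reduces head-of-line blocking without the exponential
# cost of full multicast virtual output queueing.
from collections import deque

K = 4                                         # small number of FIFOs per input

class MulticastInputPort:
    def __init__(self, k=K):
        self.queues = [deque() for _ in range(k)]

    def enqueue(self, payload, fanout):
        # Simple placement policy for illustration: hash the fanout set.
        idx = hash(frozenset(fanout)) % len(self.queues)
        self.queues[idx].append((payload, set(fanout)))

    def heads(self):
        """Head-of-line packets visible to the scheduler (one per FIFO)."""
        return [q[0] for q in self.queues if q]

port = MulticastInputPort()
port.enqueue("pkt-A", {0, 5})
port.enqueue("pkt-B", {3})
print(port.heads())
```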

    Design and analysis of a scalable terabit multicast packet switch: architecture and scheduling algorithms

    The growth and success of the Internet not only opened a primary route of information exchange for millions of people around the world, but also created unprecedented demand for core network capacity. Existing switches/routers, limited by either switch architecture or arbitration complexity, can reach capacities on the order of gigabits per second, but few of them scale to terabits per second. In this dissertation, we propose three novel switch architectures with coordinated scheduling algorithms to design a terabit backbone switch/router that delivers large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from a modular switch architecture and a distributed resource-allocation scheme. Switch I is a modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric; hence, the architecture is scalable to large sizes. We propose a distributed round-robin (RR) scheduling algorithm that provides fairness and has very low arbitration complexity. Switch I achieves good performance under uniform traffic but does not perform well under non-uniform traffic. Switch II, a modified design, employs link sharing as well as a token ring to overcome the drawback of Switch I. We propose a round-robin prioritized link reservation (RR+POLR) algorithm, which improves performance, especially under non-uniform traffic. However, the RR+POLR algorithm is not flexible enough to adapt to the input traffic, and in Switch II the link reservation rate has a great impact on switch performance. Finally, Switch III is proposed as an enhanced design using link sharing and dual round-robin rings, with packet forwarding based on link reservation. We propose a queue-occupancy-based dynamic link reservation (QOBDLR) algorithm that adapts to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource-allocation scheme in the sense that dynamic link reservation is carried out according to locally available information, and its arbitration complexity is very low. Compared to the output-queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch but also avoids the speedup problem that prevents the OQ switch from being a scalable design. Hence, Switch III is a good choice for high-performance, scalable, large-capacity core switches.
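    The abstract does not give QOBDLR's exact update rules, so the sketch below only illustrates the general idea of adjusting a link reservation from local queue occupancy; the thresholds, step size, and function names are assumptions made for illustration.

```python
# Heavily hedged sketch of queue-occupancy-based dynamic link reservation:
# each input adjusts how much of an output link it reserves using only its
# own (local) queue occupancy. Thresholds and step size are assumptions,
# not the dissertation's actual algorithm.
HIGH, LOW = 8, 2          # occupancy thresholds (assumed)
STEP = 1                  # reservation adjustment per decision (assumed)

def update_reservation(reserved, occupancy, link_capacity, others_reserved):
    """Return the new reservation for one (input, output-link) pair."""
    if occupancy > HIGH and reserved + others_reserved + STEP <= link_capacity:
        return reserved + STEP            # backlog is building: grab more of the link
    if occupancy < LOW and reserved >= STEP:
        return reserved - STEP            # backlog drained: release capacity to others
    return reserved

# Example: one input reacting to a growing, then draining, backlog
# on a link of capacity 10 where other inputs hold 4 units.
reserved = 2
for occ in [1, 5, 9, 12, 12, 3, 1]:
    reserved = update_reservation(reserved, occ, link_capacity=10, others_reserved=4)
    print(f"occupancy={occ:2d} -> reserved={reserved}")
```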