160 research outputs found

    Resilient Cell Resequencing in Terabit Routers

    Multistage interconnection networks with internal cell buffering and dynamic routing are among the most cost-effective architectures for multi-terabit Internet routers. One of the key design issues for such systems is maintaining cell ordering, since cells are subject to varying delays as they pass through the interconnection network. The most flexible and scalable approach to cell resequencing uses timestamps and a time-ordered resequencing buffer at each router output port. Conventional, fixed-threshold resequencers can perform poorly in the presence of extreme traffic conditions. This paper explores alternative resequencer designs that are more tolerant of such traffic. These alternatives include a novel adaptive resequencer that adjusts the time cells spend waiting in the resequencing buffer based on the recent history of the interconnection network delay. The design is straightforward to implement and requires only constant time per cell, making it suitable for systems with link speeds of up to 40 Gb/s. We show that the combination of adaptive resequencing and appropriately designed interconnection networks can limit resequencing errors to negligible levels without requiring large resequencing latencies.
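    To make the timestamp-and-buffer idea concrete, the sketch below shows a toy adaptive resequencer: cells are held in timestamp order and released once they have waited longer than an estimate of the current network delay plus a margin. The EWMA adaptation rule, the `margin` parameter, and the class and method names are illustrative assumptions, and the heap stands in for the paper's constant-time time-ordered buffer, whose details the abstract does not give.

```python
import heapq
import itertools

class AdaptiveResequencer:
    """Toy timestamp-based resequencing buffer whose hold time adapts to
    recently observed network delay (EWMA + margin: an assumed rule)."""

    def __init__(self, alpha=0.1, margin=2.0, initial_delay=5.0):
        self.buffer = []                 # min-heap ordered by cell timestamp
        self.alpha = alpha               # EWMA smoothing factor
        self.margin = margin             # extra wait beyond the delay estimate
        self.est_delay = initial_delay   # running estimate of network delay
        self._seq = itertools.count()    # tie-breaker for equal timestamps

    def enqueue(self, timestamp, arrival_time, cell):
        # Update the delay estimate from this cell's measured network delay.
        observed = arrival_time - timestamp
        self.est_delay = (1 - self.alpha) * self.est_delay + self.alpha * observed
        heapq.heappush(self.buffer, (timestamp, next(self._seq), cell))

    def release(self, now):
        """Release, in timestamp order, every cell that has waited long enough."""
        hold = self.est_delay + self.margin
        out = []
        while self.buffer and now - self.buffer[0][0] >= hold:
            out.append(heapq.heappop(self.buffer)[2])
        return out
```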

    Work-Conserving Distributed Schedulers

    Buffered multistage interconnection networks offer one of the most scalable and cost-effective approaches to building high-capacity routers and switches. Unfortunately, the performance of such systems has been difficult to predict in the presence of the extreme traffic conditions that can arise in Internet routers. Recent work introduced the idea of distributed scheduling to regulate the flow of traffic in such systems. This work demonstrated (using simulation and experimental measurements) that distributed scheduling can enable robust performance, even in the presence of adversarial traffic patterns. In this paper, we show that appropriately designed distributed scheduling algorithms are provably work-conserving for speedups of 2 or more. Two of the three algorithms presented were inspired by algorithms previously developed for crossbar scheduling. The third has no direct counterpart in the crossbar scheduling context. In our analysis, we show that distributed schedulers based on blocking flows in small-depth acyclic flow graphs can be work-conserving, just as certain crossbar schedulers based on maximal bipartite matchings have been shown to be work-conserving. We also study the performance of practical variants of the work-conserving algorithms with speedups less than 2, using simulation. These studies demonstrate that distributed scheduling ensures excellent performance under extreme traffic conditions for speedups of less than 1.5.
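    The crossbar analogy the abstract draws on is a maximal bipartite matching between inputs holding cells and free outputs. The toy sketch below computes one greedily over a virtual-output-queue occupancy matrix; it is an assumed illustration of that baseline, not the distributed, flow-graph-based algorithms the paper actually analyses.

```python
def maximal_matching(voq):
    """Greedy maximal matching over a virtual-output-queue occupancy matrix.
    voq[i][j] > 0 means input i holds cells destined for output j."""
    n_in, n_out = len(voq), len(voq[0])
    free_out = [True] * n_out
    match = {}  # input -> output
    for i in range(n_in):
        for j in range(n_out):
            if voq[i][j] > 0 and free_out[j]:
                match[i] = j
                free_out[j] = False
                break
    return match

# Example: 3x3 switch, each entry is the number of queued cells.
voq = [[2, 0, 1],
       [0, 3, 0],
       [1, 0, 0]]
# Prints {0: 0, 1: 1}: maximal (no further edge can be added), though not
# maximum, which is exactly the distinction maximal-matching schedulers accept.
print(maximal_matching(voq))
```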

    Self-Similarity in a multi-stage queueing ATM switch fabric

    Recent studies of digital network traffic have shown that arrival processes in such an environment are more accurately modeled as a statistically self-similar process than as a Poisson-based one. We present a simulation of a combination shared/output queueing ATM switch fabric, sourced by two models of self-similar input. The effect of self-similarity on the average queue length and cell loss probability for this multi-stage queue is examined for varying load, buffer size, and internal speedup. The results using two self-similar input models, Pareto-distributed interarrival times and a Poisson-Zeta ON-OFF model, are compared with each other and with results using Poisson interarrival times and an ON-OFF bursty traffic source with geometrically distributed burst lengths. The results show that at high utilization and a high degree of self-similarity, switch performance improves only slowly with increasing buffer size and speedup, compared to the improvement seen with Poisson-based traffic.
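    One of the two self-similar sources named in the abstract is a stream with Pareto-distributed interarrival times. The sketch below generates such a stream alongside a Poisson (exponential) baseline with the same mean; the shape parameter alpha = 1.4, the seed, and the helper names are illustrative assumptions, not the values used in the study.

```python
import random

def pareto_interarrivals(n, mean, alpha=1.4):
    """n Pareto-distributed interarrival times with the given mean.
    A shape parameter 1 < alpha < 2 gives infinite variance, the heavy-tailed
    behaviour associated with self-similar traffic."""
    xm = mean * (alpha - 1) / alpha          # scale so the mean comes out right
    return [xm * random.paretovariate(alpha) for _ in range(n)]

def poisson_interarrivals(n, mean):
    """Exponential interarrivals for the Poisson baseline."""
    return [random.expovariate(1.0 / mean) for _ in range(n)]

random.seed(1)
heavy = pareto_interarrivals(100_000, mean=1.0)
light = poisson_interarrivals(100_000, mean=1.0)
# The Pareto stream shows far larger gaps (and hence bursts) than the Poisson one.
print(max(heavy), max(light))
```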

    A hybrid queueing model for fast broadband networking simulation

    PhD thesis. This research focuses on the investigation of a fast simulation method for broadband telecommunication networks, such as ATM networks and IP networks. As a result of this research, a hybrid simulation model is proposed, which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The division between foreground and background traffic, and the way these different types of traffic are handled to reduce simulation time, is the major contribution reported in this thesis. Background traffic is present to ensure that proper buffering behaviour is included during the simulation experiments, but, unlike in traditional simulation techniques, only the foreground traffic of interest is simulated. Foreground and background traffic are dealt with in different ways. To avoid the extra events on the event list, and the processing overhead, associated with the background traffic, the novel technique investigated in this research is to remove the background traffic completely and adjust the service time of the queues to compensate (in most cases, the service time for the foreground traffic will increase). By removing the background traffic from the event-driven simulator, the number of cell-processing events is reduced drastically. Validation of this approach shows that, overall, the method works well, but simulations using this method do show some differences from experimental results on a testbed, mainly because of the assumptions behind the analytical model that make the modelling tractable. Hence, the analytical model needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model and the testbed. Following this training, simulations can be run using the output of the neural network to adjust the analytical model for those particular traffic conditions. The approach is applied to cell-scale and burst-scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all these applications, the method delivers a fast simulation as well as accurate results.
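    One plausible way to read the service-time adjustment is as a capacity-reduction rule: if background traffic would have occupied a fraction rho of the server, the foreground service time is inflated by 1/(1 - rho). The sketch below illustrates that assumption only; the thesis derives its own analytical adjustment and then corrects it with a trained neural network, neither of which is reproduced here.

```python
def adjusted_service_time(base_service_time, background_load):
    """Capacity-reduction approximation (an assumption, not the thesis model):
    background cells are removed from the event list, and the foreground
    service time is inflated so the queue behaves as if the background traffic
    were still consuming its share of the server."""
    if not 0 <= background_load < 1:
        raise ValueError("background load must be in [0, 1)")
    return base_service_time / (1.0 - background_load)

# Example: a cell served in 2.7 us on an idle link appears to take 5.4 us
# when background traffic occupies 50% of the link.
print(adjusted_service_time(2.7e-6, 0.5))
```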

    Design of a scheduling mechanism for an ATM switch

    In this dissertation, the candidate proposes the use of a ratio to multiply the weights used in the matching algorithm, in order to control the delay that individual connections encounter. We demonstrate the improved characteristics of a switch using such a ratio, presenting results from simulations. The candidate also proposes a novel scheduling mechanism for an input-queued ATM switch. In order to evaluate the performance of the scheduling mechanism in terms of throughput and fairness, various metrics, initially proposed in the literature to evaluate output-buffered switches, are evaluated, adjusted, and applied to input scheduling. In particular, the Worst-case Fairness Index (WFI), which measures the maximum delay a connection will encounter, is derived for use in input-queued switches.
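    The ratio idea can be pictured as scaling each connection's matching weight before the matching is computed. The sketch below multiplies head-of-line waiting times by per-connection ratios and then builds a matching greedily; the greedy matcher, the choice of waiting time as the base weight, and all names are illustrative assumptions rather than the dissertation's actual mechanism.

```python
def ratio_weighted_matching(wait_times, ratios):
    """Pick a matching greedily by descending (waiting time * ratio).
    wait_times[i][j]: head-of-line wait of the queue at input i for output j.
    ratios[i][j]:     per-connection ratio scaling that connection's weight,
                      so delay-sensitive connections win the match more often."""
    edges = sorted(
        ((wait_times[i][j] * ratios[i][j], i, j)
         for i in range(len(wait_times))
         for j in range(len(wait_times[0]))
         if wait_times[i][j] > 0),
        reverse=True)
    used_in, used_out, match = set(), set(), {}
    for _, i, j in edges:
        if i not in used_in and j not in used_out:
            match[i] = j
            used_in.add(i)
            used_out.add(j)
    return match

wait = [[4, 1], [3, 5]]
ratio = [[1.0, 1.0], [2.0, 1.0]]            # boost the connection input 1 -> output 0
print(ratio_weighted_matching(wait, ratio))  # {1: 0, 0: 1}; without the boost: {1: 1, 0: 0}
```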

    Queuing delays in randomized load balanced networks

    Valiant’s concept of Randomized Load Balancing (RLB), also promoted under the name ‘two-phase routing’, has previously been shown to provide a cost-effective way of implementing overlay networks that are robust to dynamically changing demand patterns. RLB is accomplished in two steps: in the first step, traffic is randomly distributed across the network, and in the second step, traffic is routed to its final destination. One of the benefits of RLB is that packets experience only a single stage of routing, thus reducing the queuing delays associated with multi-hop architectures. In this paper, we study the queuing performance of RLB, both through analytical methods and through packet-level simulations using ns2 on three representative carrier networks. We show that purely random traffic splitting in the randomization step of RLB leads to higher queuing delays than pseudo-random splitting using, e.g., a round-robin schedule. Furthermore, we show that, for pseudo-random scheduling, queuing delays depend significantly on the degree of uniformity of the offered demand patterns, with uniform demand matrices representing a provably worst-case scenario. These results are independent of whether RLB gives traffic from step one priority over traffic from step two. A comparison with multi-hop shortest-path routing reveals that RLB eliminates the occurrence of demand-specific hot spots in the network.
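    The contrast between purely random and pseudo-random (round-robin) splitting in RLB's first phase can be seen in a few lines: random splitting leaves some intermediate node with noticeably more than its fair share of packets, while round-robin splitting is perfectly smooth. The sketch below is an assumed illustration with made-up packet and node counts; it is not the ns2 setup used in the paper.

```python
import random
from collections import Counter

def split_random(n_packets, n_nodes, rng):
    """Phase-1 RLB splitting: each packet goes to a uniformly random intermediate node."""
    return [rng.randrange(n_nodes) for _ in range(n_packets)]

def split_round_robin(n_packets, n_nodes):
    """Pseudo-random (round-robin) splitting: packets cycle through the nodes."""
    return [i % n_nodes for i in range(n_packets)]

def worst_node_load(assignment, n_nodes):
    counts = Counter(assignment)
    return max(counts.get(k, 0) for k in range(n_nodes))

rng = random.Random(7)
n_packets, n_nodes = 10_000, 16
# Random splitting: the busiest node typically receives somewhat more than 625 packets.
print(worst_node_load(split_random(n_packets, n_nodes, rng), n_nodes))
# Round-robin splitting: every node receives exactly 10000 / 16 = 625 packets.
print(worst_node_load(split_round_robin(n_packets, n_nodes), n_nodes))
```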