Shortest Path versus Multi-Hub Routing in Networks with Uncertain Demand
We study a class of robust network design problems motivated by the need to
scale core networks to meet increasingly dynamic capacity demands. Past work
has focused on designing the network to support all hose matrices (all matrices
not exceeding marginal bounds at the nodes). This model may be too conservative
if additional information on traffic patterns is available. Another extreme is
the fixed demand model, where one designs the network to support peak
point-to-point demands. We introduce a capped hose model to explore a broader
range of traffic matrices which includes the above two as special cases. It is
known that optimal designs for the hose model are always determined by
single-hub routing, and for the fixed-demand model are based on shortest-path
routing. We shed light on the wider space of capped hose matrices in order to
see which traffic models are more shortest path-like as opposed to hub-like. To
address the space in between, we use hierarchical multi-hub routing templates,
a generalization of hub and tree routing. In particular, we show that by adding
peak capacities into the hose model, the single-hub tree-routing template is no
longer cost-effective. This initiates the study of a class of robust network
design (RND) problems restricted to these templates. Our empirical analysis is
based on a heuristic for this new hierarchical RND problem. We also propose a
routing indicator that accounts for the relative strengths of the marginals and
peak demands, and use it to choose the appropriate routing template. We
benchmark our approach against other
well-known routing templates, using representative carrier networks and a
variety of different capped hose traffic demands, parameterized by the relative
importance of their marginals as opposed to their point-to-point peak demands.
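The capped hose polytope sketched above admits a direct membership test: a traffic matrix is admissible if its row and column sums respect the node marginals (the hose constraints) and each entry additionally respects its point-to-point cap. A minimal sketch in Python, with all names (`marg_out`, `marg_in`, `cap`) illustrative rather than taken from the paper:

```python
def in_capped_hose(D, marg_out, marg_in, cap):
    """Check membership of traffic matrix D in the capped hose polytope:
    hose marginal bounds at every node, plus per-pair peak caps."""
    n = len(D)
    for i in range(n):
        if sum(D[i]) > marg_out[i]:                      # egress marginal at node i
            return False
        if sum(D[j][i] for j in range(n)) > marg_in[i]:  # ingress marginal at node i
            return False
        for j in range(n):
            if D[i][j] > cap[i][j]:                      # point-to-point peak cap
                return False
    return True
```

Setting the caps to the peak matrix with loose marginals recovers the fixed-demand model, while setting the caps loose recovers the pure hose model, which is how the two extremes arise as special cases.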
Queuing delays in randomized load balanced networks
Valiant's concept of Randomized Load Balancing
(RLB), also promoted under the name "two-phase routing",
has previously been shown to provide a cost-effective way of
implementing overlay networks that are robust to dynamically
changing demand patterns. RLB is accomplished in two steps: in
the first step, traffic is randomly distributed across the network,
and in the second step it is routed to its final destination.
One of the benefits of RLB is that packets experience only a
single stage of routing, thus reducing queuing delays associated
with multi-hop architectures. In this paper, we study the queuing
performance of RLB, both through analytical methods and
packet-level simulations using ns2 on three representative carrier
networks. We show that purely random traffic splitting in the
randomization step of RLB leads to higher queuing delays than
pseudo-random splitting using, e.g., a round-robin schedule.
Furthermore, we show that, for pseudo-random scheduling,
queuing delays depend significantly on the degree of uniformity
of the offered demand patterns, with uniform demand matrices
representing a provably worst-case scenario. These results are
independent of whether RLB prioritizes traffic from step
one over traffic from step two. A comparison with
multi-hop shortest-path routing reveals that RLB eliminates the
occurrence of demand-specific hot spots in the network.
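The two splitting strategies compared above can be illustrated with a toy sketch (hypothetical helper names; the paper's evaluation uses analysis and ns2 simulation, not this code). Purely random splitting scatters packets over intermediate nodes with binomial fluctuations, while a round-robin schedule spreads them perfectly evenly, which is why the latter yields lower queuing delays:

```python
import random
from collections import Counter

def split_random(n_nodes, n_packets, seed=0):
    # Purely random step one of RLB: each packet picks a uniformly random
    # intermediate node, so per-node loads fluctuate binomially.
    rng = random.Random(seed)
    return Counter(rng.randrange(n_nodes) for _ in range(n_packets))

def split_round_robin(n_nodes, n_packets):
    # Pseudo-random (deterministic) variant: a round-robin schedule cycles
    # through the intermediate nodes, keeping per-node loads perfectly even.
    return Counter(p % n_nodes for p in range(n_packets))
```

With 1000 packets over 10 intermediates, `split_round_robin` assigns exactly 100 packets per node, whereas `split_random` produces uneven per-node counts whose bursts feed directly into the queues.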
Feedback-based scheduling for load-balanced two-stage switches
A framework for designing feedback-based scheduling algorithms is proposed for elegantly solving the notorious packet missequencing problem of a load-balanced switch. Unlike existing approaches, we show that the efforts made in load balancing and keeping packets in order can complement each other. Specifically, at each middle-stage port between the two switch fabrics of a load-balanced switch, only a single-packet buffer for each virtual output queue (VOQ) is required. Although packets belonging to the same flow pass through different middle-stage VOQs, the delays they experience at different middle-stage ports will be identical. This is made possible by properly selecting and coordinating the two sequences of switch configurations to form a joint sequence with both the staggered symmetry property and the in-order packet delivery property. Based on the staggered symmetry property, an efficient feedback mechanism is designed to allow the right middle-stage port occupancy vector to be delivered to the right input port at the right time. As a result, the performance of load balancing as well as the switch throughput is significantly improved. We further extend this feedback mechanism to support the multicabinet implementation of a load-balanced switch, where the propagation delay between switch linecards and switch fabrics is nonnegligible. As compared to existing load-balanced switch architectures and scheduling algorithms, our solutions impose a modest requirement on switch hardware, but consistently yield better delay-throughput performance. Last but not least, some extensions and refinements are made to address the scalability, implementation, and fairness issues of our solutions. © 2009 IEEE.
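For concreteness, a load-balanced two-stage switch is driven by periodic permutation sequences in both fabrics; the sketch below shows the standard staggered round-robin pattern on which such joint configuration sequences build (a generic illustration with an arbitrary port count, not the paper's specific coordination scheme):

```python
N = 4  # number of ports (illustrative)

def stage1(t, i):
    # First fabric at slot t: input i connects to middle port (i + t) mod N,
    # spreading each input's traffic evenly over all middle-stage ports.
    return (i + t) % N

def stage2(t, m):
    # Second fabric at slot t: middle port m connects to output (m + t) mod N.
    return (m + t) % N

# In every slot each stage realizes a full permutation, and over N slots each
# input-middle (and middle-output) pair is connected exactly once -- the
# periodic structure that coordinated joint sequences exploit.
```

Coordinating the relative phase of the two sequences is what lets a design equalize the delays seen at different middle-stage ports, which is the property the feedback mechanism above builds on.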
A DRAM/SRAM memory scheme for fast packet buffers
We address the design of high-speed packet buffers for Internet routers. We use a general DRAM/SRAM architecture for which previous proposals can be seen as particular cases. For this architecture, large SRAMs are needed to sustain high line rates and a large number of interfaces. A novel algorithm for DRAM bank allocation is presented that reduces the SRAM size requirements of previously proposed schemes by almost an order of magnitude, without memory fragmentation problems. A technological evaluation shows that our design can support thousands of queues for line rates up to 160 Gbps.
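The underlying hybrid idea can be sketched as follows: per-queue SRAM tail caches absorb cell-by-cell arrivals at line rate, and full batches are written to DRAM in one wide access, amortizing the slow DRAM cycle. This is a generic illustration of the DRAM/SRAM architecture with an arbitrary burst size, not the paper's bank-allocation algorithm:

```python
from collections import defaultdict, deque

B = 4  # cells per DRAM burst (illustrative)

class HybridBuffer:
    """Minimal sketch of a DRAM/SRAM packet buffer: small per-queue SRAM
    tail caches take single-cell writes; every B cells are flushed to
    DRAM as one batched access."""
    def __init__(self):
        self.sram_tail = defaultdict(deque)  # queue id -> cells awaiting DRAM
        self.dram = defaultdict(list)        # queue id -> list of B-cell bursts

    def enqueue(self, q, cell):
        self.sram_tail[q].append(cell)
        if len(self.sram_tail[q]) == B:      # batch full: one wide DRAM write
            burst = [self.sram_tail[q].popleft() for _ in range(B)]
            self.dram[q].append(burst)

buf = HybridBuffer()
for c in range(10):
    buf.enqueue(0, c)
# 10 cells -> two full bursts in DRAM, two cells still in the SRAM tail cache
```

The SRAM sizing problem the abstract refers to is exactly how much tail (and head) cache is needed per queue to hide DRAM latency; the bank-allocation algorithm reduces that requirement.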
Can Software Routers Scale?
Software routers can lead us from a network of special-purpose hardware routers to one of general-purpose extensible infrastructure--if, that is, they can scale to high speeds. We identify the challenges in achieving this scalability and propose a solution: a cluster-based router architecture that uses an interconnect of commodity server platforms to build software routers that are both incrementally scalable and fully programmable.