
    Measuring and Understanding Throughput of Network Topologies

    Full text link
    High throughput is of particular interest in data center and HPC networks. Although myriad network topologies have been proposed, a broad head-to-head comparison across topologies and across traffic patterns is absent, and the right way to compare worst-case throughput performance is a subtle problem. In this paper, we develop a framework to benchmark the throughput of network topologies, using a two-pronged approach. First, we study performance on a variety of synthetic and experimentally measured traffic matrices (TMs). Second, we show how to measure worst-case throughput by generating a near-worst-case TM for any given topology. We apply the framework to study the performance of these TMs in a wide range of network topologies, revealing insights into how topologies perform as they scale, the robustness of performance across TMs, and the effect of scattered workload placement. Our evaluation code is freely available.
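
    In this line of work, "throughput" is usually the maximum concurrent flow: the largest factor by which a traffic matrix can be scaled and still routed within link capacities. A minimal sketch of that linear program, in our own notation rather than necessarily the paper's exact formulation, with path-flow variables f_p, traffic matrix T, and link capacities c_e:

        \max\ \theta \quad \text{s.t.} \quad
        \sum_{p \in P_{ij}} f_p \;\ge\; \theta\, T_{ij} \;\;\forall (i,j), \qquad
        \sum_{p \ni e} f_p \;\le\; c_e \;\;\forall e, \qquad
        f_p \ge 0,

    where P_{ij} is the set of paths from i to j. A worst-case TM is one that minimises this θ over the admissible traffic matrices; the paper's framework generates a near-worst-case TM for a given topology.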

    Shortest Path versus Multi-Hub Routing in Networks with Uncertain Demand

    Full text link
    We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands, and to use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
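
    As a rough illustration of how the capped hose model interpolates between the two extremes (our notation; the paper's exact formulation may differ), a traffic matrix D is admissible if it respects both the hose marginals and the per-pair caps:

        \mathcal{D} \;=\; \Big\{ D \ge 0 \;:\; \sum_{j} D_{ij} \le b^{\mathrm{out}}_i, \;\; \sum_{i} D_{ij} \le b^{\mathrm{in}}_j, \;\; D_{ij} \le c_{ij} \;\;\forall i,j \Big\}.

    Taking every cap c_{ij} to infinity recovers the pure hose model, while making the marginals b non-binding and setting the caps to the peak demands recovers the fixed demand model.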

    Enabling Work-conserving Bandwidth Guarantees for Multi-tenant Datacenters via Dynamic Tenant-Queue Binding

    Full text link
    Today's cloud networks are shared among many tenants. Bandwidth guarantees and work conservation are two key properties to ensure predictable performance for tenant applications and high network utilization for providers. Despite significant efforts, very little prior work actually achieves both properties simultaneously, even though some claim to do so. In this paper, we present QShare, an in-network solution that achieves bandwidth guarantees and work conservation simultaneously. QShare leverages weighted fair queuing on commodity switches to slice network bandwidth for tenants, and solves the challenge of queue scarcity through balanced tenant placement and dynamic tenant-queue binding. QShare is readily implementable with existing switching chips. We have implemented a QShare prototype and evaluated it via both testbed experiments and simulations. Our results show that QShare ensures bandwidth guarantees while driving network utilization to over 91% even under unpredictable traffic demands. Comment: The initial work is published in IEEE INFOCOM 201
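
    The abstract does not spell out the binding algorithm, so the following is only a hypothetical sketch of dynamic tenant-queue binding on a port with a small number of weighted-fair queues; the function and field names are assumptions, not QShare's actual code.

        # Hypothetical sketch: bind the tenants that most need isolation to dedicated
        # WFQ queues, and fold the rest into one shared queue. Not QShare's real logic.
        def bind_tenants_to_queues(tenants, num_queues):
            """tenants: {tenant_id: (guarantee_mbps, measured_demand_mbps)}.
            Returns (bindings, weights): tenant -> queue index, queue index -> WFQ weight."""
            # Tenants whose measured demand most exceeds their guarantee benefit most
            # from a dedicated queue, so rank them first.
            ranked = sorted(tenants, key=lambda t: tenants[t][1] - tenants[t][0], reverse=True)
            dedicated, shared_queue = ranked[:num_queues - 1], num_queues - 1

            bindings, weights = {}, {q: 0.0 for q in range(num_queues)}
            for q, t in enumerate(dedicated):
                bindings[t] = q
                weights[q] = tenants[t][0]              # weight proportional to the guarantee
            for t in ranked[num_queues - 1:]:
                bindings[t] = shared_queue
                weights[shared_queue] += tenants[t][0]  # shared queue carries the summed guarantees
            return bindings, weights

    Because weighted fair queuing is work-conserving, capacity left idle by one queue is redistributed to the others in proportion to their weights, which is how guarantees and high utilization can coexist; rebinding periodically as demands change is the "dynamic" part.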

    Auto-bandwidth control in dynamically reconfigured hybrid-SDN MPLS networks

    Get PDF
    The proposition of this work is based on the steady evolution of bandwidth-demanding technology, which currently, and more so in future, requires operators to use expensive infrastructure smartly to maximise its use in a very competitive environment. In this thesis, a traffic engineering control loop is proposed that dynamically adjusts the bandwidth and routes of Multi-Protocol Label Switching (MPLS) tunnels in response to changes in traffic demand. Available bandwidth is shifted to where the demand is, and where the demand requirement has dropped, unused allocated bandwidth is returned to the network. An MPLS network enhanced with Software-Defined Networking (SDN) features is implemented. The technology, known as hybrid SDN, combines the programmability features of SDN with the robust MPLS label switched path features, along with the traffic engineering enhancements introduced by routing protocols such as Border Gateway Protocol-Traffic Engineering (BGP-TE) and Open Shortest Path First-Traffic Engineering (OSPF-TE). The implemented mixed-integer linear programming formulation, using minimisation of maximum link utilisation and minimum link cost as objective functions, combined with the programmability of the hybrid SDN network, allows the network to track source-to-destination demand fluctuations. A key driver of this research is the programmability of the MPLS network, enhanced by the contributions of SDN controller technology. The centralised view of the network provides the network state information needed to drive the mathematical modelling of the network. The path computation element further enables control of the label switched paths' bandwidths, which are adjusted based on current demand and the optimisation method used. The hose model is used to specify a range of traffic conditions. The most important benefit of the hose model is the flexibility allowed in how the traffic matrix can change, provided the aggregate traffic demand does not exceed the hose maximum bandwidth specification. To this end, reserved hose bandwidth can be released to the core network to service demands from other sites.
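
    A minimal sketch of the kind of auto-bandwidth adjustment step such a control loop performs, with hypothetical thresholds and parameter names standing in for the thesis's actual logic:

        # Illustrative auto-bandwidth step for one MPLS LSP; the thresholds, headroom
        # factor and function name are assumptions, not the thesis's implementation.
        def adjust_lsp_bandwidth(reserved_mbps, measured_peak_mbps,
                                 increase_threshold=0.9, decrease_threshold=0.5,
                                 headroom=1.2, floor_mbps=1.0):
            """Return a new reservation for the LSP after one measurement window."""
            if measured_peak_mbps > increase_threshold * reserved_mbps:
                # Demand is close to (or above) the reservation: grow it, with headroom.
                return measured_peak_mbps * headroom
            if measured_peak_mbps < decrease_threshold * reserved_mbps:
                # Demand has dropped: shrink the reservation and return capacity to the network.
                return max(measured_peak_mbps * headroom, floor_mbps)
            return reserved_mbps  # inside the dead band: leave the reservation unchanged

    In the thesis, resizing of this kind is coupled with re-running the optimisation (minimising maximum link utilisation or link cost), so that adjusted tunnels can also be rerouted.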

    Energy Efficient Network Resource Allocation Scheme for Hose Model

    Get PDF
    Given the exponential growth in telecommunication networks, more and more attention is being paid to their energy consumption. However, the often over-provisioned wired network is still overlooked. In core networks, pairs of routers are typically connected by multiple physical cables that form one logical bundled link participating in the intra-domain routing protocol. To reduce the energy consumption of hose-model networks with bundled cables, we propose a scheme that deactivates the maximum possible number of cables, and the associated equipment. A similar approach has been presented for the pipe model, where the exact traffic matrix is assumed to be known. Due to traffic uncertainty, however, it is difficult for operators to have exact knowledge of the traffic matrix. This traffic uncertainty can be avoided by using the hose model, which specifies only the upper bounds of the egress/ingress traffic from/to a node. We introduce a mixed integer linear problem formulation that yields the optimal solution, and a more practical, near-optimal heuristic algorithm for large networks. Our performance evaluation results show that the scheme offers up to 50% power reduction compared to shortest path routing.
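
    In spirit, the optimisation keeps as few cables active as possible while every traffic matrix admitted by the hose model remains routable. A compressed sketch of such a formulation, in our notation and with the routing variables omitted, is:

        \min \;\sum_{e} x_e \qquad \text{s.t.} \qquad
        \mathrm{load}_e(D) \;\le\; c\, x_e \;\;\forall e,\;\forall D \in \mathcal{D}_{\mathrm{hose}}, \qquad
        x_e \in \{0,1,\dots,n_e\},

    where x_e is the number of cables kept active in bundle e, c is the per-cable capacity, n_e is the bundle size, and \mathcal{D}_{\mathrm{hose}} is the hose uncertainty set. The "for all D" constraint is what makes the design robust; in formulations of this kind it is typically dualised or handled by separation, and the integer cable counts are what make the problem a mixed integer linear program.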

    Providing guaranteed QoS in the hose-modeled VPN

    Get PDF
    With the development of the Internet, Internet service providers (ISPs) are required to offer revenue-generating and value-added services instead of only providing bandwidth and access services. Virtual Private Network (VPN) is one of the most important value-added services for ISPs. The classical VPN service is provided by implementing layer 2 technologies, either Frame Relay (FR) or Asynchronous Transfer Mode (ATM). With FR or ATM, virtual circuits are created before data delivery. Since bandwidth and buffers are reserved, the QoS requirements can be naturally guaranteed. In the past few years, layer 3 VPN technologies have been widely deployed due to their desirable performance in terms of flexibility, scalability and simplicity. Layer 3 VPNs are built upon IP tunnels, e.g., by using PPTP, L2TP or IPSec. Since IP is best-effort in nature, QoS requirements cannot be guaranteed in layer 3 VPNs. In fact, layer 3 VPN service can only provide secure connectivity, i.e., protecting and authenticating IP packets between gateways or hosts in a VPN. Without doubt, with more voice, audio and video applications being used on the Internet, the provision of QoS is one of the most important parts of the emerging services provided by ISPs. An intriguing question is: is it possible to obtain the best of both layer 2 and layer 3 VPNs? Is it possible to provide guaranteed or predictable QoS, as in layer 2 VPNs, while maintaining the flexibility and simplicity of layer 3 VPNs? This question is the starting point of this study. The recently proposed hose model for VPN possesses desirable properties in terms of flexibility, scalability and multiplexing gain. However, the classic fair bandwidth allocation schemes and weighted fair queuing schemes raise the issue of low overall utilization in this model. A new fluid model for provider-provisioned virtual private networks (PPVPN) is proposed in this dissertation. Based on the proposed model, an idealized fluid bandwidth allocation scheme is developed. This scheme is proven, analytically, to have the following properties: 1) it maximizes the overall throughput of the VPN without compromising fairness; 2) it provides a mechanism that enables VPN customers to allocate bandwidth according to their requirements by assigning different weights to different hose flows, and thus obtain predictable QoS performance; and 3) it improves the overall throughput of the ISPs' network. To approximate the idealized fluid scheme in the real world, the 2-dimensional deficit round robin (2-D DRR and 2-D DRR+) schemes are proposed. The integration of the proposed schemes with best-effort traffic within the framework of virtual-router-based VPNs is also investigated. The 2-D DRR and 2-D DRR+ schemes can be extended to multi-dimensional schemes for applications that require a hierarchical scheduling architecture. To enhance scalability, a more scalable non-per-flow-based scheme for output-queued switches is developed as well, and the integration of this scheme within the framework of MPLS VPNs and its application to multicast traffic is discussed. The performance and properties of these schemes are analyzed.
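
    The 2-D DRR idea can be pictured as deficit round robin applied along two dimensions: an outer round across VPNs and an inner round across the hose flows of the selected VPN, each with a quantum proportional to its weight. The sketch below only illustrates that structure; the dissertation's actual schemes, including 2-D DRR+, differ in detail.

        from collections import deque

        # Illustrative two-level deficit round robin. Each VPN and each hose flow has a
        # quantum (bytes per round) proportional to its weight and a running deficit.
        def two_d_drr(vpns, send):
            """vpns: {vpn: {"quantum": int, "deficit": int,
                            "flows": {flow: {"quantum": int, "deficit": int,
                                             "queue": deque of packet lengths in bytes}}}}
            send(vpn, flow, pkt_len) transmits one packet."""
            while any(f["queue"] for v in vpns.values() for f in v["flows"].values()):
                for vpn, vstate in vpns.items():
                    if not any(f["queue"] for f in vstate["flows"].values()):
                        continue
                    vstate["deficit"] += vstate["quantum"]      # outer (per-VPN) round
                    for flow, fstate in vstate["flows"].items():
                        if not fstate["queue"]:
                            fstate["deficit"] = 0               # idle flows do not bank credit
                            continue
                        fstate["deficit"] += fstate["quantum"]  # inner (per-flow) round
                        while fstate["queue"] and fstate["queue"][0] <= min(
                                fstate["deficit"], vstate["deficit"]):
                            pkt = fstate["queue"].popleft()
                            fstate["deficit"] -= pkt
                            vstate["deficit"] -= pkt
                            send(vpn, flow, pkt)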

    Evaluating the Performance of the Modified Dynamic Hose Model for Virtual Private Networks

    Get PDF
    This paper models a Modified Dynamic Hose Algorithm for data traffic management. The Virtual Private Network (VPN) under study was characterized and the data for transmission was modeled. An algorithm for the Modified Dynamic Hose Model to handle varying traffic rates was then developed and simulated using MATLAB. The results obtained from network characterization show that variation in window size and packet size affects throughput in a VPN: an increase in window size from 50 kb to 100 kb improved throughput from 15 for the Conventional Hose Model to 28.3 for the Modified Dynamic Hose Model, a gain of 13.3, which translates to a 47% improvement. Similarly, an increase in window size from 10 kb to 50 kb resulted in a maximum throughput of 3.01 for the Conventional Model against 15 for the Modified Dynamic Hose Model, an additional 11.99, or an improvement of 79.93%. The Modified Dynamic Hose Model algorithm, unlike the Conventional Hose Model, determines whether to drop a particular packet or to queue it, thereby improving bandwidth utilization, minimizing latency (delays) and increasing Virtual Private Network throughput.
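
    The paper's drop-or-queue rule is not reproduced here, so the following is only a hypothetical sketch of a decision of that general kind, with all names and thresholds assumed: a packet is queued while the buffer and the hose's bandwidth cap allow it, and dropped otherwise.

        # Hypothetical drop-or-queue decision; the 0.8 early-drop threshold and all
        # parameter names are illustrative, not the paper's Modified Dynamic Hose rule.
        def admit_packet(pkt_bytes, queue_bytes, buffer_limit_bytes,
                         arrival_rate_bps, hose_cap_bps):
            """Return True to enqueue the packet, False to drop it."""
            if queue_bytes + pkt_bytes > buffer_limit_bytes:
                return False   # buffer would overflow: drop
            if arrival_rate_bps > hose_cap_bps and queue_bytes > 0.8 * buffer_limit_bytes:
                return False   # sustained overload with a nearly full queue: drop early
            return True        # otherwise queue the packet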