
    Brief Announcement: Minimizing Congestion in Hybrid Demand-Aware Network Topologies

    Emerging reconfigurable optical communication technologies enable demand-aware networks: networks whose static topology can be enhanced with demand-aware links optimized towards the traffic pattern the network serves. This paper studies the algorithmic problem of how to jointly optimize the topology and the routing in such demand-aware networks, to minimize congestion. We investigate this problem along two dimensions: (1) whether flows are splittable or unsplittable, and (2) whether routing on the hybrid topology is segregated or not, i.e., whether flows have to use exclusively either the static network or the demand-aware connections. For splittable and segregated routing, we show that the problem is 2-approximable in general, but APX-hard even for uniform demands induced by a bipartite demand graph. For unsplittable and segregated routing, we show an upper bound of O(log m / log log m) and a lower bound of Ω(log m / log log m) for polynomial-time approximation algorithms, where m is the number of static links. Under splittable (resp., unsplittable) and non-segregated routing, even for demands of a single source (resp., destination), the problem cannot be approximated better than Ω(c_{max}/c_{min}) unless P = NP, where c_{max} (resp., c_{min}) denotes the maximum (resp., minimum) link capacity. The problem remains NP-hard for uniform capacities, but can be solved efficiently for a single commodity and uniform capacities.
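    As a toy illustration (not from the paper), the sketch below shows how congestion, the maximum load-to-capacity ratio, could be evaluated for one candidate hybrid topology under an unsplittable, segregated routing; all link names, capacities, and demands are invented for the example.

```python
# Hypothetical sketch: evaluate congestion (max load / capacity) for a candidate
# hybrid topology and an unsplittable, segregated routing. Values are illustrative.

def congestion(capacities, routing, demands):
    """capacities: {link: capacity}; routing: {(src, dst): [links on path]};
    demands: {(src, dst): demand volume}. Returns the maximum link utilization."""
    load = {link: 0.0 for link in capacities}
    for pair, path in routing.items():
        for link in path:
            load[link] += demands[pair]
    return max(load[link] / capacities[link] for link in capacities)

# Static links ("s:*") and demand-aware links ("d:*") with capacities.
capacities = {"s:a-b": 10.0, "s:b-c": 10.0, "d:a-c": 5.0}

# Segregated routing: each flow uses only static links or only demand-aware links.
routing = {("a", "c"): ["d:a-c"], ("a", "b"): ["s:a-b"], ("b", "c"): ["s:b-c"]}
demands = {("a", "c"): 4.0, ("a", "b"): 6.0, ("b", "c"): 3.0}

print(congestion(capacities, routing, demands))  # 0.8, on the demand-aware link
```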

    Space Shuffle: A Scalable, Flexible, and High-Bandwidth Data Center Network

    Data center applications require the network to be scalable and bandwidth-rich. Current data center network architectures often use rigid topologies to increase network bandwidth. A major limitation is that they can hardly support incremental network growth. Recent work proposes to use random interconnects to provide growth flexibility. However, routing on a random topology suffers from control- and data-plane scalability problems, because routing decisions require global information and forwarding state cannot be aggregated. In this paper we design a novel flexible data center network architecture, Space Shuffle (S2), which applies greedy routing on multiple ring spaces to achieve high throughput, scalability, and flexibility. The proposed greedy routing protocol of S2 effectively exploits the path diversity of densely connected topologies and enables key-based routing. Extensive experimental studies show that S2 provides high bisection bandwidth and throughput, near-optimal routing path lengths, extremely small forwarding state, fairness among concurrent data flows, and resiliency to network failures.
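    The following is a hedged sketch of the general idea of greedy routing over multiple ring spaces; it is not S2's actual protocol, and the switch coordinates and neighbor set are invented for illustration.

```python
# Illustrative sketch: greedy next-hop selection on multiple ring spaces.
# Each switch holds one coordinate in [0, 1) per ring space; a packet is
# forwarded to the neighbor whose best per-space circular distance to the
# destination is smallest. Coordinates below are made up.

def ring_dist(a, b):
    """Circular distance between two coordinates on a unit ring."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def greedy_next_hop(neighbors, dst_coords):
    """neighbors: {switch_id: [coordinate per ring space]};
    dst_coords: the destination's coordinates in the same spaces."""
    def best_dist(coords):
        return min(ring_dist(c, d) for c, d in zip(coords, dst_coords))
    return min(neighbors, key=lambda n: best_dist(neighbors[n]))

neighbors = {"sw1": [0.12, 0.80], "sw2": [0.55, 0.33], "sw3": [0.90, 0.47]}
dst_coords = [0.58, 0.05]
print(greedy_next_hop(neighbors, dst_coords))  # "sw2": closest in the first space
```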

    Measuring and Understanding Throughput of Network Topologies

    High throughput is of particular interest in data center and HPC networks. Although myriad network topologies have been proposed, a broad head-to-head comparison across topologies and across traffic patterns is absent, and the right way to compare worst-case throughput performance is a subtle problem. In this paper, we develop a framework to benchmark the throughput of network topologies, using a two-pronged approach. First, we study performance on a variety of synthetic and experimentally measured traffic matrices (TMs). Second, we show how to measure worst-case throughput by generating a near-worst-case TM for any given topology. We apply the framework to study a wide range of network topologies on these TMs, revealing insights into how topology performance scales, how robust it is across TMs, and the effect of scattered workload placement. Our evaluation code is freely available.
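    As a rough illustration of throughput benchmarking (not the paper's framework, which solves a multi-commodity flow problem), the sketch below scales a traffic matrix uniformly until the most loaded link saturates under fixed shortest-path routing; it assumes the networkx library is available.

```python
# Simplified proxy: the largest uniform scale factor t such that t * TM fits
# under fixed single-path shortest-path routing with unit link capacities.
from itertools import permutations
import networkx as nx  # assumed available

def throughput_proxy(G, tm, capacity=1.0):
    load = {tuple(sorted(e)): 0.0 for e in G.edges()}
    for (s, d), demand in tm.items():
        path = nx.shortest_path(G, s, d)
        for u, v in zip(path, path[1:]):
            load[tuple(sorted((u, v)))] += demand
    return capacity / max(load.values())

G = nx.random_regular_graph(3, 10, seed=1)                  # toy topology
tm = {(s, d): 1.0 for s, d in permutations(G.nodes(), 2)}   # all-to-all TM
print(throughput_proxy(G, tm))
```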

    The Effect of Network Topology on Credit Network Throughput

    Credit networks rely on decentralized, pairwise trust relationships (channels) to exchange money or goods. Credit networks arise naturally in many financial systems, including the recent construct of payment channel networks in blockchain systems. An important performance metric for these networks is their transaction throughput. However, predicting the throughput of a credit network is nontrivial. Unlike traditional communication channels, credit channels can become imbalanced; they are unable to support more transactions in a given direction once the credit limit has been reached. This potential for imbalance creates a complex dependency between a network's throughput and its topology, path choices, and the credit balances (state) on every channel. Even worse, certain combinations of these factors can lead the credit network to deadlocked states where no transactions can make progress. In this paper, we study the relationship between the throughput of a credit network and its topology and credit state. We show that the presence of deadlocks completely characterizes a network's throughput sensitivity to different credit states. Although we show that identifying deadlocks in an arbitrary topology is NP-hard, we propose a peeling algorithm, inspired by decoding algorithms for erasure codes, that upper-bounds the severity of the deadlock. We use the peeling algorithm as a tool to compare the performance of different topologies as well as to aid in the synthesis of topologies robust to deadlocks.
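    The toy sketch below (not from the paper) illustrates why credit channels become imbalanced: payments in one direction consume that direction's credit until the channel can no longer support them, while freeing credit in the opposite direction.

```python
# Minimal, illustrative model of a single credit channel between parties a and b.

class CreditChannel:
    def __init__(self, credit_ab, credit_ba):
        # Credit currently available in each direction.
        self.credit = {"a->b": credit_ab, "b->a": credit_ba}

    def pay(self, direction, amount):
        """Paying in one direction consumes its credit and frees the other side."""
        if self.credit[direction] < amount:
            return False  # channel is imbalanced for this direction
        other = "b->a" if direction == "a->b" else "a->b"
        self.credit[direction] -= amount
        self.credit[other] += amount
        return True

ch = CreditChannel(credit_ab=3, credit_ba=3)
# One-way traffic stalls once the a->b credit is exhausted.
print([ch.pay("a->b", 2) for _ in range(3)])  # [True, False, False]
```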

    Designing data center networks for high throughput

    Data centers with tens of thousands of servers now support popular Internet services, scientific research, as well as industrial applications. The network is the foundation of such facilities, giving the large server pool the ability to work together on these applications. The network needs to provide high throughput between servers to ensure that computations are not slowed down by network bottlenecks, with servers waiting on data from other servers. This work addresses two broad, related questions about high-throughput data center network design: (a) how do we measure and benchmark various network designs for throughput? and (b) how do we design such networks for near-optimal throughput? The problem of designing high-throughput networks has received a lot of attention, with multiple interesting architectures being proposed every year. However, there is no clarity on how one should benchmark these networks and how they compare to each other. In fact, this work shows that commonly used measurement approaches, in particular cut-metrics like bisection bandwidth, do not predict throughput accurately. In contrast, we directly evaluate the throughput of networks on both uniform and (heretofore unknown) nearly-worst-case traffic matrices, and include here a comparison of 10 networks using this approach. Further, prior work has not addressed a fundamental question: how far are we from throughput-optimal design? In this work, we propose the first upper bound on network throughput for any topology with identical switches. Although designing optimal topologies is infeasible, we demonstrate that random graphs achieve throughput surprisingly close to this bound -- within a few percent at the scale of a few thousand servers for uniform traffic. Our approach also addresses important practical concerns in the design of data center networks, such as incremental expansion and heterogeneous design: as more and varied equipment is added to a data center over the years in response to evolving needs, how do we best accommodate such equipment? Our networks can achieve the same incremental growth at 40% of the expense such growth would incur with past techniques for Clos networks. Further, our approach to designing heterogeneous topologies (i.e., where all the network switches are not identical) achieves 43% higher throughput than a comparable VL2 topology, a heterogeneous network already deployed in Microsoft's data centers. We acknowledge that the use of random graphs also poses challenges, particularly with regard to efficient routing and physical cabling. We thus present here high-efficiency routing and cabling schemes for such networks as well.
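    As a hedged illustration of the random-graph design point (not the thesis's methodology), the sketch below builds a random regular switch graph with networkx and reports its average shortest path length, one simple indicator of how well such topologies spread traffic; the degree and switch count are arbitrary.

```python
# Build a random regular switch-to-switch graph and inspect its path lengths.
import networkx as nx  # assumed available

degree, num_switches = 8, 64  # illustrative parameters
G = nx.random_regular_graph(degree, num_switches, seed=0)

# Short average path lengths leave more spare link capacity per unit of traffic.
print(nx.average_shortest_path_length(G))
```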

    Resource management for extreme scale high performance computing systems in the presence of failures

    High performance computing (HPC) systems, such as data centers and supercomputers, coordinate the execution of large-scale computation of applications over tens or hundreds of thousands of multicore processors. Unfortunately, as the size of HPC systems continues to grow towards exascale complexities, these systems experience an exponential growth in the number of failures. These failures reduce performance and increase energy use, reducing the efficiency and effectiveness of emerging extreme-scale HPC systems. Applications executing in parallel on individual multicore processors also suffer from decreased performance and increased energy use as a result of being forced to share resources; in particular, contention from multiple application threads sharing the last-level cache causes performance degradation. These challenges make it increasingly important to characterize and optimize the performance and behavior of applications that execute in these systems. To address these challenges, in this dissertation we propose a framework for intelligently characterizing and managing extreme-scale HPC system resources. We devise various techniques to mitigate the negative effects of failures and resource contention in HPC systems. In particular, we develop new HPC resource management techniques for intelligently utilizing system resources through (a) the optimal scheduling of applications to HPC nodes and (b) the optimal configuration of fault resilience protocols. These resource management techniques employ information obtained from historical analysis as well as theoretical and machine learning methods for predictions. We use these data to characterize system performance, energy use, and application behavior when operating under the uncertainty of performance degradation from both system failures and resource contention. We investigate how to better characterize and model the negative effects of system failures as well as application co-location on large-scale HPC computing systems. Our analysis of application and system behavior also investigates: the interrelated effects of network usage of applications and fault resilience protocols; checkpoint interval selection and its sensitivity to system parameters for various checkpoint-based fault resilience protocols; and performance comparisons of various promising strategies for fault resilience in exascale-sized systems.
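    As a small, hedged example related to checkpoint interval selection (not the dissertation's own model), the sketch below computes Young's first-order approximation of the optimal checkpoint interval from the per-checkpoint cost and the system's mean time between failures (MTBF).

```python
# Young's approximation: optimal interval ~ sqrt(2 * checkpoint_cost * MTBF).
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Return the approximately optimal checkpoint interval in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g. a 60-second checkpoint on a system with a 24-hour MTBF
print(young_interval(60.0, 24 * 3600.0) / 60.0, "minutes")  # roughly 54 minutes
```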