
    SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research

    Network experiments are essential to network-related scientific research (e.g., congestion control, QoS, network topology design, and traffic engineering). However, (re)configuring various topologies on a real testbed is expensive, time-consuming, and error-prone. In this paper, we propose the Software Defined Topology Testbed (SDT), a method for constructing a user-defined network topology using a few commodity switches. SDT is low-cost, deployment-friendly, and reconfigurable: it can run multiple sets of experiments under different topologies simply by loading different topology configuration files into the controller we designed. We implement a prototype of SDT and conduct numerous experiments. Evaluations show that SDT introduces at most 2% extra multi-hop latency overhead compared to full testbeds and is far more efficient than software simulators (reducing evaluation time by up to 2899x). SDT is also more cost-effective and scalable than existing Topology Projection (TP) solutions. Further experiments show that SDT can support a wide range of network research at low cost, on topics including but not limited to topology design, congestion control, and traffic engineering.
    Comment: This paper will be published in IEEE CLUSTER 2023. Preview version only.
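    The topology-as-configuration idea lends itself to a short illustration. The Python sketch below is a hypothetical example, not SDT's actual schema or controller API: it reads a user-defined topology description and derives the physical port pairs a controller could wire together on a commodity switch.

    # Hypothetical sketch of SDT-style topology projection. The config
    # format, field names, and sequential port-allocation policy are
    # assumptions for illustration, not SDT's real implementation.
    import json

    def load_topology(path):
        # A topology file lists emulated nodes and the links between them.
        with open(path) as f:
            return json.load(f)

    def build_port_map(topology):
        # Map each emulated link onto a pair of physical switch ports.
        port_map = {}
        for i, link in enumerate(topology["links"]):
            a, b = link["endpoints"]
            port_map[(a, b)] = (2 * i, 2 * i + 1)
        return port_map

    if __name__ == "__main__":
        topo = {"links": [{"endpoints": ["h1", "h2"]},
                          {"endpoints": ["h2", "h3"]}]}
        print(build_port_map(topo))
        # {('h1', 'h2'): (0, 1), ('h2', 'h3'): (2, 3)}

    Swapping in a different configuration file would then re-project a new topology without physical re-cabling, which is the property the paper's reconfigurability claims rest on.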

    What comes after optical-bypass network? A study on optical-computing-enabled network

    A new architectural paradigm, named optical-computing-enabled network, is proposed as a potential evolution of the currently used optical-bypass framework. The main idea is to leverage optical computing capabilities performed on transitional lightpaths at intermediate nodes, a proposal that reverses the conventional wisdom of optical-bypass networking, namely, keeping in-transit lightpaths separated to avoid unwanted interference. In an optical-computing-enabled network, the optical nodes are therefore upgraded from the conventional functions of add-drop and cross-connect to include optical computing/processing capabilities. This is enabled by exploiting the superposition of in-transit lightpaths for computing purposes to achieve greater capacity efficiency. While traditional network design and planning algorithms are well-developed for the optical-bypass framework, in which routing and resource allocation are dedicated to each optical channel (lightpath), more complicated problems arise in the optical-computing-enabled architecture as a consequence of the intricate interaction between optical channels, resulting in the establishment of so-called integrated/computed lightpaths. This necessitates a different framework of network design and planning to maximize the impact of optical computing opportunities. To highlight this critical point, this paper investigates a detailed case study that exploits the optical aggregation operation to re-design the optical core network. Numerical results obtained from extensive simulations on the COST239 network are presented to quantify the efficacy of the optical-computing-enabled approach versus the conventional optical-bypass one.
    Comment: 17 pages, 3 figures, 4 tables; the author's version accepted to Optical Fiber Technology Journal 202
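    The capacity-efficiency argument for optical aggregation can be made concrete with simple slot accounting. The sketch below is our own toy model with illustrative numbers (50 Gb/s spectrum slots, two 20 Gb/s lightpaths sharing three hops), not the paper's formulation:

    # Toy spectrum accounting: aggregating two in-transit lightpaths into
    # one integrated lightpath at an intermediate node versus carrying
    # them separately under optical-bypass. All figures are illustrative.
    def slots_needed(rate_gbps, slot_capacity_gbps=50):
        # Ceiling division: spectrum slots required for a given rate.
        return -(-rate_gbps // slot_capacity_gbps)

    def bypass_slots(rates, shared_hops):
        # Optical-bypass: each lightpath occupies its own slots per hop.
        return sum(slots_needed(r) for r in rates) * shared_hops

    def aggregated_slots(rates, shared_hops):
        # Optical-computing-enabled: one integrated lightpath carries
        # the aggregate rate over the shared segment.
        return slots_needed(sum(rates)) * shared_hops

    rates = [20, 20]                   # two 20 Gb/s lightpaths, 3 shared hops
    print(bypass_slots(rates, 3))      # 6 slots
    print(aggregated_slots(rates, 3))  # 3 slots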

    Scheduling and reconfiguration of interconnection network switches

    Interconnection networks are important parts of modern computing systems, facilitating communication between a system's components. Switches connecting various nodes of an interconnection network serve to move data in the network. The switch's delay and throughput impact the overall performance of the network and thus the system. Scheduling efficient movement of data through a switch and configuring the switch to realize a schedule are the main themes of this research. We consider various interconnection network switches including (i) crossbar-based switches, (ii) circuit-switched tree switches, and (iii) fat-tree switches. For crossbar-based input-queued switches, a recent result established that logarithmic packet delay is possible. However, this result assumes that packet transmission time through the switch is no less than schedule-generation time. We prove that without this assumption (as is the case in practice) packet delay becomes linear. We also report results of simulations that bear out our result for practical switch sizes and indicate that a fast scheduling algorithm reduces not only packet delay but also buffer size. We also propose a fast mesh-of-trees based distributed switch scheduling (maximal-matching based) algorithm that has polylog complexity. A circuit-switched tree (CST) can serve as an interconnect structure for various computing architectures and models such as the self-reconfigurable gate array and the reconfigurable mesh. A CST is a tree structure with source and destination processing elements as leaves and switches as internal nodes. We design several scheduling and configuration algorithms that distributedly partition a given set of communications into non-conflicting subsets and then establish switch settings and paths on the CST corresponding to the communications. A fat-tree is another widely used interconnection structure in many of today's high-performance clusters. We embed a reconfigurable mesh inside a fat-tree switch to generate efficient connections. We present an R-Mesh-based algorithm for a fat-tree switch that creates buses connecting input and output ports corresponding to various communications using that switch.
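    As a concrete reference point for the scheduling problem, the sketch below shows a sequential greedy maximal matching over the pending requests of an input-queued crossbar. It is a minimal single-threaded stand-in for intuition only; the dissertation's mesh-of-trees algorithm computes such matchings distributedly in polylog time.

    # Greedy maximal matching for an N x N input-queued crossbar: each
    # matched (input, output) pair transfers one packet this time slot,
    # and no input or output appears in more than one pair.
    def greedy_maximal_matching(requests):
        used_in, used_out, match = set(), set(), []
        for i, o in sorted(requests):
            if i not in used_in and o not in used_out:
                match.append((i, o))
                used_in.add(i)
                used_out.add(o)
        return match

    # Pending requests of a 3x3 switch: (input port, output port).
    requests = {(0, 1), (0, 2), (1, 1), (2, 0), (2, 2)}
    print(greedy_maximal_matching(requests))
    # [(0, 1), (2, 0), (2, 2)] -- maximal: no unmatched request fits.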

    Airborne Directional Networking: Topology Control Protocol Design

    This research identifies and evaluates the impact of several architectural design choices for airborne networking in contested environments, with a focus on autonomous topology control. Using simulation, we derive classical performance metrics to evaluate topology reconfiguration effectiveness for different point-to-point communication architectures, for the purpose of qualifying protocol design elements. Our attention is focused on the design choices that have the greatest impact on reliability, scalability, and performance.

    Efficient Intra-Rack Resource Disaggregation for HPC Using Co-Packaged DWDM Photonics

    The diversity of workload requirements and increasing hardware heterogeneity in emerging high performance computing (HPC) systems motivate resource disaggregation. Resource disaggregation allows compute and memory resources to be allocated individually as required to each workload. However, it is unclear how to efficiently realize this capability and cost-effectively meet the stringent bandwidth and latency requirements of HPC applications. To that end, we describe how modern photonics can be co-designed with modern HPC racks to implement flexible intra-rack resource disaggregation that fully meets the bit error rate (BER) and high escape bandwidth requirements of all chip types in modern HPC racks. Our photonic-based disaggregated rack provides an average application speedup of 11% (46% maximum) across 25 CPU benchmarks and 61% across 24 GPU benchmarks compared to a similar system that instead uses modern electronic switches for disaggregation. Using observed resource usage from a production system, we estimate that an iso-performance intra-rack disaggregated HPC system using photonics would require 4x fewer memory modules and 2x fewer NICs than a non-disaggregated baseline.
    Comment: 15 pages, 12 figures, 4 tables. Published in IEEE Cluster 202
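    The intuition behind the "4x fewer memory modules" estimate is that a shared rack-level pool can be provisioned against aggregate demand instead of per-node peaks. The toy model below is our own illustration with invented demand figures, not the paper's methodology:

    # Toy provisioning model: modules needed with and without intra-rack
    # memory disaggregation. Demands and module size are illustrative.
    def modules_needed(demands_gb, module_gb=64, pooled=True):
        if pooled:
            # Disaggregated: one rack-wide pool sized for total demand.
            return -(-sum(demands_gb) // module_gb)
        # Non-disaggregated: each node rounds its own peak up to modules.
        return sum(-(-d // module_gb) for d in demands_gb)

    # Peak memory demand (GB) of 16 nodes with bursty, uneven workloads.
    demands = [10, 90, 20, 130, 15, 8, 70, 25,
               12, 40, 9, 110, 18, 30, 22, 60]
    print(modules_needed(demands, pooled=False))  # 21 modules
    print(modules_needed(demands, pooled=True))   # 11 modules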

    BlueDBM: An Appliance for Big Data Analytics

    Complex data queries, because of their need for random accesses, have proven to be slow unless all the data can be accommodated in DRAM. There are many domains, such as genomics, geological data, and daily Twitter feeds, where the datasets of interest are 5 TB to 20 TB. For such a dataset, one would need a cluster with 100 servers, each with 128 GB to 256 GB of DRAM, to accommodate all the data in DRAM. On the other hand, such datasets could be stored easily in the flash memory of a rack-sized cluster. Flash storage has much better random access performance than hard disks, which makes it desirable for analytics workloads. In this paper we present BlueDBM, a new system architecture which has flash-based storage with in-store processing capability and a low-latency, high-throughput inter-controller network. We show that BlueDBM outperforms a flash-based system without these features by a factor of 10 for some important applications. While the performance of a RAM-cloud system falls sharply if even 5% to 10% of the references are to secondary storage, this sharp performance degradation is not an issue in BlueDBM. BlueDBM presents an attractive point in the cost-performance trade-off for Big Data analytics.
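    The sharp-degradation claim follows from simple expected-latency arithmetic. The device latencies in the sketch below are rough, commonly cited figures chosen by us for illustration, not measurements from the paper:

    # Expected access latency when a fraction p of references miss DRAM
    # and fall through to secondary storage. Latencies are assumptions.
    def effective_latency_ns(p_miss, fast_ns, slow_ns):
        return (1 - p_miss) * fast_ns + p_miss * slow_ns

    DRAM_NS, FLASH_NS, DISK_NS = 100, 100_000, 10_000_000

    for p in (0.0, 0.05, 0.10):
        disk_backed = effective_latency_ns(p, DRAM_NS, DISK_NS)
        flash_backed = effective_latency_ns(p, DRAM_NS, FLASH_NS)
        print(f"miss={p:.0%}: disk {disk_backed:,.0f} ns, "
              f"flash {flash_backed:,.0f} ns")

    Under these assumptions, even a 5% miss rate to disk inflates average latency by roughly 5000x over DRAM, while a flash tier keeps the penalty near 50x, which is the regime a flash-with-in-store-processing design targets.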

    A Survey on Reconfigurable System-on-Chips

    Requirements for high performance and low power consumption are becoming ever more stringent in the design of modern embedded systems, especially for next-generation multi-mode multimedia and communication standards. Ultra-large-scale-integration reconfigurable System-on-Chips (SoCs) have been proposed to achieve not only better performance and lower energy consumption but also higher flexibility and versatility in comparison with conventional architectures. The unique characteristic of such systems is the integration of many types of heterogeneous reconfigurable processing fabrics based on a Network-on-Chip. This paper analyzes and emphasizes the key research trends of reconfigurable SoCs. First, the emerging hardware architecture of these SoCs is highlighted. Afterwards, the key issues in designing reconfigurable SoCs are discussed, with a focus on the challenges of designing reconfigurable hardware fabrics and reconfigurable Networks-on-Chip. Finally, some state-of-the-art reconfigurable SoCs are briefly discussed.

    A Survey on FPGA-Based Heterogeneous Clusters Architectures

    In recent years, the most powerful supercomputers have already reached megawatt power consumption levels, an important issue that challenges sustainability and shows the impossibility of maintaining this trend. To date, the prevalent approach to supercomputing has been dominated by CPUs and GPUs. Given their fixed architectures with generic instruction sets, they benefit from a wealth of tools and mature workflows, which has led to mass adoption and further growth. However, reconfigurable hardware such as FPGAs has repeatedly proven that it offers substantial advantages over this supercomputing approach in terms of performance and power consumption. In this survey, we review the most relevant works that have advanced the field of heterogeneous supercomputing using FPGAs, focusing on their architectural characteristics. Each work is analyzed in three main parts: network, hardware, and software tools. All implementations face challenges that involve all three parts, and these dependencies result in compromises that designers must take into account. The advantages and limitations of each approach are discussed and compared in detail. The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines.