
    Pathways to servers of the future

    The Special Session on “Pathways to Servers of the Future” outlines a new research program set up at Technische Universität Dresden addressing the increasing energy demand of global internet usage and its resulting ecological impact. The program pursues a novel holistic approach that considers both hardware and software adaptivity to significantly increase energy efficiency while suitably addressing application demands. The session presents the research challenges and the industry perspective.

    dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter

    Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the available resources within the boundary of the mainboard, leading to spare resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dReDBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes next-generation, low-power, cross-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency and high-throughput software-defined optical network. To evaluate its novel approach, dReDBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dReDBox, contract #687632.
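
    The spare-resource-fragmentation limitation described above can be made concrete with a small sketch. The following Python snippet is purely illustrative (the class and function names are hypothetical, not from the dReDBox project): it contrasts placement bounded by a single mainboard with placement drawing from rack-wide pooled resources.

```python
# Hypothetical illustration of spare resource fragmentation: a VM that fits in
# the aggregate free capacity of a rack may still be unschedulable when
# allocation is bounded by individual mainboards.
from dataclasses import dataclass

@dataclass
class Server:
    free_cores: int
    free_mem_gb: int

def fits_monolithic(servers, cores, mem_gb):
    """Conventional placement: the VM must fit entirely on one mainboard."""
    return any(s.free_cores >= cores and s.free_mem_gb >= mem_gb for s in servers)

def fits_disaggregated(servers, cores, mem_gb):
    """Disaggregated placement: compute and memory come from rack-wide pools."""
    return (sum(s.free_cores for s in servers) >= cores and
            sum(s.free_mem_gb for s in servers) >= mem_gb)

rack = [Server(4, 32), Server(4, 32), Server(4, 32)]   # 12 cores / 96 GB spare in total
print(fits_monolithic(rack, 6, 48))     # False: no single mainboard can host the VM
print(fits_disaggregated(rack, 6, 48))  # True: the pooled spare capacity suffices
```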

    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing this bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for on-board networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management.

    Venice: Exploring Server Architectures for Effective Resource Sharing

    Consolidated server racks are quickly becoming the backbone of IT infrastructure for science, engineering, and business alike. These servers are still largely built and organized as they were when they were distributed, individual entities. Given that many fields increasingly rely on analytics of huge datasets, it makes sense to support flexible resource utilization across servers to improve cost-effectiveness and performance. We introduce Venice, a family of data-center server architectures that builds a strong communication substrate as a first-class resource for server chips. Venice provides a diverse set of resource-joining mechanisms that enables user programs to efficiently leverage non-local resources. To better understand the implications of design decisions about system support for resource sharing, we have constructed a hardware prototype that allows us to more accurately measure end-to-end performance of at-scale applications and to explore tradeoffs among performance, power, and resource-sharing transparency. We present results from our initial studies analyzing these tradeoffs when sharing memory, accelerators, or NICs. We find that it is particularly important to reduce or hide latency, that data-sharing access patterns should match the features of the communication channels employed, and that inter-channel collaboration can be exploited for better performance.
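
    The finding that latency must be reduced or hidden can be illustrated with a toy timing model. The sketch below is not from the Venice paper; every number and function name is an assumption chosen only to show why overlapping remote accesses with local computation shrinks the exposed latency.

```python
# Toy model: exposed time for a stream of remote accesses to a shared resource
# (memory, accelerator, or NIC) with and without latency hiding via overlap.
def exposed_time(n_accesses, latency_ns, compute_ns_per_access, overlap):
    if overlap:
        # The remote access runs concurrently with the local work that follows it,
        # so only the longer of the two is paid per access.
        per_access = max(latency_ns, compute_ns_per_access)
    else:
        per_access = latency_ns + compute_ns_per_access
    return n_accesses * per_access

print(exposed_time(1_000, latency_ns=800, compute_ns_per_access=600, overlap=False))  # 1,400,000 ns
print(exposed_time(1_000, latency_ns=800, compute_ns_per_access=600, overlap=True))   #   800,000 ns
```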

    An Interconnection Architecture for Seamless Inter and Intra-Chip Communication Using Wireless Links

    As semiconductor technologies continue to scale, more and more cores are being integrated on the same multicore chip. This increase in complexity poses the challenge of efficient data transfer between these cores. Several on-chip network architectures have been proposed to improve the design flexibility and communication efficiency of such multicore chips. However, in a larger system consisting of several multicore chips across a board or in a System-in-Package (SiP), the performance is limited by the communication among and within these chips. Such systems, most commonly found within computing modules in typical data center nodes or server racks, are in dire need of an efficient interconnection architecture. Conventional interchip communication using wireline links involves routing the data from the internal cores to the peripheral I/O ports, travelling over the interchip channels to the destination chip, and finally getting routed from the I/O to the internal cores there. This multihop communication increases latency and energy consumption while decreasing data bandwidth in a multichip system. Furthermore, the intrachip and interchip communication architectures are separately designed to maximize design flexibility. Jointly designing them could, however, improve the communication efficiency significantly and yield better solutions. Previous attempts at this include an all-photonic approach that provides a unified inter/intra-chip optical network, based on recent progress in nano-photonic technologies. Works on wireless inter-chip interconnects have successfully yielded better results than their wired counterparts, but their scope was limited to establishing a single wireless connection between two chips rather than a communication architecture for the system as a whole. In this thesis, the design of a seamless hybrid wired and wireless interconnection network for multichip systems in a package is proposed. The design utilizes on-chip wireless transceivers to communicate directly across package dimensions spanning up to tens of centimeters. It seamlessly binds the intrachip and interchip communication architectures and enables direct chip-to-chip communication between the internal cores. It is shown through cycle-accurate simulations that the proposed design increases the bandwidth and reduces the energy consumption when compared to state-of-the-art wireline I/O based multichip communication.
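
    The latency argument against the core-to-I/O-to-channel-to-I/O-to-core path can be sketched with a back-of-the-envelope comparison. All numbers and function names below are assumptions for illustration, not measurements from the thesis.

```python
# Rough comparison of conventional multihop wireline inter-chip communication
# versus a direct single-hop wireless link between wireless-enabled routers.
def wireline_latency(noc_hops_src, noc_hops_dst, hop_ns, io_serdes_ns, channel_ns):
    """Core -> NoC hops -> peripheral I/O -> inter-chip channel -> I/O -> NoC hops -> core."""
    return (noc_hops_src + noc_hops_dst) * hop_ns + 2 * io_serdes_ns + channel_ns

def wireless_latency(tx_rx_ns, air_ns):
    """Direct chip-to-chip wireless transfer, bypassing the peripheral I/O path."""
    return tx_rx_ns + air_ns

print(wireline_latency(noc_hops_src=4, noc_hops_dst=4, hop_ns=1.0,
                       io_serdes_ns=5.0, channel_ns=2.0))  # 20.0 ns with these assumed numbers
print(wireless_latency(tx_rx_ns=3.0, air_ns=0.5))           # 3.5 ns single-hop estimate
```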

    Characterizing opportunities for short reach optical interconnect adoption: a market survey and total cost of ownership model approach

    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, 2010. Includes bibliographical references (p. 134-139). Over the past decade, the demand for digital information has increased dramatically with the rising use of the Internet and various types of multimedia data: text, audio, graphics, video, and voice. As a consequence, the technologies that connect and transport data have become critically important. Available interconnect technologies are broadly organized into two categories: electrical and optical. Although many digital systems use electrical interconnects, optical interconnects are becoming an attractive alternative as electrical connection has become increasingly difficult in terms of cost and performance. However, the transition from electrical to optical interconnects across multiple markets could still be hampered by their higher cost relative to electrical interconnects in the mid-term. Thus, this work seeks to shed light on the following question: "What additional characteristics are useful to evaluate the attractiveness of optical interconnects in emerging markets?" This thesis seeks to explore and answer this question in three parts. The first part attempts to gauge the opportunities and barriers to optical interconnect adoption in emerging markets through an analysis of first-phase interviews with professionals working in the datacom, automobile, and consumer hand-held device industries. Initial review of the response set shows that of the five initial emerging markets for optical interconnect, datacom, specifically high-performance computing (HPC), has the greatest potential for increased optical interconnect adoption in the near future. To further explore the environment for optical interconnects in HPC, a second, more detailed questionnaire was distributed to a limited number of interviewees. In response to this interview, some respondents noted several metrics other than cost and performance, particularly power consumption, as being "very important" when deciding which technology to adopt. The second part of the thesis is primarily concerned with investigating further the influence that power and performance concerns have on optical interconnect adoption in HPC data centers. Specifically, this part of the thesis seeks to explore whether power concerns in data centers could lead to increased adoption of optical interconnects. To that end, a cost model of an HPC data center has been developed to identify the possible economic impacts that the adoption of optical interconnect technologies would have in a power-driven scenario. The third part of this thesis presents a set of policy recommendations based on the results from the data center cost model. by Johnathan Jake Lindsey III.
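
    A power-driven total-cost-of-ownership comparison of the kind the thesis describes typically weighs purchase cost against lifetime electricity cost. The minimal sketch below follows that standard structure only; every parameter name and number is an assumption for illustration, not data from the thesis's model.

```python
# Minimal TCO sketch: capital cost plus electricity cost over the deployment lifetime.
def interconnect_tco(unit_cost, n_links, power_w_per_link,
                     years=5, kwh_price=0.10, pue=1.5):
    """PUE scales IT power to account for cooling and power-delivery overhead."""
    capex = unit_cost * n_links
    energy_kwh = n_links * power_w_per_link / 1000.0 * 24 * 365 * years * pue
    return capex + energy_kwh * kwh_price

copper  = interconnect_tco(unit_cost=20.0, n_links=10_000, power_w_per_link=8.0)
optical = interconnect_tco(unit_cost=45.0, n_links=10_000, power_w_per_link=1.5)
print(f"copper : ${copper:,.0f}")
print(f"optical: ${optical:,.0f}")
# Whether lower link power offsets a higher purchase price depends entirely on
# these assumed numbers; that sensitivity is exactly what a TCO model explores.
```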

    Characterization and optimization of network traffic in cortical simulation

    Considering the great variety of obstacles that Exascale systems will face in the near future, particular attention is given in this thesis to the interconnect and the power consumption. The data movement challenge involves the whole hierarchical organization of components in HPC systems: registers, cache, memory, disks. Running scientific applications requires providing the most effective methods of data transport among the levels of this hierarchy. On current petaflop systems, memory access at all levels is the limiting factor in almost all applications. This drives the requirement for an interconnect achieving adequate rates of data transfer, or throughput, and reducing time delays, or latency, between the levels. Power consumption is identified as the largest hardware research challenge: the power cost to operate an Exascale system built with current technology would be above $2.5B per year. Research into alternative power-efficient computing devices is mandatory for the procurement of future HPC systems. In this thesis, a preliminary approach is offered to the critical process of co-design. Co-design is defined as the simultaneous design of both hardware and software to implement a desired function. This process both integrates all components of the Exascale initiative and illuminates the trade-offs that must be made within this complex undertaking.
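
    As a sanity check on the $2.5B-per-year figure, one can back out the sustained power draw it implies. The electricity price used below is an assumption chosen only for illustration.

```python
# What sustained power draw does an annual electricity bill above $2.5B imply?
annual_cost_usd = 2.5e9
kwh_price = 0.10          # assumed $/kWh
hours_per_year = 24 * 365

energy_kwh = annual_cost_usd / kwh_price          # ~25 billion kWh per year
avg_power_gw = energy_kwh / hours_per_year / 1e6  # kWh/h -> kW, then kW -> GW
print(f"{avg_power_gw:.1f} GW sustained")          # roughly 2.9 GW with these assumptions
```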