
    Multi-Granular Optical Cross-Connect: Design, Analysis, and Demonstration

    A fundamental issue in all-optical switching is offering efficient and cost-effective transport services for a wide range of bandwidth granularities. This paper presents multi-granular optical cross-connect (MG-OXC) architectures that combine slow (ms regime) and fast (ns regime) switch elements in order to support optical circuit switching (OCS), optical burst switching (OBS), and even optical packet switching (OPS). The MG-OXC architectures are designed to be cost-effective while offering the flexibility and reconfigurability needed to handle the dynamic requirements of different applications. All proposed MG-OXC designs are analyzed and compared in terms of dimensionality, flexibility/reconfigurability, and scalability. Furthermore, node-level simulations are conducted to evaluate the performance of MG-OXCs under different traffic regimes. Finally, the feasibility of the proposed architectures is demonstrated on an application-aware, multi-bit-rate (10 and 40 Gbps), end-to-end OBS testbed.
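    The architectures themselves are in the paper body, but the core dispatch idea is easy to sketch: long-lived circuits can tolerate ms-regime reconfiguration, while bursts and packets need the ns-regime plane. The following minimal Python sketch illustrates that principle; the class names, port counts, and dispatch rule are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of multi-granular dispatch, not the paper's architecture.
from dataclasses import dataclass

@dataclass
class Request:
    kind: str         # "OCS" (circuit), "OBS" (burst), or "OPS" (packet)
    duration_s: float # holding time; circuits are long-lived

class MultiGranularOXC:
    """Toy node with a slow (ms-regime) plane and a fast (ns-regime) plane."""

    def __init__(self, slow_ports: int, fast_ports: int):
        self.slow_free = slow_ports  # e.g. MEMS-based, ms switching time
        self.fast_free = fast_ports  # e.g. SOA-based, ns switching time

    def dispatch(self, req: Request) -> str:
        # Long-lived circuits tolerate ms reconfiguration; bursts and
        # packets need ns-regime switching to avoid guard-band overhead.
        if req.kind == "OCS" and self.slow_free > 0:
            self.slow_free -= 1
            return "slow plane"
        if req.kind in ("OBS", "OPS") and self.fast_free > 0:
            self.fast_free -= 1
            return "fast plane"
        return "blocked"

node = MultiGranularOXC(slow_ports=2, fast_ports=4)
print(node.dispatch(Request("OCS", 3600.0)))  # -> slow plane
print(node.dispatch(Request("OBS", 1e-3)))    # -> fast plane
```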

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to give readers a broad view of the options and factors to weigh when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
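    Traffic shaping is one of the techniques the survey covers; a classic mechanism in this space is the token bucket, which caps average rate while permitting bounded bursts. The sketch below is a generic illustration of that mechanism (not a scheme from the paper), with made-up rate and burst parameters.

```python
# Generic token-bucket traffic shaper; parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # token refill rate (bits/s)
        self.capacity = burst_bits  # maximum burst size (bits)
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False  # caller queues or drops the packet

shaper = TokenBucket(rate_bps=1e9, burst_bits=1.5e6)  # 1 Gbps, ~1.5 Mb burst
print(shaper.allow(12_000))  # a 1500-byte packet passes while tokens remain
```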

    A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.
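    The abstract names self-organizing mechanisms without detailing them; one familiar building block for a fabric whose nodes and links may fail is heartbeat-based failure detection. The sketch below is a generic illustration of that idea, not this project's design; the node names and timeout are assumptions.

```python
# Generic heartbeat failure detector; names and threshold are illustrative.
class HeartbeatMonitor:
    def __init__(self, timeout_s: float):
        self.timeout = timeout_s
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, node_id: str, now: float) -> None:
        self.last_seen[node_id] = now

    def suspected_failures(self, now: float) -> list[str]:
        # Any node silent for longer than the timeout is suspected failed,
        # letting the rest of the fabric route around it.
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

mon = HeartbeatMonitor(timeout_s=3.0)
mon.heartbeat("node-17", now=0.0)
mon.heartbeat("node-42", now=4.0)
print(mon.suspected_failures(now=5.0))  # -> ['node-17']
```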

    Future Energy Efficient Data Centers With Disaggregated Servers

    The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centers can no longer support. This paper examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm: the disaggregated server (DS) data center design. The DS design arranges data center resources in physical pools, such as processing, memory, and IO module pools, rather than packing each subset of such resources into a single server box. In this paper, we study energy efficient resource provisioning and virtual machine (VM) allocation in DS-based data centers compared to CS-based data centers. First, we present our new design for the photonic DS-based data center architecture, supplemented with a complete description of the architectural components. Second, we develop a mixed integer linear programming (MILP) model to optimize VM allocation for the DS-based data center, including the data center communication fabric power consumption. Our results indicate that, in DS data centers, the optimum allocation of pooled resources and their communication power yields up to 42% average savings in total power consumption when compared with the CS approach. Because of the MILP model's high computational complexity, we developed an energy efficient resource provisioning heuristic for DS with communication fabric (EERP-DSCF), based on the MILP model insights, with power efficiency comparable to the MILP model. With EERP-DSCF we can serve larger numbers of VMs, for which the MILP model's scalability is challenging. Furthermore, we assess the energy efficiency of the DS design under stringent conditions by increasing the CPU-to-memory traffic and by including high non-communication power consumption to determine the conditions at which the DS and CS designs become comparable in power consumption. Finally, we present a complete analysis of the communication patterns in our new DS design and some recommendations for design and implementation challenges.
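    The paper's MILP model is not reproduced in the abstract, but its general shape can be sketched: binary variables assign VMs to resource pools, and the objective charges both compute power and communication-fabric power. The toy model below, written with the PuLP library, is only in the spirit of that formulation; the pool names, power figures, and capacities are invented for illustration.

```python
# Toy VM-to-pool allocation MILP; all constants are assumed, not the paper's.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vms = ["vm1", "vm2", "vm3"]
pools = ["cpu_pool_A", "cpu_pool_B"]
compute_power = {"cpu_pool_A": 40.0, "cpu_pool_B": 55.0}  # W per VM (assumed)
fabric_power = {"cpu_pool_A": 8.0, "cpu_pool_B": 3.0}     # W per VM link (assumed)
capacity = {"cpu_pool_A": 2, "cpu_pool_B": 2}             # VM slots per pool

prob = LpProblem("ds_vm_allocation", LpMinimize)
x = LpVariable.dicts("assign", [(v, p) for v in vms for p in pools], cat=LpBinary)

# Objective: total compute power plus communication-fabric power.
prob += lpSum(x[v, p] * (compute_power[p] + fabric_power[p])
              for v in vms for p in pools)

for v in vms:  # every VM is placed exactly once
    prob += lpSum(x[v, p] for p in pools) == 1
for p in pools:  # respect pool capacity
    prob += lpSum(x[v, p] for v in vms) <= capacity[p]

prob.solve()
for v in vms:
    for p in pools:
        if value(x[v, p]) == 1:
            print(v, "->", p)
```

    In this toy instance two VMs land in the cheaper pool A, and the third overflows to pool B once A's capacity is exhausted, which is the kind of joint compute-plus-fabric trade-off the paper's model optimizes at scale.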

    Many is beautiful : commoditization as a source of disruptive innovation

    Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2003. Includes bibliographical references (leaves 44-45). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
    The expression "disruptive technology" is now firmly embedded in the modern business lexicon. The mental model summarized by this concise phrase has great explanatory power for ex-post analysis of many revolutionary changes in business. Unfortunately, this paradigm can rarely be applied prescriptively. The classic formulation of a "disruptive technology" sheds little light on potential sources of innovation. This thesis seeks to extend this analysis by suggesting that many important disruptive technologies arise from commodities. The sudden availability of a high-performance factor input at a low price often enables innovation in adjacent market segments. The thesis suggests five main reasons that commodities spur innovation:
    • The emergence of a commodity collapses competition to the single dimension of price. Sudden changes in factor prices create new opportunities for supply-driven innovation. Low prices enable innovators to substitute quantity for quality.
    • The price/performance curve of a commodity creates an attractor that promotes demand aggregation.
    • Commodities emerge after the establishment of a dominant design. Commodities have defined and stable interfaces. Well-developed tool sets and experienced developer communities are available to work with commodities, decreasing the price of experimentation.
    • Distributed architectures based on large numbers of simple, redundant components offer more predictable performance. Systems based on a small number of high-performance components will have a higher standard deviation for uptime than high-granularity systems based on large numbers of low-power components (illustrated in the sketch after this abstract).
    • Distributed architectures are much more flexible than low-granularity systems. Large integrated facilities often provide cost advantages when operating at the minimum efficient scale of production. However, distributed architectures that can efficiently change production levels over time may be a superior solution, based on the ability to adapt to changing market demand patterns.
    The evolution of third-generation bus architectures in personal computers provides a comprehensive example of commodity-based disruption, incorporating all five forces.
    by Richard Ellert Willey. S.M.M.O.T.
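    The uptime-variance claim in the fourth bullet can be made concrete with a back-of-the-envelope calculation (mine, not the thesis's): if each of N components is up independently with probability p, the fraction of capacity available has standard deviation sqrt(p(1-p)/N), which shrinks as N grows.

```python
# Illustration of why many small redundant components give more
# predictable aggregate capacity than a few large ones.
import math

p = 0.99  # assumed per-component availability
for n in (4, 100, 10_000):
    std_fraction = math.sqrt(p * (1 - p) / n)
    print(f"N={n:>6}: available-capacity std = {std_fraction:.4%}")

# N=     4: ~4.98% swing in available capacity
# N=   100: ~0.99%
# N= 10000: ~0.10%
```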