
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a broad view of the options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
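    As a toy illustration of the class-based prioritization the survey discusses, here is a minimal sketch of a strict-priority queue (a generic example, not a scheme from the paper; the class names and ordering are assumed for illustration): interactive traffic is always served before deadline-constrained traffic, which is served before long-running bulk traffic.

    import heapq

    # Hypothetical traffic classes; a lower number means the class is served first.
    PRIORITY = {"interactive": 0, "deadline": 1, "long_running": 2}

    class StrictPriorityQueue:
        def __init__(self):
            self._heap = []
            self._seq = 0  # FIFO tie-breaker within a class

        def enqueue(self, packet, traffic_class):
            heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    q = StrictPriorityQueue()
    q.enqueue("backup-chunk", "long_running")
    q.enqueue("query-response", "interactive")
    print(q.dequeue())  # "query-response" leaves first despite arriving later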

    Designing multihop wireless backhaul networks with delay guarantees

    As wireless access technologies improve in data rates, the problem focus is shifting towards providing adequate backhaul from the wireless access points to the Internet. Existing wired backhaul technologies such as copper wires running at DSL, T1, or T3 speeds can be expensive to install or lease, and are becoming a performance bottleneck as wireless access speeds increase. Long-haul, non-line-of-sight wireless technologies such as WiMAX (802.16d) hold the promise of enabling a high-speed wireless backhaul as a cost-effective alternative. However, the biggest challenge in building a wireless backhaul is achieving the guaranteed performance (throughput and delay) that is typically provided by a wired backhaul. This paper explores the problem of efficiently designing a multihop wireless backhaul to connect multiple wireless access points to a wired gateway. In particular, we provide a generalized link activation framework for scheduling packets over this wireless backhaul, such that any existing wireline scheduling policy can be implemented locally at each node of the wireless backhaul. We also present techniques for determining good interference-free routes within our scheduling framework, given the link rates and cross-link interference information. When a multihop wireline scheduler with worst-case delay bounds (such as WFQ or Coordinated EDF) is implemented over the wireless backhaul, we show that our scheduling and routing framework guarantees approximately twice the delay of the corresponding wireline topology. Finally, we present simulation results to demonstrate the low delays achieved using our framework.
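    The roughly-twice-the-wireline-delay result has a simple intuition: when interfering links are activated in alternating time slots, each hop can forward only half the time. The toy model below (my own sketch of a chain of backhaul nodes forwarding toward the gateway, not the paper's generalized link activation framework) compares draining a backlog when even-numbered hops transmit in even slots and odd-numbered hops in odd slots against an idealized wireline chain where every hop forwards in every slot.

    from collections import deque

    def chain_drain_time(num_hops, num_packets, alternate=True):
        # queues[i] holds packets waiting at hop i; queues[num_hops] is the gateway.
        queues = [deque(range(num_packets))] + [deque() for _ in range(num_hops)]
        slot = 0
        while len(queues[num_hops]) < num_packets:
            # Process hops gateway-first so a packet advances at most one hop per slot.
            for hop in reversed(range(num_hops)):
                active = (slot % 2 == hop % 2) if alternate else True
                if active and queues[hop]:
                    queues[hop + 1].append(queues[hop].popleft())
            slot += 1
        return slot

    print(chain_drain_time(4, 10, alternate=True))   # alternating link activation
    print(chain_drain_time(4, 10, alternate=False))  # idealized wireline chain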

    Large-Scale Distributed Coalition Formation

    The CyberCraft project is an effort to construct a large-scale Distributed Multi-Agent System (DMAS) to provide autonomous Cyberspace defense and mission assurance for the DoD. It employs a small but flexible agent structure that is dynamically reconfigurable to accommodate new tasks and policies. This document describes research into developing protocols and algorithms to ensure continued mission execution in a system of one million or more agents, focusing on protocols for coalition formation and Command and Control. It begins by building large-scale routing algorithms for a Hierarchical Peer-to-Peer structured overlay network, called Resource-Clustered Chord (RC-Chord). RC-Chord introduces the ability to efficiently locate agents by the resources they possess. Combined with a task model defined for CyberCraft, this technology feeds into an algorithm that constructs task coalitions in a large-scale DMAS. Experiments reveal the flexibility and effectiveness of these concepts for achieving maximum work throughput in a simulated CyberCraft environment.
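    To make the resource-based lookup idea concrete, here is a minimal consistent-hashing sketch in the spirit of a Chord-style ring (illustrative only; RC-Chord's hierarchy, resource clustering, and finger-table routing are not reproduced, and the names are assumptions). A resource identifier is hashed onto the ring, and the agent responsible for it is the first agent at or after that position.

    import hashlib
    from bisect import bisect_left

    RING_BITS = 16

    def ring_hash(name: str) -> int:
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

    class ResourceRing:
        def __init__(self, agent_ids):
            # (position, agent) pairs sorted by position on the identifier ring.
            self.ring = sorted((ring_hash(a), a) for a in agent_ids)
            self.positions = [pos for pos, _ in self.ring]

        def agent_for(self, resource: str) -> str:
            key = ring_hash(resource)
            idx = bisect_left(self.positions, key) % len(self.ring)  # successor, with wrap-around
            return self.ring[idx][1]

    ring = ResourceRing([f"agent-{i}" for i in range(8)])
    print(ring.agent_for("network-scanner"))  # agent currently responsible for that resource key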

    Modeling and Controlling of an Integrated Distribution Supply Chain: Simulation Based Shipment Consolidation Heuristics

    Increasing competition due to market globalization, product diversity and technological breakthroughs stimulates independent firms to collaborate in a supply chain that allows them to gain mutual benefits. This requires collective knowledge of the coordination and integration mode, including the ability to synchronize interdependent processes, to integrate information systems and to cope with distributed learning. The Integrated Supply Chain Problem (ISCP) is concerned with coordinating the supply chain tiers, from supplier, production and inventory to distribution delivery operations, to meet customer demand with the objective of minimizing cost and maximizing supply chain service levels. In order to achieve high performance, supply chain functions must operate in an integrated and coordinated manner. Several challenging problems are associated with integrated supply chain design: (1) how to model and coordinate the supply chain business processes; (2) how to analyze the performance of an integrated supply chain network; and (3) how to evaluate the dynamics of the supply chain to obtain a comprehensive understanding of decision-making issues related to supply network configurations. These problems are among the most representative in supply chain research and applications. The particular real-life supply chain considered in this study involves multi-echelon, multi-level distribution supply chains, each echelon with its own inventory capacities, and multiple product types and classes. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially in a real-life situation where a multitude of aspects and functions should be taken into consideration. In this dissertation, a simulation-based heuristic solution method was implemented to effectively solve this integrated problem. A complex real-life simulation model, named LDNST, was developed for managing the flow of material, transportation, and information, considering multi-product, multi-echelon inventory levels and capacities in upstream and downstream supply chain locations, supported by an efficient Distribution Requirements Planning (DRP) model and involving several sequential optimization phases. In the calibration phase (phase 0), the allocation of facilities to customers in the supply chain was performed using Add/Drop heuristics, minimizing total distance traveled and maximizing the covering percentage. Several essential distribution strategies, such as the order fulfillment policy and the order picking principle, were defined in this phase, and the results obtained were carried into the subsequent optimization phases. The transportation function was modeled as pair-to-pair shipments with no vehicle routing decisions; this assumption generates two types of transportation trips, Full Truckload (FTL) trips and Less-than-Truckload (LTL) trips. Three integrated shipment consolidation heuristics were developed and integrated into the simulation model to handle the potential inefficiency of low utilization and high transportation cost incurred by LTL trips. The first consolidation heuristic considers a pure pull replenishment algorithm, the second is based on product-clustering replenishments with a vendor-managed inventory concept, and the last integrates vendor-managed inventory with advance demand information to generate a new hybrid replenishment strategy.
    The main advantage of the latter strategy over the other approaches is its ability to simultaneously optimize many integrated and interrelated decisions, for example on inventory and transportation operations, without requiring additional safety stock to improve supply chain service levels. Eight product inventory allocation and distribution strategies considering different safety stock levels were designed as benchmark experiments against the replenishment strategies developed above; selected supply chain performance measures were collected from the simulation results to identify trade-offs among the proposed distribution strategies. Three supply chain network configurations were proposed: the first was a multi-echelon distribution system with an installation-stock reorder policy; the second was a Transshipment Point (TP) configuration with a modified (s,S) inventory policy; and the last was a Sub-TP configuration, a special case of the second. The results show that, depending on the structure of the multi-echelon distribution system and the service level targets, both the echelon location with an installation-stock policy and the advance-demand-information replenishment strategy may be advantageous, and the observed service level improvements bear this out. Considering the complexity of modeling a real-life supply chain, the results obtained in this thesis reveal significant differences in performance measures such as activity-based costs and network service levels. A supply chain network example is employed to substantiate the effectiveness of the proposed methodologies and algorithms.
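    As a rough illustration of what a shipment consolidation heuristic does (a generic sketch, not one of the three heuristics above; the truck capacity, the 90% dispatch threshold, and the names are assumptions), the code below groups pending LTL orders by destination, packs them first-fit into trucks of fixed capacity, dispatches well-utilized truckloads, and holds partial loads for consolidation with later demand.

    from collections import defaultdict

    TRUCK_CAPACITY = 30.0      # assumed capacity, e.g. in pallets
    DISPATCH_THRESHOLD = 0.9   # dispatch a truck only if it is at least 90% full

    def consolidate(orders):
        """orders: list of (destination, size) pairs awaiting shipment."""
        by_dest = defaultdict(list)
        for dest, size in orders:
            by_dest[dest].append(size)

        dispatched, held = [], []
        for dest, sizes in by_dest.items():
            trucks = []  # remaining free capacity of each open truck for this destination
            for size in sorted(sizes, reverse=True):   # first-fit decreasing packing
                for i, free in enumerate(trucks):
                    if size <= free:
                        trucks[i] -= size
                        break
                else:
                    trucks.append(TRUCK_CAPACITY - size)
            for free in trucks:
                load = TRUCK_CAPACITY - free
                target = dispatched if load >= DISPATCH_THRESHOLD * TRUCK_CAPACITY else held
                target.append((dest, load))
        return dispatched, held

    full, partial = consolidate([("DC-North", 18), ("DC-North", 10), ("DC-South", 7)])
    print(full)     # well-utilized trucks dispatched now
    print(partial)  # partial loads held for the next consolidation cycle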

    Airborne Directional Networking: Topology Control Protocol Design

    This research identifies and evaluates the impact of several architectural design choices for airborne networking in contested environments, with a focus on autonomous topology control. Using simulation, we evaluate topology reconfiguration effectiveness with classical performance metrics for different point-to-point communication architectures. Our attention is focused on the design choices that have the greatest impact on reliability, scalability, and performance, and on the practical considerations that inform topology control modeling, for the purpose of qualifying protocol design elements.

    Particle swarm optimization for routing and wavelength assignment in next generation WDM networks.

    All-optical Wavelength Division Multiplexed (WDM) networking is a promising technology for long-haul backbone and large metropolitan optical networks in order to meet the non-diminishing bandwidth demands of future applications and services. Examples include archival and recovery of data to/from Storage Area Networks (e.g. for banks), high-bandwidth medical imaging (for remote operations), High Definition (HD) digital broadcast and streaming over the Internet, distributed orchestrated computing, and peak-demand short-term connectivity for Access Network providers and wireless network operators for backhaul surges. One desirable feature is fast and automatic provisioning. Connection (lightpath) provisioning in optically switched networks requires both route computation and a single wavelength to be assigned for the lightpath. This is called Routing and Wavelength Assignment (RWA). RWA can be classified as static RWA and dynamic RWA. Static RWA is an NP-hard (non-deterministic polynomial-time hard) optimisation task. Dynamic RWA is even more challenging as connection requests arrive dynamically, on the fly, and have random connection holding times. Traditionally, global-optimum mathematical search schemes like integer linear programming and graph colouring are used to find an optimal solution for NP-hard problems. However, such schemes become unusable for connection provisioning in a dynamic environment, due to the computational complexity and time required to undertake the search. To perform dynamic provisioning, different heuristic and stochastic techniques are used. Particle Swarm Optimisation (PSO) is a population-based global optimisation scheme that belongs to the class of evolutionary search algorithms and has successfully been used to solve many NP-hard optimisation problems in both static and dynamic environments. In this thesis, a novel PSO-based scheme is proposed to solve the static RWA case, which can achieve an optimal/near-optimal solution. In order to reduce the risk of premature convergence of the swarm and to avoid selecting local optima, a search scheme is proposed for static RWA based on the position of the swarm's global best particle and the personal best position of each particle. To solve the dynamic RWA problem, a PSO-based scheme is proposed which can provision a connection within a fraction of a second. This feature is crucial for provisioning services like bandwidth-on-demand connectivity. To improve the convergence speed of the swarm towards an optimal/near-optimal solution, a novel chaotic factor is introduced into the PSO algorithm (CPSO), which helps the swarm reach a relatively good solution in fewer iterations. Experimental results for the PSO/CPSO-based dynamic RWA algorithms show that the proposed schemes perform better than other evolutionary techniques, such as genetic algorithms and ant colony optimisation, in terms of both solution quality and computation time. The proposed schemes also show significant improvements in blocking probability performance compared to traditional dynamic RWA schemes like the SP-FF and SP-MU algorithms.
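    The following is a generic PSO sketch with a logistic-map "chaotic factor" modulating the inertia weight, to illustrate the kind of update rule a CPSO variant uses. This is not the thesis's algorithm: the RWA-specific encoding of routes and wavelengths is not reproduced, and a simple continuous objective stands in for the lightpath cost.

    import random

    def pso(objective, dim, swarm_size=20, iters=100, w=0.7, c1=1.5, c2=1.5, chaotic=True):
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
        vel = [[0.0] * dim for _ in range(swarm_size)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(swarm_size), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        z = 0.37                                   # chaotic variable for the logistic map
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)                # logistic map, stays in (0, 1)
            inertia = w * z if chaotic else w      # chaos-modulated inertia weight
            for i in range(swarm_size):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (inertia * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    best, cost = pso(lambda x: sum(v * v for v in x), dim=4)
    print(cost)  # should approach 0 for this toy objective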

    cISP: A Speed-of-Light Internet Service Provider

    Low latency is a requirement for a variety of interactive network applications. The Internet, however, is not optimized for latency. We thus explore the design of cost-effective wide-area networks that move data over paths very close to great-circle paths, at speeds very close to the speed of light in vacuum. Our cISP design augments the Internet's fiber with free-space wireless connectivity. cISP addresses the fundamental challenge of simultaneously providing low latency and scalable bandwidth, while accounting for numerous practical factors ranging from transmission tower availability to packet queuing. We show that instantiations of cISP across the contiguous United States and Europe would achieve mean latencies within 5% of those achievable using great-circle paths at the speed of light, over medium and long distances. Further, we estimate that the economic value from such networks would substantially exceed their expense.
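    A back-of-the-envelope sketch of the latency gap cISP targets: one-way delay along the great-circle path at the speed of light in vacuum (what free-space links can approach) versus a typical fiber path, assuming light travels at about 2/3 c in fiber and that the fiber route is about 1.5x longer than the great circle. Both assumptions and the example city pair are illustrative, not figures from the paper.

    import math

    C_KM_PER_MS = 299.792            # speed of light in vacuum, km per millisecond

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        # Haversine formula for the great-circle distance between two points.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        return 2 * radius_km * math.asin(math.sqrt(a))

    # New York City to Los Angeles (approximate coordinates)
    dist = great_circle_km(40.71, -74.01, 34.05, -118.24)
    c_latency = dist / C_KM_PER_MS                        # straight-line path at c
    fiber_latency = (dist * 1.5) / (C_KM_PER_MS * 2 / 3)  # stretched route, slower medium

    print(f"great-circle distance: {dist:.0f} km")
    print(f"speed-of-light latency: {c_latency:.1f} ms, typical fiber path: {fiber_latency:.1f} ms")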

    Paul Baran, Network Theory, and the Past, Present, and Future of the Internet

    Paul Baran's seminal 1964 article "On Distributed Communications Networks," which first proposed packet switching, also advanced an underappreciated vision of network architecture: a lattice-like, distributed network in which each node of the Internet would be homogeneous and equal in status to all other nodes. Scholars who have subsequently embraced the concept of a lattice-like network have largely overlooked the extent to which it is inconsistent both with network theory (associated with the work of Duncan Watts and Albert-László Barabási), which emphasizes the importance of short cuts and hubs in enabling networks to scale, and with the way the Internet was actually deployed initially, which relied on a three-tiered, hierarchical architecture of the kind Baran called a decentralized network. However, empirical studies reveal that the Internet's architecture is changing: it is in the process of becoming flatter and less hierarchical, as large content providers build extensive wide-area networks and undersea cables to connect directly to last-mile networks. This change is making the network more centralized rather than more distributed. As a result, this article suggests that the standard reference model that places backbones at the center of the architecture should be replaced with a radically different vision: a stack of centralized star networks, each centered on one of the leading content providers.
