
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources needed for today's cloud computing. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to give readers a broad view of the options and factors to weigh when considering a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for Publication in IEEE Communications Surveys and Tutorials
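    To make one of the mechanisms mentioned above concrete, the following is a minimal sketch (not taken from the paper) of a token-bucket traffic shaper, one common way to enforce a per-flow sending rate; all names and parameters are illustrative assumptions.

    import time

    class TokenBucket:
        """Minimal token-bucket shaper: a packet is admitted only if enough
        tokens (bytes of credit) have accumulated at the configured rate."""

        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s      # long-term sending rate
            self.burst = burst_bytes          # maximum short-term burst
            self.tokens = burst_bytes         # start with a full bucket
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes   # admit the packet and spend credit
                return True
            return False                      # packet must wait or be dropped

    # Example: shape a flow to 10 MB/s with a 64 KB burst allowance.
    bucket = TokenBucket(rate_bytes_per_s=10_000_000, burst_bytes=64_000)
    print(bucket.allow(1500))   # True: the bucket starts full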

    Semantics-preserving cosynthesis of cyber-physical systems


    Energy-Aware Scheduling for Streaming Applications

    Streaming applications have become increasingly important and widespread, with application domains ranging from embedded devices to server systems. Traditionally, researchers have been focusing on improving the performance of streaming applications to achieve high throughput and low response time. However, increasingly more attention is being shifted to the power/performance trade-off because power consumption has become a limiting factor on system design as integrated circuits enter the realm of nanometer technology. This work addresses the problem of scheduling a streaming application (represented by a task graph) with the goal of minimizing its energy consumption while satisfying its two quality of service (QoS) requirements, namely, throughput and response time. The available power management mechanisms are dynamic voltage scaling (DVS), which has been shown to be effective in reducing dynamic power consumption, and vary-on/vary-off, which turns processors on and off to save static power consumption. Scheduling algorithms are proposed for different computing platforms (uniprocessor and multiprocessor systems), different characteristics of workload (deterministic and stochastic workload), and different types of task graphs (singleton and general task graphs). Both continuous and discrete processor power models are considered. The highlights are a unified approach for obtaining optimal (or provably close to optimal) uniprocessor DVS schemes for various DVS strategies and a novel multiprocessor scheduling algorithm that exploits the difference between the two QoS requirements to perform processor allocation, task mapping, and task speed scheduling simultaneously.
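    As a rough illustration of the DVS idea above (a minimal sketch under the conventional CMOS assumption that dynamic energy per cycle scales roughly with frequency squared, not the scheduling algorithm proposed in this work), the snippet below picks the lowest discrete frequency that still meets a task's deadline; all names and numbers are hypothetical.

    def pick_dvs_frequency(cycles, deadline_s, freqs_hz):
        """Choose the lowest discrete frequency that finishes `cycles` of work
        within `deadline_s`. Running slower lowers the supply voltage, and
        under the usual CMOS model dynamic energy per cycle drops roughly
        with f^2, so the slowest feasible level minimizes dynamic energy."""
        for f in sorted(freqs_hz):
            if cycles / f <= deadline_s:
                # Relative dynamic energy estimate: cycles * f^2 (arbitrary units).
                return f, cycles * f ** 2
        raise ValueError("deadline cannot be met even at the highest frequency")

    # Example: 2e9 cycles of work, 1.5 s deadline, three discrete speed levels.
    freq, energy = pick_dvs_frequency(2e9, 1.5, [1.0e9, 1.5e9, 2.0e9])
    print(freq)    # 1.5e9 Hz: the slowest level that still meets the deadline
    print(energy)  # relative energy; running at 2.0e9 Hz would cost ~1.8x more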