
    Theoretical Analysis for Scale-down-Aware Service Allocation in Cloud Storage Systems

    Service allocation algorithms have been gaining popularity in the cloud computing research community. There has been much research on improving service allocation schemes for high utilization, latency reduction, and efficient VM migration, but little work focuses on the energy consumption affected by instance placement in data centers. In this paper we propose an algorithm that maximizes the number of freed-up machines in data centers, i.e., machines that host purely scale-down instances and are required to be shut down for energy saving at certain points in time. We employ a probability partitioning mechanism to schedule services so that this maximization goal can be achieved. Furthermore, we perform a set of experiments to test the partitioning rules, which show that the proposed algorithm can dynamically and substantially increase the number of freed-up machines. DOI: http://dx.doi.org/10.11591/ijece.v3i1.179
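    The abstract gives only the high-level goal; the partitioning rules themselves are not reproduced here. As a rough illustration of the packing idea under stated assumptions (a fixed machine capacity, instances labeled in advance as scale-down or not), a minimal first-fit sketch might segregate scale-down instances onto dedicated machines so that whole machines can later be powered off. All identifiers (Machine, place) are hypothetical, not from the paper.

        # Illustrative sketch only (not the paper's algorithm): keep scale-down
        # instances on dedicated machines so whole machines can be shut down.
        from dataclasses import dataclass

        @dataclass
        class Machine:
            capacity: int
            scale_down_only: bool   # True if it hosts only scale-down instances
            used: int = 0

            def fits(self, size: int) -> bool:
                return self.used + size <= self.capacity

        def place(instances, machines, capacity=100):
            """First-fit placement that never mixes scale-down and regular
            instances on one machine; returns the freed-up machines."""
            for size, is_scale_down in instances:
                pool = (m for m in machines if m.scale_down_only == is_scale_down)
                target = next((m for m in pool if m.fits(size)), None)
                if target is None:      # open a fresh machine for this class
                    target = Machine(capacity, is_scale_down)
                    machines.append(target)
                target.used += size
            return [m for m in machines if m.scale_down_only and m.used > 0]

        machines = []
        freed = place([(30, True), (50, False), (40, True)], machines)
        print(len(freed), "machine(s) host only scale-down instances")

    The segregation invariant, that a freed-up machine hosts only scale-down instances, is what allows it to be shut down whole; the paper's probability partitioning presumably refines how instances are assigned within that invariant.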

    Reducing Response Time with Preheated Caches

    CPU performance is increasingly limited by thermal dissipation, and aggressive power management will soon be beneficial for performance. In particular, temporarily idle parts of the chip (including the caches) should be power-gated to reduce leakage power. Current CPUs already lose their cache state whenever the CPU is idle for extended periods of time, which causes a performance loss when execution is resumed, due to the high number of cache misses while the working set is fetched from external memory. In a server system, the first network request processed after such an idle period suffers from increased response time. We present a technique to reduce this overhead by preheating the caches before the network request arrives at the server: our design predicts the working set of the server application by analyzing the cache contents after similar requests have been processed. As soon as an estimate of the working set is available, a predictable network architecture starts to announce future incoming network packets to the server, which then loads the predicted working set into the cache. Our experiments show that, if this preheating step is complete when the network packet arrives, the response-time overhead is reduced by an average of 80%.
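    The hardware mechanism is not detailed in the abstract; the control flow can be sketched in software under simplifying assumptions (working sets tracked per request type, announcements arriving ahead of the packet). The names working_sets, record, and preheat below are hypothetical, not from the paper.

        # Illustrative sketch only (not the paper's hardware design): remember
        # each request type's working set, then touch those addresses when an
        # incoming request is announced so the data is cached on arrival.
        from collections import defaultdict

        working_sets = defaultdict(set)       # request type -> known addresses

        def record(request_type, touched_addresses):
            """After serving a request, remember which lines it used."""
            working_sets[request_type].update(touched_addresses)

        def preheat(request_type, memory):
            """On an early announcement, read the predicted working set so the
            cache is warm before the request is actually processed."""
            for addr in working_sets[request_type]:
                _ = memory.get(addr)          # the read itself warms the cache

        memory = {0x10: "session table", 0x20: "route entry"}
        record("GET /index", [0x10, 0x20])    # learned from a past request
        preheat("GET /index", memory)         # triggered by the announcement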

    Flowtune: Flowlet Control for Datacenter Networks

    Rapid convergence to a desired allocation of network resources to endpoint traffic has been a long-standing challenge for packet-switched networks. The reason for this is that congestion control decisions are distributed across the endpoints, which vary their offered load in response to changes in application demand and network feedback on a packet-by-packet basis. We propose a different approach for datacenter networks, flowlet control, in which congestion control decisions are made at the granularity of a flowlet, not a packet. With flowlet control, allocations have to change only when flowlets arrive or leave. We have implemented this idea in a system called Flowtune using a centralized allocator that receives flowlet start and end notifications from endpoints. The allocator computes optimal rates using a new, fast method for network utility maximization, and updates endpoint congestion-control parameters. Experiments show that Flowtune outperforms DCTCP, pFabric, sfqCoDel, and XCP on tail packet delays in various settings, converging to optimal rates within a few packets rather than over several RTTs. Our implementation of Flowtune handles 10.4x more throughput per core and scales to 8x more cores than Fastpass, for an 83-fold throughput gain.
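    Flowtune's fast maximization method is not spelled out in the abstract; a minimal sketch of the general idea, assuming logarithmic (proportional-fair) utilities and a textbook dual-price update per link, is shown below. The function allocate and its parameters are illustrative, not Flowtune's API.

        # Illustrative sketch only (not Flowtune's solver): proportional-fair
        # rates from network utility maximization via a dual-price update on
        # each link; rates need recomputing only when flowlets start or end.
        def allocate(flow_links, capacity, iters=2000, step=0.01):
            """flow_links: flow -> set of links on its path;
            capacity: link -> capacity."""
            price = {l: 1.0 for l in capacity}
            rate = {}
            for _ in range(iters):
                # With U(x) = log x, the optimal rate is 1 / (path price sum).
                rate = {f: 1.0 / max(1e-9, sum(price[l] for l in links))
                        for f, links in flow_links.items()}
                for l, cap in capacity.items():
                    load = sum(r for f, r in rate.items() if l in flow_links[f])
                    price[l] = max(1e-3, price[l] + step * (load - cap))
            return rate

        rates = allocate({"A": {"L1"}, "B": {"L1", "L2"}},
                         {"L1": 10.0, "L2": 4.0})
        print(rates)   # A gets ~6, B ~4: B is additionally limited by link L2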

    Fastpass: A Centralized “Zero-Queue” Datacenter Network

    An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate control to a centralized arbiter, which decides when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter, and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves throughput comparable to current networks with a 240x reduction in queue lengths (4.35 MBytes reduced to 18 KBytes), achieves much fairer and more consistent flow throughputs than the baseline TCP (5200x reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5x reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.
    Funding: National Science Foundation (U.S.) (grant IIS-1065219); Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship; Hertz Foundation Fellowship.
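    The two arbiter algorithms are not given in the abstract; the timeslot-assignment half can be illustrated with a greedy per-slot matching sketch, assuming each endpoint can send, and each can receive, at most one packet per timeslot. allocate_timeslots is a hypothetical name, not Fastpass code.

        # Illustrative sketch only (not Fastpass's allocator): give each packet
        # the earliest timeslot in which both its source and its destination
        # are still free -- a greedy matching per timeslot.
        def allocate_timeslots(demands):
            """demands: list of (src, dst) packets; returns (src, dst) -> slots."""
            schedule = {}
            busy = []            # busy[t] = endpoints already used in slot t
            for src, dst in demands:
                slot = 0
                while slot < len(busy) and {src, dst} & busy[slot]:
                    slot += 1
                if slot == len(busy):
                    busy.append(set())
                busy[slot] |= {src, dst}
                schedule.setdefault((src, dst), []).append(slot)
            return schedule

        print(allocate_timeslots([("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]))
        # A->B and D->C share slot 0; A->C and D->B share slot 1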

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.

    Packet scheduling algorithms for a software-defined manufacturing environment

    With the vision of Industry 4.0, the Internet of Things (IoT) and the Internet of Services (IoS) are making their way into modern manufacturing systems and industrial automation. As a consequence, modern manufacturing systems need wider product variation and customization to meet customers' demands and survive in competitive markets. Traditional, dedicated systems like assembly lines cannot adapt to the rapidly changing requirements of today's manufacturing industries; a flexible and highly scalable infrastructure is needed to support such systems. However, most applications in manufacturing systems require strict QoS guarantees. For instance, time-sensitive networks as found in industrial automation and smart factories need hard real-time guarantees, so deterministic networks with bounded delay and jitter are an essential requirement, and non-deterministic queueing delay has to be eliminated from the network. To this end, we present Time-Sensitive Software-Defined Networks (TSSDN), in which a logically centralized controller computes transmission schedules based on a global view of the network. The SDN control logic computes optimized transmission schedules for the end hosts that avoid in-network queueing delay. To compute these schedules, we present Integer Linear Programming formulations as well as routing and scheduling algorithms with heuristics that schedule and route unicast and multicast flows. Our evaluations show that it is possible to compute near-optimal transmission schedules for TSSDN and to bound network delay and jitter.
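    The ILP formulation itself is not included in the abstract; as a rough illustration of what computing transmission schedules that avoid in-network queueing means, a greedy sketch under strong simplifications (one common period, unit-slot transmissions, one slot of delay per hop) follows. schedule_flows is hypothetical, not from the thesis.

        # Illustrative sketch only (not the thesis's ILP): choose a transmission
        # offset per periodic flow so no two flows meet on a shared link in the
        # same slot, which removes in-network queueing. Assumes one common
        # period, unit-slot transmissions, and one slot of delay per hop.
        def schedule_flows(flows, period):
            """flows: flow -> ordered list of links (its fixed route)."""
            link_busy = set()             # occupied (link, slot) pairs
            offsets = {}
            for flow, route in flows.items():
                for offset in range(period):
                    slots = {(link, (offset + hop) % period)
                             for hop, link in enumerate(route)}
                    if not (slots & link_busy):
                        link_busy |= slots
                        offsets[flow] = offset
                        break
                else:
                    raise ValueError(f"no conflict-free offset for {flow}")
            return offsets

        print(schedule_flows({"f1": ["L1", "L2"], "f2": ["L1", "L3"]}, period=4))
        # f1 sends at offset 0; f2 shifts to offset 1 to avoid L1 in slot 0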

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with these policies is critical, and often legally binding, for service providers, but it is challenging because applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code; Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code. Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via its network traffic shape, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant's traffic shape but allows variation based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, demonstrating their usability.
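    Qapla's adapter and policy language are not shown in the abstract; the query-rewriting idea it describes can be sketched as conjoining a per-table policy predicate onto each query in a thin adapter. The POLICIES table, the :user placeholder, and rewrite are illustrative inventions, not Qapla's actual interface.

        # Illustrative sketch only (not Qapla's interface): attach a row-level
        # policy predicate to each table and enforce it by rewriting queries in
        # a thin adapter, keeping compliance out of application code.
        POLICIES = {   # table -> predicate; :user is filled in per request
            "salaries": "(employee_id = :user OR is_manager(:user))",
            "reviews":  "(reviewer_id = :user)",
        }

        def rewrite(query, table, user_id):
            """Conjoin the table's policy with the query's WHERE clause."""
            policy = POLICIES[table].replace(":user", str(user_id))
            if " where " in query.lower():
                return f"{query} AND {policy}"
            return f"{query} WHERE {policy}"

        print(rewrite("SELECT name, salary FROM salaries", "salaries", 42))
        # SELECT name, salary FROM salaries WHERE (employee_id = 42 OR ...)

    Because the rewrite happens below the application, every code path that reaches the database passes through the same policy check, which is the separation of concerns the abstract describes.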

    Scheduling & routing time-triggered traffic in time-sensitive networks

    The application of recent advances in computing, cognitive, and networking technologies to manufacturing has triggered the so-called fourth industrial revolution, also referred to as Industry 4.0. Smart and flexible manufacturing systems are being conceived as part of the Industry 4.0 initiative to meet the challenging requirements of modern-day manufacturers, e.g., production batch sizes of one. The information and communication technologies (ICT) infrastructure in such smart factories is expected to host heterogeneous applications, ranging from time-sensitive cyber-physical systems regulating physical processes on the manufacturing shopfloor to soft real-time analytics applications predicting anomalies in the assembly line. Given the diverse demands of these applications, a single converged network that provides different levels of communication guarantees based on application requirements is desired. Ethernet, on account of its ubiquity, its steadily growing performance, and its shrinking costs, has emerged as a popular choice for such a converged network. However, Ethernet networks, primarily designed for best-effort communication services, cannot provide strict guarantees like bounded end-to-end latency and jitter for real-time traffic without additional enhancements. Two major standardization bodies, the IEEE Time-Sensitive Networking (TSN) Task Group (TG) and the IETF Deterministic Networking (DetNet) Working Group, are striving to equip Ethernet networks with mechanisms that enable them to support different classes of real-time traffic. In this thesis, we focus on handling time-triggered traffic (primarily periodic in nature), stemming from the hard real-time cyber-physical systems embedded in the manufacturing shopfloor, over Ethernet networks. The basic approach is to schedule the transmissions of the time-triggered data streams appropriately through the network and to ensure that the allocated schedules are adhered to. This approach leverages the ability to precisely synchronize the clocks of the network participants, i.e., end systems and switches, using time synchronization protocols like the IEEE 1588 Precision Time Protocol (PTP). Based on the capabilities of the network participants, the responsibility of enforcing these schedules can be distributed. An important point to note is that the network utilization with respect to the time-triggered data streams depends on the computed schedules. Furthermore, the routing of the time-triggered data streams also influences the computed transmission schedules and thus affects the network utilization. The question remains, however, how to compute transmission schedules for time-triggered data streams, along with their routes, so that optimal network utilization can be achieved. In this thesis, we explore the scheduling and routing problems for time-triggered data streams in Ethernet networks. The recently published IEEE 802.1Qbv standard from the TSN TG provides programmable gating mechanisms that enable switches to schedule transmissions. Meanwhile, the extensions specified in the IEEE 802.1Qca standard, or the primitives provided by OpenFlow, the popular southbound software-defined networking (SDN) protocol, can be used to gain explicit control over the routing of the data streams. Using these mechanisms, the responsibility of enforcing transmission schedules can be taken over by the end systems as well as the switches in the network.
Alternatively, scheduling can be enforced only by the end systems or only by the switches. Furthermore, routing alone can be used to isolate time-triggered data streams and thus bound the latency and jitter they experience in the absence of synchronized clocks in the network. For each of these cases, we formulate the scheduling and routing problem using Integer Linear Programming (ILP), for static as well as dynamic scenarios. The static scenario deals with computing schedules and routes for time-triggered data streams whose specifications are known a priori; here, we focus on schedules and routes that are optimal with respect to network utilization. Given that the scheduling problems in the static setting have high time-complexity, we also present efficient heuristics to approximate the optimal solution. With the dynamic scheduling problem, we address modifications to already computed transmission schedules for adding new, or removing already scheduled, time-triggered data streams. Here, the focus lies on reducing the runtime of the scheduling and routing algorithms, and thus the set-up time for adding new data streams to the network.
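    The ILP formulations and heuristics are not reproduced in the abstract; to make the interplay of routing and scheduling concrete, a toy joint search under the same simplifications as the sketch above (common period, unit-slot hops) might try every candidate route and offset per stream and keep the cheapest conflict-free pair. schedule_and_route is hypothetical, not the thesis's algorithm.

        # Illustrative sketch only (not the thesis's formulation): jointly pick
        # a route and an offset per time-triggered stream, preferring the
        # shortest feasible route at the earliest conflict-free offset.
        def schedule_and_route(streams, period):
            """streams: stream -> list of candidate routes (each a link list)."""
            link_busy = set()             # occupied (link, slot) pairs
            plan = {}
            for stream, routes in streams.items():
                candidates = []
                for route in routes:
                    for offset in range(period):
                        slots = {(l, (offset + h) % period)
                                 for h, l in enumerate(route)}
                        if not (slots & link_busy):
                            candidates.append((len(route), offset, route, slots))
                if not candidates:
                    raise ValueError(f"{stream} is unschedulable")
                _, offset, route, slots = min(candidates, key=lambda c: c[:2])
                link_busy |= slots
                plan[stream] = (route, offset)
            return plan

        plan = schedule_and_route(
            {"s1": [["L1", "L2"]], "s2": [["L1", "L3"], ["L4", "L3"]]}, period=4)
        print(plan)   # s2 dodges the conflict on L1 by taking the L4 route

    The toy example shows the thesis's observation in miniature: with routing fixed, s2 would have to delay its transmission behind s1 on L1, whereas choosing the alternate route lets both streams send at offset 0, improving utilization.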