
    ๊ฐ€์ƒํ™” ํ™˜๊ฒฝ์„ ์œ„ํ•œ ์›๊ฒฉ ๋ฉ”๋ชจ๋ฆฌ

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2021.8. Bernhard Egger.

    Abstract (translated from the Korean): Cloud environments free customers from keeping massive computing resources running at all times: they pay only for the amount of computation they need, at the moment they need it. Demand for the cloud has therefore grown sharply with the recent rise of artificial intelligence and big data workloads. With the adoption of cloud computing, customers greatly reduce the cost of maintaining servers, while service providers can maximize the utilization of their computing resources. In this setting, improving the utilization of computational resources becomes an important goal for the data center; given the rapidly growing scale of modern data centers, even small efficiency improvements create enormous economic value. Data center efficiency is affected by many factors, including site selection, structural design, cooling systems, and hardware configuration; this dissertation is concerned with the design and implementation of the software that manages computational and memory resources. We propose two software-based techniques that dramatically improve data center efficiency. First, we propose a software-based memory disaggregation system for virtualized environments. Advances in high-speed networks have dramatically reduced the cost of accessing remote memory, and we show that, on high-performance networking hardware, virtual machines can run on top of remote memory without significant performance degradation. Evaluated on the QEMU/KVM hypervisor, the proposed technique improves the tail latency of remote paging by 98.2% compared to existing systems, and a rack-scale job processing simulation shows that it reduces the total job processing time by 40.9%. Second, we propose an instant live migration technique for virtual machines backed by remote memory. While extending virtualized environments with remote memory already contributes substantially to resource utilization, performance can still degrade severely when multiple applications on one server compete for resources. The proposed technique migrates a virtual machine by transferring only a very small amount of metadata over remote memory; in an evaluation with virtual machines running an in-memory key-value database benchmark, it improves the effective service downtime by up to 92.6% over existing techniques.

    Abstract (English): The rising importance of big data and artificial intelligence (AI) has led to an unprecedented shift in moving local computation into the cloud. One of the key drivers behind this transformation was the exploding cost of owning and maintaining large computing systems powerful enough to process these new workloads. Customers experience a reduced cost by renting only the required resources and only when needed, while data center operators benefit from efficiency at scale. A key factor in operating a profitable data center is a high overall utilization of its resources.
Due to the scale of modern data centers, small improvements in efficiency translate to significant savings in the total cost of ownership (TCO). There are many important elements that constitute an efficient data center, such as its location, architecture, cooling system, or the employed hardware. In this thesis, we focus on software-related aspects, namely the utilization of computational and memory resources. Reports from data centers operated by Alibaba and Google show that overall resource utilization has stagnated at around 50 to 60 percent over the past decade. This low average utilization is mostly attributable to peak-demand-driven resource allocation, despite the high variability of modern workloads in their resource usage. In other words, data centers today lack an efficient way to put idle resources that are reserved but not used to work. In this dissertation, we present RackMem, a software-based solution that addresses the problem of low resource utilization through two main contributions. First, we introduce a disaggregated memory system tailored to virtual environments. We observe that, on modern networking infrastructure, virtual machines can use remote memory without noticeable performance degradation under moderate memory pressure. We implement a specialized remote paging system for QEMU/KVM that reduces the remote paging tail latency by 98.2% in comparison to the state of the art. A job processing simulation at rack scale shows that the total makespan can be reduced by 40.9% under our memory system. While seamless disaggregated memory helps to balance memory usage across nodes, individual nodes can still suffer from overloaded resources if co-located workloads exhibit high resource usage at the same time. In a second contribution, we present a novel live migration technique for machines running on top of our remote paging system. With this instant live migration technique, entire virtual machines can be migrated in as little as 100 milliseconds. An evaluation with in-memory key-value database workloads shows that the presented migration technique improves on the state of the art by a wide margin in all key performance metrics. The presented software-based solutions lay the technical foundations that allow data center operators to significantly improve the utilization of their computational and memory resources. As future work, we propose new job schedulers and load balancers to make full use of these new technical foundations.

Table of contents:
Chapter 1. Introduction
  1.1 Contributions of the Dissertation
Chapter 2. Background
  2.1 Resource Disaggregation
  2.2 Transparent Remote Paging
  2.3 Remote Direct Memory Access (RDMA)
  2.4 Live Migration of Virtual Machines
Chapter 3. RackMem Overview
  3.1 RackMem Virtual Memory
  3.2 RackMem Distributed Virtual Storage
  3.3 RackMem Networking
  3.4 Instant VM Live Migration
Chapter 4. Virtual Memory
  4.1 Design Considerations for Achieving Low Latency
  4.2 Pagefault Handling
    4.2.1 Fast Path and Slow Path in the Pagefault Handler
    4.2.2 State Transition of a RackVM Page
  4.3 Latency Hiding Techniques
  4.4 Implementation
    4.4.1 RackMem Virtual Memory Module
    4.4.2 Dynamic Rebalancing of Local Memory
    4.4.3 RackVM for Virtual Machines
    4.4.4 Running Unmodified Applications
Chapter 5. RackMem Distributed Virtual Storage
  5.1 The Distributed Storage Abstraction
  5.2 Memory Management
    5.2.1 Remote Memory Allocation
    5.2.2 Remote Memory Reclamation
  5.3 Fault Tolerance
    5.3.1 Fault Tolerance and Write Duplication
  5.4 Multiple Storage Support in RackMem
  5.5 Implementation
    5.5.1 The Remote Memory Backend
    5.5.2 Linux Demand Paging on RackDVS
Chapter 6. Networking
  6.1 Design of RackNet
  6.2 Implementation
    6.2.1 RPC Message Layout
    6.2.2 RackNet RPC Implementation
Chapter 7. Instant VM Live Migration
  7.1 Motivation
    7.1.1 The Need for a Tailored Live Migration Technique
    7.1.2 Software Bottlenecks
    7.1.3 Utilizing Workload Variability
  7.2 Design of Instant
    7.2.1 Instant Region Migration
  7.3 Implementation
    7.3.1 Extension of RackVM for Instant
    7.3.2 Instant Region Migration
    7.3.3 Pre-fetch Optimizations
    7.3.4 Downtime Optimizations
    7.3.5 QEMU Modification for Instant
Chapter 8. Evaluation - RackMem
  8.1 Execution Environment
  8.2 Pagefault Handler Latency
  8.3 Single Application Performance
    8.3.1 Batch-oriented Applications
    8.3.2 Internal Pagesize and Performance
    8.3.3 Write-duplication Overhead
    8.3.4 RackDVS Slab Size and Performance
    8.3.5 Latency-oriented Applications
    8.3.6 Network Bandwidth Analysis
    8.3.7 Dynamic Local Memory Partitioning
    8.3.8 Rack-scale Job Processing Simulation
Chapter 9. Evaluation - Instant VM Live Migration
  9.1 Experimental Setup
  9.2 Target Applications
  9.3 Comparison Targets
  9.4 Database and Client Setups
  9.5 Memory Disaggregation Scenarios
    9.6.1 Time-to-responsiveness
    9.6.2 Effective Downtime
    9.6.3 Effect of Instant Optimizations
Chapter 10. Conclusion
  10.1 Future Directions
Abstract in Korean (์š”์•ฝ)
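    The fast-path/slow-path pagefault handling and the metadata-only migration described above can be illustrated with a small sketch. RackMem itself is a Linux kernel module written in C; the Python below is only a hedged illustration of the paging idea, with hypothetical names (RemotePager, fetch_page, write_page) that do not come from the thesis: a page access either hits the local cache (fast path) or fetches the page from a remote node (slow path), evicting a cold page to remote memory when the cache is full.

```python
from collections import OrderedDict

class RemotePager:
    """Toy demand pager: hot pages cached locally, cold pages on a remote node."""

    def __init__(self, local_capacity, fetch_page, write_page):
        self.local = OrderedDict()      # page_id -> bytes, kept in LRU order
        self.capacity = local_capacity  # max pages held in local memory
        self.fetch_page = fetch_page    # callable: read a page from the remote node
        self.write_page = write_page    # callable: write a page to the remote node

    def access(self, page_id):
        if page_id in self.local:            # fast path: local hit
            self.local.move_to_end(page_id)
            return self.local[page_id]
        data = self.fetch_page(page_id)      # slow path: fetch over the network
        if len(self.local) >= self.capacity:
            victim, vdata = self.local.popitem(last=False)  # evict coldest page
            self.write_page(victim, vdata)
        self.local[page_id] = data
        return data
```

    In such a design most of a VM's pages already reside in remote memory, so migrating the VM only requires handing over the small mapping metadata to the destination host, which is consistent with the sub-second downtimes reported above.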

    Addressing the Challenges in Federating Edge Resources

    This book chapter considers how Edge deployments can be brought to bear in a global context by federating them across multiple geographic regions to create a global Edge-based fabric that decentralizes data center computation. This is currently impractical not only because of technical challenges, but also because of social, legal, and geopolitical issues. In this chapter, we discuss two key challenges in federating Edge deployments: networking and management. Additionally, we consider the resource and modeling challenges that will need to be addressed for a federated Edge. (Book chapter accepted for Fog and Edge Computing: Principles and Paradigms, edited by Buyya and Srirama.)

    Understanding Security Threats in Cloud

    As cloud computing has become a trend in the computing world, understanding its security concerns is essential for improving service quality and expanding business scale. This dissertation studies the security issues of a public cloud from three aspects. First, we investigate a new threat, called the power attack, in the cloud. Second, we perform a systematic measurement on the public cloud to understand how cloud vendors react to existing security threats. Finally, we propose a novel technique that performs data reduction on audit data to improve system capacity, thereby helping to enhance security in the cloud. For the power attack, we exploit various attack vectors in platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS) cloud environments. To demonstrate the feasibility of launching a power attack, we conduct a series of testbed-based experiments and data-center-level simulations. Moreover, we give a detailed analysis of how different power management methods affect a power attack and how to mitigate such an attack. Our experimental results and analysis show that power attacks pose a serious threat to modern data centers and should be taken into account when deploying new high-density servers and power management techniques. In the measurement study, we mainly investigate how cloud vendors have reacted to the co-residence threat inside the cloud, in terms of Virtual Machine (VM) placement, network management, and Virtual Private Cloud (VPC). Specifically, through intensive measurement probing, we first profile the dynamic environment of cloud instances inside the cloud. Then, using real experiments, we quantify the impact of VM placement and network management on co-residence, respectively. Moreover, we explore VPC, a defensive service of Amazon EC2 for security enhancement, from the routing perspective. As the Advanced Persistent Threat (APT) is a serious cyber-threat, cloud vendors are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing over long periods of time, which in turn produces an overwhelmingly large volume of system audit logs. We propose a new approach that exploits the dependencies among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves event dependencies during data reduction to ensure high-quality forensic analysis. We then propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. We conduct a comprehensive evaluation on real-world auditing systems, using more than one month of log traces, to validate the efficacy of our approach.
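    As a rough illustration of the dependency-preserving reduction described above, the following hedged Python sketch merges repeated events on the same (subject, operation, object) edge, while any interleaving event on another edge ends the run, so the causal ordering needed for forensic analysis survives. The event format and the merge rule are simplifying assumptions, not the dissertation's exact algorithm.

```python
from dataclasses import dataclass

@dataclass
class Event:
    ts: float        # timestamp
    subject: str     # e.g. a process name
    op: str          # e.g. "read" or "write"
    obj: str         # e.g. a file path
    count: int = 1   # how many raw events this record stands for

def reduce_log(events):
    """Merge runs of identical (subject, op, obj) events; any interleaving
    event on a different edge ends the run, preserving event dependencies."""
    reduced = []
    for ev in events:
        last = reduced[-1] if reduced else None
        if last and (last.subject, last.op, last.obj) == (ev.subject, ev.op, ev.obj):
            last.count += 1
            last.ts = ev.ts      # keep the time of the latest occurrence
        else:
            reduced.append(ev)
    return reduced

log = [Event(1, "proc", "write", "/tmp/f"), Event(2, "proc", "write", "/tmp/f"),
       Event(3, "backup", "read", "/tmp/f"), Event(4, "proc", "write", "/tmp/f")]
print(len(reduce_log(log)))  # 3: the interleaving read keeps the later write separate
```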

    Advances in Dynamic Virtualized Cloud Management

    Cloud computing continues to gain in popularity, with more and more applications being deployed into public and private clouds. Deploying an application in the cloud allows application owners to provision computing resources on demand and to scale quickly to meet demand. An Infrastructure as a Service (IaaS) cloud provides low-level resources, in the form of virtual machines (VMs), to clients on a pay-per-use basis. The cloud provider (owner) can reduce costs by lowering power consumption. As a typical server can consume 50% or more of its peak power even when idle, this can be accomplished by consolidating client VMs onto as few hosts (servers) as possible. This, however, can lead to resource contention and degraded VM performance, so VM placements must be dynamically adapted to meet changing workload demands. We refer to this process as dynamic management. Clients should also take advantage of the cloud environment by scaling their applications up and down (adding and removing VMs) to match current workload demands. This thesis makes a number of contributions to the field of dynamic cloud management. First, we propose a method of dynamically switching between management strategies at run time in order to achieve more than one management goal. To increase the scalability of dynamic management algorithms, we introduce a distributed version of our management algorithm. We then consider deploying applications that consist of multiple VMs and automatically scaling their deployment to match their workload; we present an integrated management algorithm that handles both dynamic management and application scaling. When dealing with multi-VM applications, the placement of communicating VMs within the data centre topology should be taken into account, so we propose a topology-aware version of our dynamic management algorithm. Finally, we describe a simulation tool, DCSim, which we have developed to help evaluate dynamic management algorithms and techniques.
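    As a concrete illustration of the consolidation step described above, the following hedged sketch packs VMs onto as few hosts as possible using first-fit decreasing on CPU demand. It is a generic bin-packing heuristic offered for illustration, not the thesis's actual dynamic management algorithm, which must also handle resource contention and relocation costs.

```python
# Pack VMs onto as few hosts as possible so idle hosts can be powered down.
def consolidate(vms, host_capacity):
    """vms: {vm_id: cpu_demand}; returns a list of hosts, each a dict of VMs."""
    hosts = []   # each host: {"free": remaining capacity, "vms": {...}}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host["free"] >= demand:           # first host with enough room
                host["free"] -= demand
                host["vms"][vm] = demand
                break
        else:                                    # no host fits: power one on
            hosts.append({"free": host_capacity - demand, "vms": {vm: demand}})
    return hosts

placement = consolidate({"web1": 30, "db1": 55, "web2": 25, "cache": 20}, 100)
print(len(placement), "hosts powered on")        # -> 2 hosts powered on
```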

    Holistic Virtual Machine Scheduling in Cloud Datacenters towards Minimizing Total Energy

    Energy consumed by Cloud datacenters has increased dramatically, driven by the rapid uptake of applications and services provisioned globally through virtualization. By applying energy-aware virtual machine scheduling, Cloud providers can achieve enhanced energy efficiency and reduced operating cost. The energy consumption of a datacenter consists of computing energy and cooling energy. However, due to the complexity of modeling the energy and thermal behavior of realistic Cloud datacenter operation, traditional approaches are unable to provide a comprehensive virtual machine scheduling solution that encompasses both computing and cooling energy. This paper addresses this challenge by presenting an elaborate thermal model that analyzes the temperature distribution of airflow and server CPUs. We propose GRANITE, a holistic virtual machine scheduling algorithm capable of minimizing total datacenter energy consumption. The algorithm is evaluated against existing workload scheduling algorithms (MaxUtil, TASA, IQR, and Random) using real Cloud workload characteristics extracted from a Google datacenter tracelog. Results demonstrate that GRANITE consumes 4.3% to 43.6% less total energy than the state of the art and reduces the probability of critical temperature violation by 99.2%, with a 0.17% SLA violation rate as the performance penalty.
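    The total-energy objective can be made concrete with a small sketch: total power is computing power plus cooling power, with cooling modeled through the cooling system's coefficient of performance (CoP). The linear server power model and the quadratic CoP curve below are common approximations assumed here for illustration; they are not GRANITE's exact thermal model, which also tracks airflow and CPU temperature distributions.

```python
def server_power(util, idle_w=100.0, peak_w=200.0):
    """Linear server power model: an idle host already draws ~50% of peak."""
    return idle_w + (peak_w - idle_w) * util

def cop(supply_temp_c):
    """Example CoP curve: cooling is more efficient at warmer supply temperatures."""
    return 0.0068 * supply_temp_c ** 2 + 0.0008 * supply_temp_c + 0.458

def best_host(utils, supply_temps, vm_util):
    """Pick the host where placing the VM adds the least computing + cooling power."""
    best = None
    for i, (u, t) in enumerate(zip(utils, supply_temps)):
        if u + vm_util > 1.0:
            continue                      # host would be overloaded, skip it
        extra_compute = server_power(u + vm_util) - server_power(u)
        extra_total = extra_compute * (1.0 + 1.0 / cop(t))
        if best is None or extra_total < best[0]:
            best = (extra_total, i)
    return None if best is None else best[1]

# Three hosts at 30/70/50% CPU, in zones with different cooling supply temperatures.
print(best_host([0.3, 0.7, 0.5], [24.0, 28.0, 26.0], 0.2))  # -> 1
```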

    Reconfigurable Optically Interconnected Systems

    With the immense growth of data consumption in today's data centers and high-performance computing systems, driven by the constant influx of new applications, the network infrastructure supporting this demand is under increasing pressure to meet ever higher bandwidth, latency, and flexibility requirements. Optical interconnects, able to carry high-bandwidth wavelength-division-multiplexed signals with extreme energy efficiency, have become the basis for long-haul and metro-scale networks around the world, while photonic components are being rapidly integrated within rack- and chip-scale systems. However, optical and photonic interconnects are not a direct replacement for electronic components. Rather, integrating optical interconnects with electronic peripherals enables unique functionalities that can improve the capacity, compute performance, and flexibility of current state-of-the-art computing systems. This requires physical-layer methodologies for their integration with electronic components, as well as system-level control planes that incorporate the characteristics of the optical layer. This thesis explores various network architectures and the associated control plane, hardware infrastructure, and other supporting software modules needed to integrate silicon photonics and MEMS-based optical switching into conventional datacom network systems, ranging from intra-data-center and high-performance computing systems to the metro-scale networks between data centers. In each of these systems, we demonstrate dynamic bandwidth steering and compute resource allocation capabilities that enable significant performance improvements. The key accomplishments of this thesis are as follows. In Part 1, we present high-performance computing network architectures that integrate silicon photonic switches for optical bandwidth steering, enabling multiple reconfigurable topologies and resulting in significant system performance improvements. As high-performance systems rely on increased parallelism by scaling up to greater numbers of processor nodes, communication between these nodes grows rapidly and the interconnection network becomes a bottleneck to the overall performance of the system. It has been observed that many scientific applications operating on high-performance computing systems cause highly skewed traffic over the network, congesting only a small percentage of the total available links while other links are underutilized. This mismatch between the traffic and the bandwidth allocation of the physical-layer network presents the opportunity to optimize the system's bandwidth resource utilization by using silicon photonic switches to perform bandwidth steering. This allows the individual processors to perform at their maximum compute potential, thereby improving the overall system performance. We show various testbeds that integrate both microring-resonator and Mach-Zehnder based silicon photonic switches within Dragonfly and Fat-Tree topology networks built with conventional equipment, and demonstrate a 30-60% reduction in the execution time of real high-performance benchmark applications. Part 2 presents a flexible network architecture and control plane that enable autonomous bandwidth steering and IT resource provisioning between metro-scale, geographically distributed data centers.
It uses a software-defined control plane to autonomously provision both network and IT resources, supporting different quality-of-service requirements and optimizing resource utilization under dynamically changing load. By actively monitoring both the bandwidth utilization of the network and the CPU and memory resources of the end hosts, the control plane autonomously provisions background or dynamic connections with different levels of quality of service using optical MEMS switching, and initiates live migrations of virtual machines to consolidate or distribute workload. Together, these functionalities provide flexibility, maximize efficiency in processing and transferring data, and enable energy and cost savings by scaling down the system when resources are not needed. An experimental testbed of three data center nodes was built to demonstrate the feasibility of these capabilities. Part 3 presents Lightbridge, a communications platform specifically designed to provide a more seamless integration between processor nodes and an optically switched network. It addresses some of the crucial issues related to optical switching faced by the work presented in the previous chapters. When optical switches perform switching operations, they change the physical topology of the network, and because they cannot buffer packets, certain optical circuits become temporarily unavailable. This raises the question of whether it is safe for end hosts to transmit packets at any given time. Lightbridge coordinates the switching and routing of optical circuits across the network by having the processors learn the current state of the optical network before transmitting packets, and by buffering packets when the optical circuit is not available. This part describes the details of Lightbridge, which consists of a loadable Linux kernel module along with supporting modifications to the Linux kernel to achieve the necessary functionality.
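    The coordination idea behind Lightbridge can be sketched as follows: before transmitting, an end host checks whether the optical circuit to the destination is currently established; if not, packets are buffered and flushed once the control plane reports the circuit up. Lightbridge itself is a Linux kernel module; this Python sketch and its names (CircuitAwareSender, on_topology_update) are hypothetical illustrations of the behavior described above, not the system's actual API.

```python
from collections import defaultdict, deque

class CircuitAwareSender:
    """Buffer packets while the optical circuit to a destination is down."""

    def __init__(self, transmit):
        self.up = set()                    # destinations with an established circuit
        self.buffers = defaultdict(deque)  # per-destination packet queues
        self.transmit = transmit           # underlying send function

    def send(self, dst, pkt):
        if dst in self.up:
            self.transmit(dst, pkt)        # circuit available: send immediately
        else:
            self.buffers[dst].append(pkt)  # hold until reconfiguration completes

    def on_topology_update(self, dst, is_up):
        """Called by the control plane after a switching operation."""
        if is_up:
            self.up.add(dst)
            while self.buffers[dst]:       # drain buffered packets in order
                self.transmit(dst, self.buffers[dst].popleft())
        else:
            self.up.discard(dst)

sender = CircuitAwareSender(lambda dst, pkt: print("->", dst, pkt))
sender.send("nodeB", "pkt1")               # buffered: no circuit yet
sender.on_topology_update("nodeB", True)   # circuit up: pkt1 is flushed
```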

    Planning Live-Migrations to Prepare Servers for Maintenance

    In a virtualized data center, server maintenance is a common but still critical operation. A prerequisite is to relocate the Virtual Machines (VMs) running on the production servers elsewhere to prepare those servers for maintenance. When the maintenance concerns several servers, this may require a costly relocation of many VMs, so the migration plan must be chosen wisely. This, however, requires mastering the numerous human, technical, and economic aspects that play a role in the design of a quality migration plan. In this paper, we study migration plans that an operator can decide on to prepare for a hardware upgrade or a server refresh on multiple servers. We exhibit performance bottlenecks and pitfalls that reduce a plan's efficiency, and we then discuss and validate possible improvements deduced from knowledge of the environment's peculiarities.
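    To make the planning problem concrete, the following hedged sketch relocates all VMs off servers scheduled for maintenance onto the remaining servers, greedily choosing the destination with the most free memory. A real planner, as the paper argues, must also weigh migration order, network contention, and per-VM migration cost; this toy version checks only memory capacity.

```python
def plan_evacuation(maintenance, hosts):
    """maintenance: {host: {vm: mem_gb}}; hosts: {host: free_mem_gb}.
    Returns a list of (vm, src, dst) migrations, or raises if impossible."""
    free = dict(hosts)
    plan = []
    vms = [(mem, vm, src) for src, vmmap in maintenance.items()
           for vm, mem in vmmap.items()]
    for mem, vm, src in sorted(vms, reverse=True):    # biggest VMs first
        dst = max(free, key=free.get)                 # host with most free memory
        if free[dst] < mem:
            raise RuntimeError(f"no capacity left for {vm}")
        free[dst] -= mem
        plan.append((vm, src, dst))
    return plan

print(plan_evacuation({"srv1": {"vmA": 8, "vmB": 4}}, {"srv2": 16, "srv3": 8}))
```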