
    Re-designing Dynamic Content Delivery in the Light of a Virtualized Infrastructure

    We explore the opportunities and design options enabled by novel SDN and NFV technologies by re-designing a dynamic Content Delivery Network (CDN) service. Our system, named MOSTO, provides performance levels comparable to those of a regular CDN, but does not require the deployment of a large distributed infrastructure. In the process of designing the system, we identify relevant functions that could be integrated into the future Internet infrastructure. Such functions greatly simplify the design and effectiveness of services such as MOSTO. We demonstrate our system using a mixture of simulation, emulation, and testbed experiments, and by realizing a proof-of-concept deployment in a planet-wide commercial cloud system.
    Comment: Extended version of the paper accepted for publication in the JSAC special issue on Emerging Technologies in Software-Driven Communication - November 201
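    The abstract above describes the system at an architectural level. As a purely illustrative sketch (not MOSTO's actual design or code), the toy request router below captures the core idea of a dynamic CDN: redirect each client to the nearest cache replica and instantiate a new virtual cache on demand when none is close enough. All names, coordinates, and thresholds are assumptions made up for the example.

```python
# Toy request router for a dynamic CDN: pick the nearest cache replica,
# or instantiate a virtual cache near the client if all replicas are too far.
# Hypothetical data and helper names; not MOSTO's implementation.
import math

CACHES = {"fra": (50.1, 8.7), "nyc": (40.7, -74.0)}  # replica -> (lat, lon)
MAX_DISTANCE = 20.0  # arbitrary closeness threshold for this sketch

def distance(a, b):
    # Plain Euclidean distance on coordinates; good enough for a toy example.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def launch_virtual_cache(location):
    """Placeholder for spinning up a cache VM/VNF near the client."""
    name = f"dyn-{len(CACHES)}"
    CACHES[name] = location
    return name

def route_request(client_location):
    name, loc = min(CACHES.items(), key=lambda kv: distance(kv[1], client_location))
    if distance(loc, client_location) > MAX_DISTANCE:
        name = launch_virtual_cache(client_location)
    return name

if __name__ == "__main__":
    print(route_request((48.8, 2.3)))     # close to the "fra" replica
    print(route_request((-33.9, 151.2)))  # far from all -> new dynamic cache
```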

    OSHI - Open Source Hybrid IP/SDN networking (and its emulation on Mininet and on distributed SDN testbeds)

    The introduction of SDN in IP backbones requires the coexistence of regular IP forwarding and SDN-based forwarding. The former is typically applied to best-effort Internet traffic; the latter can be used for different types of advanced services (VPNs, Virtual Leased Lines, Traffic Engineering...). In this paper we first introduce the architecture and the services of a "hybrid" IP/SDN networking scenario. Then we describe the design and implementation of an Open Source Hybrid IP/SDN (OSHI) node, which combines Quagga for OSPF routing and Open vSwitch for OpenFlow-based switching on Linux. The availability of tools for experimental validation and performance evaluation of SDN solutions is fundamental for the evolution of SDN. We provide a set of open source tools that facilitate the design of hybrid IP/SDN experimental networks, their deployment on Mininet or on distributed SDN research testbeds, and their testing. Finally, using the provided tools, we evaluate key performance aspects of the proposed solutions. The OSHI development and test environment is available as a downloadable VirtualBox VM image.
    Comment: Final version (last updated August 2014)
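    For readers unfamiliar with the emulation environment mentioned above, the snippet below is a minimal Mininet script (an assumption-level sketch, not part of the OSHI tooling) showing the kind of Open vSwitch building block an OSHI topology is composed of: two hosts attached to one OVS bridge under a local OpenFlow controller. It requires Mininet and Open vSwitch and must be run with root privileges.

```python
#!/usr/bin/env python
# Minimal Mininet/Open vSwitch example: two hosts behind one OVS bridge.
# Illustrative only; the actual OSHI node also runs Quagga for OSPF routing.
from mininet.net import Mininet
from mininet.node import Controller, OVSSwitch
from mininet.log import setLogLevel

def run():
    net = Mininet(controller=Controller, switch=OVSSwitch)
    net.addController('c0')                  # local OpenFlow controller
    s1 = net.addSwitch('s1')                 # Open vSwitch bridge
    h1 = net.addHost('h1', ip='10.0.0.1/24')
    h2 = net.addHost('h2', ip='10.0.0.2/24')
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    net.start()
    net.pingAll()                            # basic connectivity check
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    run()
```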

    Linux-based virtualization for HPC clusters

    There has been increasing interest in virtualization in the HPC community, as it would make it possible to share computing resources between users easily and efficiently, and would provide a simple solution for checkpointing. However, virtualization raises a number of interesting questions: about performance and overhead, of course, but also about the fairness of the sharing. In this work, we evaluate the suitability of KVM virtual machines in this context by comparing them with solutions based on Xen. We also outline areas where improvements are needed, to provide directions for future work.
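    As an illustration of the measurement methodology such a comparison relies on (a generic sketch, not the evaluation code from the paper), one can run the same benchmark natively and inside each guest type and compare wall-clock times. The commands and host names below are placeholders.

```python
# Generic benchmark-timing harness: run the same command in each environment
# several times and report the mean and standard deviation of wall-clock time.
# Placeholder commands; in practice each entry would invoke the benchmark
# natively or inside a KVM/Xen guest (e.g. over ssh).
import statistics
import subprocess
import time

ENVIRONMENTS = {
    "native": ["./hpc_benchmark"],
    "kvm":    ["ssh", "kvm-guest", "./hpc_benchmark"],
    "xen":    ["ssh", "xen-guest", "./hpc_benchmark"],
}

def time_once(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def compare(runs=5):
    for name, cmd in ENVIRONMENTS.items():
        samples = [time_once(cmd) for _ in range(runs)]
        print(f"{name}: mean {statistics.mean(samples):.2f}s "
              f"(stdev {statistics.pstdev(samples):.2f}s)")

if __name__ == "__main__":
    compare()
```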

    Cloud engineering is search based software engineering too

    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud: 'SBSE in the cloud'. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of 'SBSE for the cloud', formulating cloud computing challenges in ways that can be addressed using SBSE.
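    To make the 'SBSE for the cloud' formulation concrete, the toy example below (not taken from the paper; all demands, capacities and the fitness function are made-up assumptions) casts VM-to-host allocation as a search problem and attacks it with simple hill climbing, the kind of computational search SBSE applies.

```python
# Toy search-based allocation: assign VMs to hosts so that as few hosts as
# possible are used, without exceeding per-host CPU capacity.
import random

VM_CPUS = [2, 4, 1, 8, 3, 2, 6, 1]   # CPU demand of each VM (example values)
HOST_CAPACITY = 8
NUM_HOSTS = len(VM_CPUS)             # worst case: one host per VM

def fitness(assignment):
    """Lower is better: hosts used, with a heavy penalty for overloaded hosts."""
    load = [0] * NUM_HOSTS
    for vm, host in enumerate(assignment):
        load[host] += VM_CPUS[vm]
    overload = sum(max(0, l - HOST_CAPACITY) for l in load)
    hosts_used = sum(1 for l in load if l > 0)
    return hosts_used + 100 * overload

def hill_climb(iterations=10_000, seed=0):
    rng = random.Random(seed)
    current = [rng.randrange(NUM_HOSTS) for _ in VM_CPUS]
    best = fitness(current)
    for _ in range(iterations):
        neighbour = list(current)
        neighbour[rng.randrange(len(VM_CPUS))] = rng.randrange(NUM_HOSTS)
        score = fitness(neighbour)
        if score <= best:
            current, best = neighbour, score
    return current, best

if __name__ == "__main__":
    assignment, score = hill_climb()
    print("assignment:", assignment, "fitness:", score)
```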

    Performance analysis of HPC applications in the cloud

    The scalability of High Performance Computing (HPC) applications depends heavily on the efficient support of network communications in virtualized environments. However, Infrastructure as a Service (IaaS) providers are more focused on deploying systems with higher computational power interconnected via high-speed networks than on improving the scalability of the communication middleware. This paper analyzes the main performance bottlenecks in HPC application scalability on the Amazon EC2 Cluster Compute platform: (1) evaluating the communication performance on shared memory and a virtualized 10 Gigabit Ethernet network; (2) assessing the scalability of representative HPC codes, the NAS Parallel Benchmarks, using a significant number of cores (up to 512); (3) analyzing the new cluster instances (CC2) in terms of single-instance performance, scalability and cost-efficiency; (4) suggesting techniques for reducing the impact of the virtualization overhead on the scalability of communication-intensive HPC codes, such as direct access of the Virtual Machine to the network and reducing the number of processes per instance; and (5) proposing the combination of message-passing with multithreading as the most scalable and cost-effective option for running HPC applications on the Amazon EC2 Cluster Compute platform.
    Ministerio de Ciencia e Innovación; TIN2010-16735. Ministerio de Economía y Competitividad; AP2010-4348.
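    The hybrid model recommended in point (5) can be illustrated with a short mpi4py sketch (an assumption-level example, not the paper's code): one MPI process per instance handles inter-node messaging, while threads inside each process use the remaining cores, so fewer processes cross the virtualized network. Note that pure-Python threads are limited by the GIL; real HPC codes would run GIL-releasing numeric kernels (e.g. NumPy or C extensions) inside the threads.

```python
# Hybrid message-passing + multithreading sketch.
# Run with e.g.: mpirun -np 2 python hybrid.py   (requires mpi4py)
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def partial_sum(chunk):
    return sum(x * x for x in chunk)

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank (one per instance) works on its own slice of the data.
    items = list(range(rank * 1_000_000, (rank + 1) * 1_000_000))
    chunks = [items[i::4] for i in range(4)]          # 4 worker threads per rank

    with ThreadPoolExecutor(max_workers=4) as pool:
        local = sum(pool.map(partial_sum, chunks))

    total = comm.allreduce(local, op=MPI.SUM)          # one collective per rank
    if rank == 0:
        print(f"sum of squares across {size} ranks: {total}")

if __name__ == "__main__":
    main()
```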

    Jitsu: Just-in-time summoning of unikernels

    Network latency is a problem for all cloud services. It can be mitigated by moving computation out of remote datacenters by rapidly instantiating local services near the user. This requires an embedded cloud platform on which to deploy multiple applications securely and quickly. We present Jitsu, a new Xen toolstack that satisfies the demands of secure multi-tenant isolation on resource-constrained embedded ARM devices. It does this by using unikernels: lightweight, compact, single address space, memory-safe virtual machines (VMs) written in a high-level language. Using fast shared memory channels, Jitsu provides a directory service that launches unikernels in response to network traffic and masks boot latency. Our evaluation shows Jitsu to be a power-efficient and responsive platform for hosting cloud services in the edge network while preserving the strong isolation guarantees of a type-1 hypervisor.
    The research leading to these results received funding from the European Union's Seventh Framework Programme FP7/2007–2013 under the Trilogy 2 project (grant agreement no. 317756) and the User Centric Networking project (grant agreement no. 611001), and from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract FA8750-11-C-0249. This is the author accepted manuscript. The final version is available from USENIX via https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/madhavapedd
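    To convey the launch-on-demand idea described above, here is a deliberately simplified Python sketch (not Jitsu, which is an OCaml/Xen toolstack launching unikernels): a front-end listener that starts a backend process only when the first request arrives and then forwards traffic to it. The backend command and ports are illustrative assumptions, and the crude sleep stands in for the boot-latency masking that Jitsu actually performs.

```python
# Toy "summon on demand" proxy: launch the backend lazily on first request.
import socket
import subprocess
import time

LISTEN_PORT = 8080
BACKEND_PORT = 9090
# A plain HTTP server stands in for the unikernel being summoned.
BACKEND_CMD = ["python3", "-m", "http.server", str(BACKEND_PORT)]

backend = None

def ensure_backend():
    """Start the backend if it is not already running."""
    global backend
    if backend is None or backend.poll() is not None:
        backend = subprocess.Popen(BACKEND_CMD)
        time.sleep(0.5)  # crude wait; Jitsu masks boot latency far more cleverly

def proxy(client):
    ensure_backend()
    with socket.create_connection(("127.0.0.1", BACKEND_PORT)) as upstream:
        upstream.sendall(client.recv(65536))          # forward the request
        while chunk := upstream.recv(65536):          # stream the response back
            client.sendall(chunk)

def main():
    with socket.create_server(("0.0.0.0", LISTEN_PORT)) as server:
        while True:
            client, _ = server.accept()
            with client:
                proxy(client)

if __name__ == "__main__":
    main()
```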

    CloudMon: a resource-efficient IaaS cloud monitoring system based on networked intrusion detection system virtual appliances

    The networked intrusion detection system virtual appliance (NIDS-VA), also known as a virtualized NIDS, plays an important role in the protection and safeguarding of IaaS cloud environments. However, it is nontrivial to guarantee both the performance of NIDS-VAs and the resource efficiency of cloud applications, because both share computing resources in the same cloud environment. To address this challenge and trade-off, we propose a novel system, named CloudMon, which enables dynamic resource provisioning and live placement for NIDS-VAs in IaaS cloud environments. CloudMon provides two techniques to maintain high resource efficiency of IaaS cloud environments without degrading the performance of NIDS-VAs and other virtual machines (VMs). The first technique is a virtual machine monitor based resource provisioning mechanism, which can minimize the resource usage of a NIDS-VA with a given performance guarantee. It uses a fuzzy model to characterize the complex relationship between the performance and the resource demands of a NIDS-VA, and develops an online fuzzy controller to adaptively control the resource allocation for NIDS-VAs under varying network traffic. The second is a global resource scheduling approach for optimizing the resource efficiency of the entire cloud environment. It leverages VM migration to dynamically place NIDS-VAs and VMs, and an online VM mapping algorithm is designed to maximize the resource utilization of the entire cloud environment. Our virtual machine monitor based resource provisioning mechanism has been evaluated by conducting comprehensive experiments based on the Xen hypervisor and the Snort NIDS in a real cloud environment. The results show that the proposed mechanism can allocate resources for a NIDS-VA on demand while still satisfying its performance requirements. We also verify the effectiveness of our global resource scheduling approach by comparing it with two classic vector packing algorithms; the results show that our approach improves the resource utilization of cloud environments and reduces the number of in-use NIDS-VAs and physical hosts.
    The authors gratefully acknowledge the anonymous reviewers for their helpful suggestions and insightful comments to improve the quality of the paper. The work reported in this paper has been partially supported by the National Nature Science Foundation of China (No. 61202424, 61272165, 91118008), the China 863 program (No. 2011AA01A202), the Natural Science Foundation of Jiangsu Province of China (BK20130528) and the China 973 Fundamental R&D Program (2011CB302600).
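    The abstract compares CloudMon's global scheduling against classic vector packing baselines. The snippet below illustrates such a baseline, first-fit-decreasing vector packing of VMs with (cpu, mem) demand vectors onto identical hosts; it is a generic illustration with made-up values, not CloudMon's online mapping algorithm.

```python
# First-fit-decreasing vector packing: place VMs (including NIDS-VAs) onto
# as few hosts as possible without exceeding per-host (cpu, mem) capacity.
HOST_CAPACITY = (1.0, 1.0)            # normalised (cpu, mem) per host

VMS = {                               # vm name -> (cpu, mem) demand (example values)
    "nids-va-1": (0.40, 0.30),
    "web-1":     (0.25, 0.20),
    "db-1":      (0.35, 0.50),
    "nids-va-2": (0.30, 0.25),
    "batch-1":   (0.20, 0.60),
}

def fits(load, demand, capacity=HOST_CAPACITY):
    return all(l + d <= c for l, d, c in zip(load, demand, capacity))

def first_fit_decreasing(vms):
    hosts = []                        # each host: {"load": [cpu, mem], "vms": [...]}
    for name, demand in sorted(vms.items(), key=lambda kv: sum(kv[1]), reverse=True):
        for host in hosts:
            if fits(host["load"], demand):
                host["load"] = [l + d for l, d in zip(host["load"], demand)]
                host["vms"].append(name)
                break
        else:                         # no existing host can take this VM
            hosts.append({"load": list(demand), "vms": [name]})
    return hosts

if __name__ == "__main__":
    for i, host in enumerate(first_fit_decreasing(VMS)):
        print(f"host{i}: {host['vms']} load={host['load']}")
```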

    Optimizing Virtual Machine I/O Performance in Virtualized Cloud by Differentiated-frequency Scheduling and Functionality Offloading

    Many enterprises are increasingly moving their applications to private cloud environments or public cloud platforms. A key technology driving cloud computing is virtualization, which can serve multiple VMs on one physical machine, providing better management flexibility and significant savings in operational costs. However, one important consequence of virtualized hosts in the cloud is the negative impact virtualization has on the I/O performance of the applications running in the VMs.