
    Performance Benchmarking Physical and Virtual Linux Environments

    Virtualisation is a method of partitioning one physical computer into multiple “virtual” computers, giving each the appearance and capabilities of running on its own dedicated hardware. Each virtual system functions as a full-fledged computer and can be shut down and restarted independently. Xen is a paravirtualisation hypervisor developed at the University of Cambridge Computer Laboratory and is available under both free and commercial licences. Performance results comparing Xen to native Linux, as well as to other virtualisation tools such as VMware and User-Mode Linux (UML), were published by Barham et al. (2003) in the paper "Xen and the Art of Virtualization" at the Symposium on Operating Systems Principles in October 2003. Clark et al. (2004) performed a similar study and produced comparable results. In this thesis, a similar performance analysis of Xen is undertaken and extended to include OpenVZ, an alternative open-source virtualisation technology. The study made explicit use of open-source software and commodity hardware.
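    As a rough illustration of the kind of micro-benchmark such comparisons rely on (not the benchmark suite used in the thesis), the following C sketch times a loop of fork()/wait() operations. Process creation exercises the privileged paths where virtualisation overhead tends to show; the iteration count is an arbitrary choice.

    /* Hypothetical micro-benchmark: average cost of fork()+exit()+wait().
     * Process-creation primitives stress page-table updates and privileged
     * operations, where paravirtualisation overhead is usually visible. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const int iterations = 1000;        /* arbitrary sample size */
        struct timeval start, end;

        gettimeofday(&start, NULL);
        for (int i = 0; i < iterations; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);                   /* child exits immediately */
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            waitpid(pid, NULL, 0);          /* parent reaps the child */
        }
        gettimeofday(&end, NULL);

        double usec = (end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec);
        printf("fork+exit+wait: %.1f us per iteration\n", usec / iterations);
        return 0;
    }

    Running the same binary on the physical host, inside a Xen guest, and inside an OpenVZ container yields directly comparable per-iteration numbers.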

    Security Services on an Optimized Thin Hypervisor for Embedded Systems

    Virtualization has long been used in computer servers as a means to improve utilization, isolation and management. In recent years, embedded devices have become more powerful, increasingly connected and able to run applications on open-source commodity operating systems. It seems natural to apply these virtualization techniques to embedded systems, but with another objective. In computer servers, the main goal was to share powerful machines among multiple guests to maximize utilization. In embedded systems the needs are different: instead of utilization, virtualization can be used to support and increase security by providing isolation and multiple secure execution environments for its guests. This thesis presents the design and implementation of a security application, and demonstrates how a thin software virtualization layer developed by SICS can be used to increase security for a single FreeRTOS guest on an ARM platform. In addition, the thin hypervisor was analyzed for improvements with respect to footprint and overall performance. The selected improvements were then applied and verified with profiling tools and benchmark tests. Our results show that a thin hypervisor can be a very flexible and efficient software solution for providing a secure and isolated execution environment for security-critical applications. The applied optimizations reduced the footprint of the hypervisor by over 52%, while keeping the performance overhead at a manageable level.
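    The abstract does not spell out the guest/hypervisor interface, but thin hypervisors of this kind typically expose a small hypercall API to the guest. The sketch below is a hypothetical example of how a paravirtualised guest on ARMv7 might invoke such a service via the svc instruction; the call number, register convention and the secure_store service are invented for illustration and are not the SICS API.

    /* Hypothetical hypercall stub for a paravirtualised guest on ARMv7.
     * The guest traps into the hypervisor with an SVC instruction; the
     * hypervisor dispatches on the call number and returns a result in r0.
     * Call numbers and semantics below are illustrative only. */
    #include <stdint.h>

    #define HYPERCALL_SECURE_STORE  42u   /* invented call number */

    static inline uint32_t hypercall2(uint32_t nr, uint32_t a0, uint32_t a1)
    {
        register uint32_t r0 __asm__("r0") = a0;
        register uint32_t r1 __asm__("r1") = a1;
        register uint32_t r7 __asm__("r7") = nr;   /* assumed call-number register */

        __asm__ volatile("svc #0"
                         : "+r"(r0)
                         : "r"(r1), "r"(r7)
                         : "memory");
        return r0;                    /* hypervisor's return value */
    }

    /* Ask the (hypothetical) secure service to store a key in an isolated
     * execution environment that the FreeRTOS guest itself cannot read back. */
    uint32_t secure_store(uint32_t key_id, uint32_t value)
    {
        return hypercall2(HYPERCALL_SECURE_STORE, key_id, value);
    }

    Keeping the hypercall surface this small is one reason a thin hypervisor can stay auditable and retain a low footprint.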

    Secure Cloud Computing in Practice: Identifying Relevant Criteria for Evaluating the Practical Viability of Technological Approaches in the Cloud Computing Domain, with a Focus on Data Protection and Data Security

    This dissertation examines various requirements for secure cloud computing. In particular, it analyses existing research and solution approaches for protecting data and processes in cloud environments and assesses their practical viability. The basis for comparability is a set of specified criteria against which the examined technologies are evaluated. The main goal of this work is to show how technical research approaches can be compared in order to assess their suitability for practical use. To this end, relevant sub-areas of cloud computing security are first identified, their solution strategies are discussed in the context of this work, and state-of-the-art methods are evaluated. The statement on practical viability is derived from the ratio of the potential benefit to the expected costs associated with it. The potential benefit is defined as the combination of the performance, security and functionality offered by the technology under examination. For an objective assessment, these three quantities are composed of specified criteria whose values are taken directly from the research works under study. The expected costs are derived from cost keys for technology, operation and development. This work explains and assesses in detail the specified evaluation criteria as well as the interplay of the concepts introduced above. To better estimate suitability in practice, an adapted SWOT analysis is carried out for the identified relevant sub-areas; alongside the definition of the practical-viability statement, this constitutes the second contribution of this work. The concrete goal of this analysis is to increase comparability between the sub-areas and thereby improve strategic planning for the development of secure cloud computing solutions.
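    The abstract defines practical viability as the ratio of potential benefit to expected cost but gives no explicit aggregation rule. A minimal sketch of one plausible reading, with the weights w and the additive combination assumed purely for illustration, would be:

    P \;=\; \frac{N}{K} \;=\; \frac{w_L\,L + w_S\,S + w_F\,F}{K_T + K_B + K_E}

    where L, S and F are the criterion-based scores for performance, security and functionality, K_T, K_B and K_E are the cost keys for technology, operation and development, and a larger P would indicate better practical viability.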

    Improving energy efficiency of virtualized datacenters

    Nowadays, many organizations are increasingly adopting the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern in datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured so that they achieve high energy efficiency at high utilizations; for a low cost per computation, every datacenter server should therefore be driven to as high a utilization as possible. To fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits and merges VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since both are bound to the same physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server; the two resources can then vary independently, depending on their demand. My third and fourth contributions present practical systems that exploit the second contribution. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources because cloud customers are not able to correctly estimate the amount of resources their applications need. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM with low overhead and high accuracy. VMs can then be consolidated based on their working set size rather than on the booked memory. The drawback of this approach, however, is the risk of memory starvation: if one or more VMs experience a sharp increase in memory demand, the physical server may run out of memory, which is undesirable because the cloud platform can no longer provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by a different server in the rack; in the case of a peak in memory demand, my system allows the VM to allocate memory on a remote physical server.
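    As an illustration of the consolidation idea the abstract builds on, the following C sketch packs VMs onto servers with a first-fit-decreasing heuristic using estimated working-set sizes rather than booked memory. The sizes, the server capacity and the heuristic are placeholder assumptions, not the system described in the thesis.

    /* Hypothetical first-fit-decreasing consolidation of VMs by estimated
     * working-set size (WSS), instead of the memory the customer booked. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SERVER_MEM_MB 65536          /* assumed physical memory per server */

    static int cmp_desc(const void *a, const void *b)
    {
        return *(const int *)b - *(const int *)a;   /* largest WSS first */
    }

    int main(void)
    {
        /* Estimated working-set sizes in MB (illustrative values). */
        int wss[] = { 30000, 24000, 18000, 16000, 12000, 9000, 6000, 4000 };
        int n = sizeof(wss) / sizeof(wss[0]);
        int free_mem[64];                /* remaining capacity per active server */
        int servers = 0;

        qsort(wss, n, sizeof(int), cmp_desc);

        for (int i = 0; i < n; i++) {
            int placed = 0;
            for (int s = 0; s < servers; s++) {       /* first server that fits */
                if (free_mem[s] >= wss[i]) {
                    free_mem[s] -= wss[i];
                    placed = 1;
                    break;
                }
            }
            if (!placed) {                            /* open a new server */
                free_mem[servers] = SERVER_MEM_MB - wss[i];
                servers++;
            }
        }
        printf("VMs: %d, active servers needed: %d\n", n, servers);
        return 0;
    }

    Packing by working-set size typically needs fewer active servers than packing by booked memory, which is exactly what creates the memory-starvation risk that the fourth contribution addresses with remote memory.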

    Estimating memory locality for virtual machines on NUMA systems

    Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 59-61). By Alexandre Milouchev.
    The multicore revolution sparked another, similar movement towards scalable memory architectures. With most machines nowadays exhibiting non-uniform memory access (NUMA) properties, software and operating systems have seen the necessity of optimizing their memory management to take full advantage of such architectures. Type 1 (native) hypervisors, in particular, are required to extract maximum performance from the underlying hardware, as they often run dozens of virtual machines (VMs) on a single system and must meet the performance guarantees given to clients. While VM memory demand is often satisfied by CPU caches, memory-intensive workloads may induce a higher rate of last-level cache misses, requiring more accesses to RAM. On today's typical NUMA systems, accessing local RAM is approximately 50% faster than accessing remote RAM. We discovered that current-generation processors from major manufacturers do not provide inexpensive ways to characterize the memory locality achieved by VMs and their constituents. Instead, this thesis presents a series of techniques based on statistical sampling of memory that produce powerful estimates of NUMA locality and related metrics. Our estimates offer tremendous insight into inefficient placement of VMs and memory, and can serve as a solid basis for algorithms aiming at dynamic reorganization for improved locality, as well as for NUMA-aware CPU scheduling algorithms.
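    The thesis's hypervisor-level sampling machinery is not reproduced here, but the core idea of estimating locality from the NUMA placement of a random sample of pages can be sketched at user level on Linux with libnuma. The buffer, the sample size and the use of move_pages(2) as the query mechanism are assumptions of this illustration, not the technique evaluated in the thesis.

    /* Hypothetical user-level sketch: estimate what fraction of a sampled set
     * of pages is local to the NUMA node the calling thread currently runs on.
     * Build with: gcc -o locality locality.c -lnuma   (Linux with libnuma) */
    #define _GNU_SOURCE
    #include <numa.h>
    #include <numaif.h>
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_BYTES   (256UL << 20)    /* 256 MiB test buffer (assumed) */
    #define SAMPLES     512              /* pages sampled: accuracy/overhead knob */

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA not supported on this system\n");
            return 1;
        }
        long page = sysconf(_SC_PAGESIZE);
        char *buf = malloc(BUF_BYTES);
        if (buf == NULL)
            return 1;
        memset(buf, 1, BUF_BYTES);                   /* fault all pages in */

        void *pages[SAMPLES];
        int status[SAMPLES];
        for (int i = 0; i < SAMPLES; i++) {          /* random page sample */
            size_t off = ((size_t)rand() % (BUF_BYTES / page)) * page;
            pages[i] = buf + off;
        }
        /* With nodes == NULL, move_pages() only reports each page's node. */
        if (move_pages(0, SAMPLES, pages, NULL, status, 0) != 0) {
            perror("move_pages");
            return 1;
        }
        int local_node = numa_node_of_cpu(sched_getcpu());
        int local = 0;
        for (int i = 0; i < SAMPLES; i++)
            if (status[i] == local_node)
                local++;

        printf("estimated locality: %.1f%% of sampled pages on node %d\n",
               100.0 * local / SAMPLES, local_node);
        free(buf);
        return 0;
    }

    Sampling keeps the cost low: a few hundred page lookups give a usable locality estimate without scanning the whole address space.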

    TimeKeeper: a lightweight and scalable virtual time system for the Linux Kernel

    The ability to embed certain processes in virtual time is a very useful addition to the Linux Kernel. Each process may be directed to advance in virtual time either more quickly or more slowly than actual (real) time. This allows interactions between processes and physical devices to be artificially scaled; for example, a network may appear to be ten times faster to a process than it actually is. Virtual time is also useful when mixing emulation with a network simulator, in order to reduce the overall workload on the simulator: if virtual time progresses more slowly than real time, the simulator has additional time to process events, which allows for more precise packet timing and thus improves the fidelity of the experiment. The purpose of this thesis is to present TimeKeeper, a lightweight and scalable virtual time system for the Linux Kernel. TimeKeeper consists of a simple patch to the 3.10.9 Linux Kernel and a Linux Kernel Module. With TimeKeeper, a user is able to assign a specific time dilation factor to any process, as well as freeze and unfreeze a process (virtual time does not advance while a process is frozen). In addition, TimeKeeper supports emulation that is synchronized in virtual time, by grouping processes into an experiment in which their virtual times remain synchronized even when they advance at different rates. This thesis explores the motivation for TimeKeeper as well as potential use cases, and discusses TimeKeeper's API and design goals. With these design goals in mind, it then examines the implementation of TimeKeeper, including the specific modifications to Linux Kernel files and the underlying algorithms. Additionally, various experiments conducted with TimeKeeper are reviewed, covering synchronization efficiency, TimeKeeper overhead, and scalability. Finally, the integration and use of TimeKeeper with different network simulators is examined. TimeKeeper keeps the virtual times of multiple processes tightly synchronized while scaling to a very large number of processes, making it possible to execute far more complex simulations than previously possible on the same hardware.
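    The abstract describes per-process time dilation factors (TDFs) without giving the mapping formula. A common convention, assumed in the sketch below rather than taken from TimeKeeper's sources, is that a TDF of k maps k units of real time to one unit of virtual time, so the network looks k times faster to the dilated process. TimeKeeper implements this mapping inside the kernel; the C program below only models the arithmetic at user level.

    /* Hypothetical illustration of a time dilation factor (TDF):
     * virtual elapsed time = real elapsed time / TDF, measured from the
     * moment the process entered virtual time. */
    #include <stdio.h>
    #include <time.h>

    static double elapsed_sec(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        const double tdf = 10.0;            /* assumed dilation factor */
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);

        /* Stand-in for real work: sleep for two real seconds. */
        struct timespec pause = { .tv_sec = 2, .tv_nsec = 0 };
        nanosleep(&pause, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);
        double real = elapsed_sec(start, now);
        double virt = real / tdf;           /* dilated view of elapsed time */

        printf("real elapsed:    %.3f s\n", real);
        printf("virtual elapsed: %.3f s (TDF = %.0f)\n", virt, tdf);
        return 0;
    }

    Under this convention, a transfer that takes one real second appears to take only 0.1 virtual seconds, i.e. the network looks ten times faster, matching the example in the abstract.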