181 research outputs found

    Exposing Inter-Virtual Machine Networking Traffic to External Applications

    Virtualization is a powerful and fast-growing technology that is widely accepted throughout the computing industry. The Department of Defense has shifted its focus to virtualization and looks to take advantage of virtualized hardware, software, and networks. Virtual environments provide many benefits but create both administrative and security challenges. The core challenge in monitoring virtual networks is gaining visibility into inter-virtual-machine (VM) traffic that is passed within a single virtual host. This thesis attempts to gain visibility into, and evaluate the performance of, inter-VM traffic in a virtual environment. Separate virtual networks are produced using the VMware ESXi and Citrix XenServer platforms. The networks comprise three virtual hosts containing a Domain Controller VM, a Dynamic Host Configuration Protocol server VM, two management VMs, and four testing VMs. The configuration of virtual hosts, VMs, and networking components is identical on each network for a consistent comparison. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic is generated to test each network using custom batch files, PowerShell scripts, and Python code. Results show that standard virtual networks require additional resources (e.g., a local Intrusion Detection System) and more hands-on administration for real-time traffic visibility than a virtual network using a distributed switch. Traffic visibility within a standard network is limited to using a local packet-capture program such as pktcap-uw, tcpdump, or windump. Distributed networks, however, offer advanced options such as port mirroring and NetFlow that deliver higher visibility, but at the cost of higher latency for both TCP and UDP inter-VM traffic.
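
The thesis's traffic generators are custom batch, PowerShell, and Python scripts; as a rough illustration of the approach only (not the actual test code), a UDP sender along these lines could drive inter-VM tests. The function name, host, port, payload size, and counts are arbitrary assumptions:

```python
import socket
import time

def generate_udp_traffic(host="127.0.0.1", port=9999, payload_size=512, count=100):
    """Send `count` UDP datagrams to (host, port) and return elapsed seconds."""
    payload = b"x" * payload_size
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    for _ in range(count):
        sock.sendto(payload, (host, port))
    elapsed = time.perf_counter() - start
    sock.close()
    return elapsed
```

Pointing a sender like this at a VM on the same host, while capturing with pktcap-uw or tcpdump on the virtual switch, is the kind of setup the thesis uses to compare visibility and latency between standard and distributed networks.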

    Advancing Operating Systems via Aspect-Oriented Programming

    Operating system kernels are among the most complex pieces of software in existence today. Maintaining the kernel code and developing new functionality is increasingly complicated, since the number of required features has risen significantly, leading to side effects that can be introduced inadvertently by changing a piece of code that belongs to a completely different context. Software developers try to modularize their code base into separate functional units. Some of the functionality or "concerns" required in a kernel, however, does not fit into the given modularization structure; this code may then be spread over the code base and its implementation tangled with code implementing different concerns. These so-called "crosscutting concerns" are especially difficult to handle, since a change in a crosscutting concern implies that all relevant locations spread throughout the code base have to be modified. Aspect-Oriented Software Development (AOSD) is an approach that handles crosscutting concerns by factoring them out into separate modules. The "advice" code contained in these modules is woven into the original code base according to a pointcut description, a set of interaction points (joinpoints) with the code base. To be used in operating systems, AOSD requires tool support for the prevalent procedural programming style as well as support for weaving aspects. Many interactions in kernel code are dynamic, so in order to implement non-static behavior and improve performance, a dynamic weaver that deploys and undeploys aspects at system runtime is required. This thesis presents an extension of the C programming language to support AOSD. Based on this, two dynamic weaving toolkits – TOSKANA and TOSKANA-VM – are presented to permit dynamic aspect weaving in the monolithic NetBSD kernel as well as in a virtual-machine- and microkernel-based Linux kernel running on top of L4.
    Based on TOSKANA, applications for this dynamic aspect technology are discussed and evaluated. The thesis closes with a view of an aspect-oriented kernel structure that maintains coherency and handles crosscutting concerns using dynamic aspects while enhancing development methods through the use of domain-specific programming languages.
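
The advice/pointcut idea behind a dynamic weaver can be sketched in Python rather than the thesis's extended C; the `Weaver` class and its `deploy`/`undeploy` names below are illustrative, not the TOSKANA API, but they mirror the same lifecycle of attaching advice to a joinpoint at runtime and removing it again:

```python
class Weaver:
    """Deploys and undeploys 'advice' around a joinpoint at runtime."""

    def __init__(self):
        self._originals = {}  # remember replaced functions for undeploy

    def deploy(self, target, name, before=None, after=None):
        """Wrap target.name so that before/after advice runs around it."""
        original = getattr(target, name)
        self._originals[(target, name)] = original

        def woven(*args, **kwargs):
            if before:
                before(args, kwargs)          # before-advice
            result = original(*args, **kwargs)  # the original joinpoint
            if after:
                after(result)                 # after-advice
            return result

        setattr(target, name, woven)

    def undeploy(self, target, name):
        """Restore the original, un-advised function."""
        setattr(target, name, self._originals.pop((target, name)))
```

Deploying a `before` advice on, say, a driver's `read_block` then transparently logs every call; undeploying restores the original behavior, which is the non-static, runtime property the kernel weaver needs.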

    Unikraft: Fast, Specialized Unikernels the Easy Way

    Unikernels are famous for providing excellent performance in terms of boot times, throughput, and memory consumption, to name a few metrics. However, they are infamous for making it hard and extremely time-consuming to extract such performance, and for needing significant engineering effort in order to port applications to them. We introduce Unikraft, a novel micro-library OS that (1) fully modularizes OS primitives so that it is easy to customize the unikernel and include only relevant components and (2) exposes a set of composable, performance-oriented APIs in order to make it easy for developers to obtain high performance. Our evaluation using off-the-shelf applications such as nginx, SQLite, and Redis shows that running them on Unikraft results in a 1.7x-2.7x performance improvement compared to Linux guests. In addition, Unikraft images for these apps are around 1MB, require less than 10MB of RAM to run, and boot in around 1ms on top of the VMM time (total boot time 3ms-40ms). Unikraft is a Linux Foundation open source project and can be found at www.unikraft.org.

    Demonstration of the creation of virtual networks within the operator's scope

    Master's in Electronic Engineering and Telecommunications. The Internet was never designed to support the multitude of services and the number of users that it has today. Combined with ever-increasing requirements for performance, flexibility, and robustness, one can easily see that the current architecture matches neither the needs nor the demands of current and future users. Network virtualization arises as a potential solution to these issues. By letting multiple networks, optimized for different applications with different requirements and architectures, coexist and share the same infrastructure independently of it, new alternatives may be developed that bypass the known limitations of the current Internet. The ability to use the same physical infrastructure to hold multiple virtual networks is of great interest to network operators. By improving infrastructure utilization and increasing resource consolidation, higher profitability can be achieved.
    Beyond this competitive advantage, network virtualization enables new business models and the dissociation of the provided services from the physical network. With that goal in mind, this thesis, in the scope of the 4WARD project, presents a virtualization platform that enables the evaluation and solving of the problems inherent in the creation, monitoring, and management of virtual networks embedded in an experimental physical network. The developed dynamic monitoring features make it possible to detect failures, misconfigurations, and overloads. In addition, physical- and virtual-network discovery mechanisms were designed, simulated, and implemented. Regarding network management, mechanisms for acting upon virtual resources were implemented. Finally, in order to optimize and speed up virtual network creation, dynamic mapping algorithms and optimized node-creation processes were developed. To provide and test these features, a network virtualization platform was developed with a graphical user interface that gives users a simple, interactive, and intuitive way of designing and configuring virtual networks, as well as monitoring and managing them. Thanks to its modular design, the platform can serve as a base for future enhancements and added functionality. The attained results, besides implementing the desired features and proving the scalability and feasibility of the proposed algorithms, show that a single tool to create, monitor, and manage virtual networks is feasible.
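
The dynamic mapping of virtual networks onto a physical substrate can be sketched with a simple greedy embedding heuristic. This is an illustrative assumption about the flavor of algorithm involved, not the 4WARD platform's actual mapper; the function name and the CPU-only capacity model are invented:

```python
def map_virtual_network(virtual_nodes, physical_nodes):
    """Greedily map virtual nodes (name -> CPU demand) onto physical
    nodes (name -> free CPU), most demanding virtual nodes first."""
    free = dict(physical_nodes)  # work on a copy of remaining capacity
    mapping = {}
    # First-fit decreasing: place the largest demands first.
    for vnode, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)  # physical node with most headroom
        if free[host] < demand:
            raise RuntimeError(f"no capacity left for {vnode}")
        free[host] -= demand
        mapping[vnode] = host
    return mapping
```

Real embedding algorithms also have to map virtual links onto substrate paths with bandwidth constraints, which is what makes the problem hard and the thesis's dynamic, optimized variants worthwhile.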

    BPFabric: Data Plane Programmability for Software Defined Networks

    In its current form, OpenFlow, the de facto implementation of SDN, separates the network's control and data planes, allowing a central controller to alter the match-action pipeline using a limited set of fields and actions. To support new protocols, forwarding logic, telemetry, monitoring, or even middlebox-like functions, the programmability currently available in SDN is insufficient. In this paper, we introduce BPFabric, a platform-, protocol-, and language-independent architecture to centrally program and monitor the data plane. BPFabric leverages eBPF, a platform- and protocol-independent instruction set, to define the packet processing and forwarding functionality of the data plane. We introduce a control plane API that allows data plane functions to be deployed on-the-fly, reporting events of interest and exposing network internal state. We present a raw socket and a DPDK implementation of the design, the former for large-scale experimentation using environments such as Mininet and the latter for high-performance, low-latency deployments. We show through examples that functions unrealisable in OpenFlow can leverage this flexibility while achieving similar or better performance than today's static design.
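
For illustration only — BPFabric's data-plane functions are eBPF programs, not Python — the match-action idea can be sketched as a function from raw packets to forwarding decisions. The header layout parsed here is a standard IPv4 header; the decision strings and port number are invented:

```python
import struct

def process_packet(packet: bytes) -> str:
    """Toy match-action function: return a decision for a raw IPv4 packet."""
    if len(packet) < 20:               # shorter than a minimal IPv4 header
        return "DROP"
    version_ihl, = struct.unpack_from("!B", packet, 0)
    if version_ihl >> 4 != 4:          # not IPv4
        return "DROP"
    proto, = struct.unpack_from("!B", packet, 9)  # IPv4 protocol field
    if proto == 6:                     # TCP -> forward out port 1
        return "FORWARD:1"
    if proto == 17:                    # UDP -> mirror for telemetry
        return "MIRROR"
    return "CONTROLLER"                # punt unknown traffic upward
```

Because such a function can match on arbitrary bytes and emit arbitrary actions or events, it captures what the paper means by going beyond OpenFlow's fixed set of fields and actions.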

    Standard-Compliant Snapshotting for SystemC Virtual Platforms

    The steady increase in complexity of high-end embedded systems goes along with an increasingly complex design process. We are currently still in a transition phase from Hardware Description Language (HDL)-based design towards virtual-platform-based design of embedded systems. As design complexity rises faster than developer productivity, a gap forms. Restoring productivity while at the same time managing increased design complexity can be achieved by focusing on the development of new tools and design methodologies. In most application areas, high-level modelling languages such as SystemC are used in the early design phases. In modern software development, Continuous Integration (CI) is used to automatically test whether a submitted piece of code breaks functionality. Applying the CI concept to embedded system design and testing requires fast build and test execution times from the virtual platform framework. For this use case, the ability to save a specific state of a virtual platform becomes necessary. Saving and restoring specific states of a simulation requires the ability to serialize all data structures within the simulation models. Improving the frameworks and establishing better methods will only help to narrow the design gap if these changes are introduced with the needs of the engineers and developers in mind. Ultimately, it is their productivity that shall be improved. The ability to save the state of a virtual platform enables developers to run longer test campaigns that can even contain randomized test stimuli. If the saved states are modifiable, developers can inject faulty states into the simulation models. This work contributes a snapshotting extension to the SoCRocket virtual platform framework. The extension can be considered a reference implementation, as its use of current SystemC/TLM standards makes it compatible with other frameworks.
    Furthermore, integrating the UVM SystemC library into the framework enables test-driven development and fast validation of SystemC/TLM models using snapshots. These extensions narrow the design gap by helping designers, testers, and developers work more efficiently.
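
The core of such snapshotting is serializing every data structure a simulation model holds, so that a saved state can later be restored or deliberately modified for fault injection. A minimal Python sketch of the save/restore idea — the `TimerModel` and both function names are illustrative stand-ins, not the SoCRocket/SystemC API:

```python
import pickle

class TimerModel:
    """Toy simulation model with two pieces of serializable state."""
    def __init__(self):
        self.cycles = 0
        self.irq_pending = False

    def tick(self, n):
        self.cycles += n
        self.irq_pending = self.cycles % 100 == 0  # fire IRQ every 100 cycles

def save_snapshot(model) -> bytes:
    """Serialize all of the model's state into a byte blob."""
    return pickle.dumps(model.__dict__)

def restore_snapshot(model, blob: bytes):
    """Overwrite the model's state with a previously saved snapshot."""
    model.__dict__.update(pickle.loads(blob))
```

Because the blob is just serialized state, a test harness can also edit it before restoring, which is the fault-injection use case the abstract mentions.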

    Performance benchmarking physical and virtual Linux environments

    Virtualisation is a method of partitioning one physical computer into multiple "virtual" computers, giving each the appearance and capabilities of running on its own dedicated hardware. Each virtual system functions as a full-fledged computer and can be independently shut down and restarted. Xen is a form of paravirtualisation developed by the University of Cambridge Computer Laboratory and is available under both free and commercial licenses. Performance results comparing Xen to native Linux, as well as to other virtualisation tools such as VMware and User-Mode Linux, were published in the paper "Xen and the Art of Virtualization" at the Symposium on Operating Systems Principles in October 2003 by Barham et al. (2003). Clark et al. (2004) performed a similar study and produced similar results. In this thesis, a similar performance analysis of Xen is undertaken and extended to include a performance analysis of OpenVZ, an alternative open-source virtualisation technology. This study made explicit use of open-source software and commodity hardware.
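
Benchmarks of this kind boil down to timing a fixed workload on each environment and comparing throughput. A hypothetical sketch of one CPU-bound microbenchmark (the workload and iteration count are arbitrary, and real studies like those cited also measure I/O, network, and memory subsystems):

```python
import time

def cpu_benchmark(iterations=200_000):
    """Time a fixed integer workload; return operations per second."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc = (acc + i * i) % 1_000_003  # cheap, allocation-free arithmetic
    elapsed = time.perf_counter() - start
    return iterations / elapsed
```

Running the same function natively, under Xen, and under OpenVZ, and comparing the returned rates, is the basic shape of the comparison such a thesis performs.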

    HARE: Final Report

    This report documents the results of work done over a six-year period under the FAST-OS programs. The first effort, called Right-Weight Kernels (RWK), was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems to determine how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact:
    - Helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well
    - Created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development
    - Created a standard system measurement tool, Fixed Time Quantum (FTQ), which is widely used for measuring operating system impact on applications
    - Spurred the use of the 9p protocol in several organizations, including IBM
    - Built software in use at many companies, including IBM, Cray, and Google
    - Spurred the creation of alternative runtimes for use on HPC systems
    - Demonstrated that, with proper modifications, a general-purpose operating system can provide communication up to three times as effective as user-level libraries
    Open source was a key part of this work. The code developed for this project is in wide use and available in many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections.
    The rest of this report is organized as follows: first, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we then discuss infrastructure software; and we close with a description of the new NIX operating system, future work, and conclusions.
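
FTQ's core idea is simple to sketch: count how many units of work complete in each fixed-length time quantum, so that dips in the counts expose OS noise. A hypothetical Python version of that measurement loop (the real FTQ is a carefully tuned C benchmark; the quantum length and sample count here are arbitrary):

```python
import time

def ftq(quantum_s=0.001, samples=50):
    """Return one work count per fixed time quantum; low counts = noise."""
    counts = []
    for _ in range(samples):
        deadline = time.perf_counter() + quantum_s
        work = 0
        while time.perf_counter() < deadline:
            work += 1                  # one unit of work per loop iteration
        counts.append(work)
    return counts
```

On a quiet system the counts are nearly uniform; interruptions by the OS steal iterations from individual quanta, which is exactly the quantitative noise signal the RWK effort set out to measure.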
