
    HIL: designing an exokernel for the data center

    We propose a new exokernel-like layer to allow mutually untrusting, physically deployed services to efficiently share the resources of a data center. We believe that such a layer offers not only efficiency gains, but may also enable new economic models, new applications, and new security-sensitive uses. A prototype (currently in active use) demonstrates that the proposed layer is viable and can support a variety of existing provisioning tools and use cases. Partial support for this work was provided by the MassTech Collaborative Research Matching Grant Program, National Science Foundation awards 1347525 and 1149232, as well as by the several commercial partners of the Massachusetts Open Cloud who may be found at http://www.massopencloud.or
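    To make the idea concrete, the sketch below shows what a minimal allocation interface for such an isolation layer might look like: whole physical nodes are handed to mutually untrusting projects, while provisioning itself is left to existing tools. The class and method names are illustrative assumptions, not HIL's actual API.

```python
# Illustrative sketch only: a minimal allocation interface in the spirit of the
# layer described above. Class and method names are hypothetical, not HIL's API.
class IsolationLayer:
    def __init__(self, nodes):
        # nodes: physical node identifiers available in the data center
        self.free_nodes = set(nodes)
        self.projects = {}          # project name -> set of allocated nodes

    def create_project(self, name):
        self.projects.setdefault(name, set())

    def allocate_node(self, project):
        # Hand a whole physical node to a project; the layer only tracks
        # ownership and isolation, leaving provisioning to existing tools.
        if not self.free_nodes:
            raise RuntimeError("no free nodes")
        node = self.free_nodes.pop()
        self.projects[project].add(node)
        return node

    def release_node(self, project, node):
        self.projects[project].discard(node)
        self.free_nodes.add(node)


layer = IsolationLayer(nodes=["node01", "node02", "node03"])
layer.create_project("hpc-cluster")
n = layer.allocate_node("hpc-cluster")
print(n, layer.free_nodes)
```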

    Creating architecture for a digital information system leveraging virtual environments

    Abstract. The topic of the thesis was the creation of a proof-of-concept digital information system that utilizes virtual environments. The focus was on finding a working design that can then be expanded upon. The research was conducted using design science research, by creating the information system as the artifact. The research was conducted for Nokia Networks in Oulu, Finland, referred to in this document as “the target organization”. An information system is a collection of distributed computing components that come together to create value for an organization. Information system architecture is generally derived from enterprise architecture and consists of data, technical and application architectures. Data architecture outlines the data that the system uses, and the policies related to its usage, manipulation and storage. Technical architecture relates to various technological areas, such as networking and protocols, as well as any environmental factors. The application architecture is derived by deconstructing the applications used in the operation of the information system. Virtual reality is an experience where the concepts of presence, autonomy and interaction come together to create an immersive alternative to a regular display-based computer environment. The most typical form of virtual reality consists of a head-mounted device, controllers and movement-tracking base stations. The user’s head and body movements can be tracked, which changes their position in the virtual environment. The proof-of-concept information system architecture used a multi-server-based solution, where one central physical server hosted multiple virtual servers. The system consisted of a website, which served as the knowledge center and from which the client software could be downloaded. The client software was the authorization portal, which determined the virtual environments that were available to the user. The virtual reality application included functionality that enables cooperative, virtualized use of various Nokia products in immersive environments. The system was tested in working situations, such as during exhibitions with customers. The proof-of-concept system fulfilled many of the functional requirements set for it, allowing for cooperation in virtual reality. Additionally, a rudimentary model for access control was available in the designed system. The shortcomings of the system were related to areas such as security and scaling, which can be further developed by introducing a cloud-hosted environment to the architecture.
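    As an illustration of the access-control idea mentioned above (the client acting as an authorization portal that decides which virtual environments a user may enter), the following sketch assumes a simple role-based mapping; the roles, users and environment names are hypothetical and not the target organization's actual data model.

```python
# Minimal sketch, assuming a role-based mapping from users to the virtual
# environments they may enter; names and roles are hypothetical.
ROLE_ENVIRONMENTS = {
    "customer": ["product-demo"],
    "engineer": ["product-demo", "lab-environment"],
    "admin": ["product-demo", "lab-environment", "staging"],
}

USER_ROLES = {"alice": "engineer", "bob": "customer"}


def available_environments(user: str) -> list[str]:
    """Return the virtual environments the authorization portal would offer."""
    role = USER_ROLES.get(user)
    return ROLE_ENVIRONMENTS.get(role, [])


print(available_environments("alice"))   # ['product-demo', 'lab-environment']
```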

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that efficiently scale to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the “trust” properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating-system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems, namely memory and I/O bandwidth. Second, this solution technique better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because, at the hypervisor level, there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionalities for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality of the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control. Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim.
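    The point about a virtual device adapting what it exposes to a guest can be illustrated with a small sketch. It assumes a hypothetical block-device service that returns real data only to trusted guests and an obfuscated digest otherwise; the interface is invented for illustration and is not a real hypervisor API.

```python
# Illustrative sketch of a virtualization service that adapts what a guest sees
# based on its trust level; the interface is hypothetical.
import hashlib


class VirtualBlockService:
    def __init__(self, backing_store: dict, guest_trust: dict):
        self.backing_store = backing_store   # block id -> bytes
        self.guest_trust = guest_trust       # guest id -> "trusted" | "untrusted"

    def read_block(self, guest: str, block_id: int) -> bytes:
        data = self.backing_store[block_id]
        if self.guest_trust.get(guest) == "trusted":
            return data
        # Untrusted guests receive obfuscated data (here, just a digest).
        return hashlib.sha256(data).digest()


svc = VirtualBlockService({0: b"secret payroll data"},
                          {"vm1": "trusted", "vm2": "untrusted"})
print(svc.read_block("vm1", 0))
print(svc.read_block("vm2", 0))
```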

    Power Modeling and Resource Optimization in Virtualized Environments

    The provisioning of on-demand cloud services has revolutionized the IT industry. This emerging paradigm has drastically increased the growth of data centers (DCs) worldwide. Consequently, this rising number of DCs is contributing to a large share of the world's total power consumption. This has directed the attention of researchers and service providers to investigate power-aware solutions for the deployment and management of these systems and networks. However, these solutions can be beneficial only if derived from a precisely estimated power consumption at run-time. Accuracy in power estimation is a challenge in virtualized environments due to the lack of certainty about the actual resources consumed by virtualized entities and about their impact on applications' performance. The heterogeneous cloud, composed of multi-tenancy architecture, has also raised several management challenges for both service providers and their clients. Task scheduling and resource allocation in such a system are considered an NP-hard problem. The inappropriate allocation of resources causes the under-utilization of servers, hence reducing throughput and energy efficiency. In this context, the cloud framework needs an effective management solution to maximize the use of available resources and capacity, and also to reduce the impact of its carbon footprint on the environment through reduced power consumption. This thesis addresses the issues of power measurement and resource utilization in virtualized environments as two primary objectives. At first, a survey of prior work on server power modeling and methods in virtualization architectures is carried out. This helps identify the key challenges that limit the precision of power estimation when dealing with virtualized entities. A different systematic approach is then presented to improve the prediction accuracy in these networks, considering the resource abstraction at different architectural levels. Resource usage monitoring at the host and guest levels helps in identifying the difference in performance between the two. Using virtual Performance Monitoring Counters (vPMCs) at the guest level provides detailed information that helps in improving the prediction accuracy and can be further used for resource optimization, consolidation and load balancing. Later, the research also targets the critical issue of optimal resource utilization in cloud computing. This study seeks a generic, robust but simple approach to deal with resource allocation in cloud computing and networking. Inappropriate scheduling in the cloud causes under- and over-utilization of resources, which in turn increases the power consumption and also degrades the system performance. This work first addresses some of the major challenges related to task scheduling in heterogeneous systems. After a critical analysis of existing approaches, this thesis presents a rather simple scheduling scheme based on the combination of heuristic solutions. Improved resource utilization with reduced processing time can be achieved using the proposed energy-efficient scheduling algorithm.
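    A common way to build such run-time power estimators, and presumably close in spirit to the counter-based approach above, is a linear model over utilization metrics of the form P = p_idle + sum_i(w_i * c_i), fitted by least squares. The sketch below uses invented training samples and a three-feature model; the thesis's actual model, features and fitting method may differ.

```python
# Minimal sketch of a counter-based linear power model fitted with ordinary
# least squares. Training data is invented for illustration.
import numpy as np

# Rows: samples of normalized vPMC-style utilization metrics (cpu, memory, disk I/O).
X = np.array([
    [0.10, 0.20, 0.05],
    [0.40, 0.30, 0.10],
    [0.80, 0.60, 0.20],
    [0.95, 0.70, 0.40],
])
watts = np.array([62.0, 78.0, 105.0, 121.0])   # measured server power

# Add an intercept column so the fit also estimates idle power.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, watts, rcond=None)
p_idle, weights = coef[0], coef[1:]


def predict_power(sample):
    # P = p_idle + sum_i(w_i * c_i)
    return p_idle + float(np.dot(weights, sample))


print(round(predict_power([0.5, 0.4, 0.15]), 1))
```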

    Implementation of a NFV monitoring system for reactive environments

    This work researches existing monitoring and alerting solutions and defines a suitable architecture, design and implementation of a complete and customizable monitoring and alerting framework used to inspect, and notify about, specific conditions on dynamically instantiated applications operating in the network. Such Network Services (NS) are used in the Network Function Virtualization (NFV) architecture, allowing rapid instantiation and configuration of virtualized environments that handle network configuration. This design and implementation seek to give the network operator more flexibility and dynamicity to monitor custom or generic metrics and trigger notifications based on custom thresholds, without depending on the Virtual Network Function (VNF) developer to adapt its descriptor and onboard each version into the NFV Orchestrator (NFVO) prior to each usage. The framework developed here follows a modular architecture that separates the monitoring and alerting policies from the onboarding and instantiation process of the Network Functions. The architecture also facilitates integration with other systems and adaptation of its functionality to an operational environment, thanks to its decoupled and modular approach. The presented work considers a monitoring and alerting framework that is especially useful for dynamic environments such as those relying on NFV, like those in the EU H2020 PALANTIR project. There, the framework is used to help assess the correct behavior of the Security NSs that are used to prevent or mitigate security anomalies in the network of each client. If abnormalities are found, remediation measures take place to replace the potentially compromised NS instances with clean, appropriate ones. Sustainable Development Goals: 9 - Industry, Innovation and Infrastructure.
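    The sketch below illustrates the threshold-based alerting idea: rules are defined by the operator at run-time, decoupled from VNF descriptors and onboarding. The rule structure, metric names and thresholds are hypothetical and do not correspond to the framework's real interfaces.

```python
# Minimal sketch of operator-defined threshold rules evaluated against metric
# samples collected from running NS instances. Names are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class AlertRule:
    metric: str
    threshold: float
    comparator: Callable[[float, float], bool]   # e.g. value > threshold
    message: str


def evaluate(rules: list[AlertRule], sample: dict[str, float]) -> list[str]:
    alerts = []
    for rule in rules:
        value = sample.get(rule.metric)
        if value is not None and rule.comparator(value, rule.threshold):
            alerts.append(f"{rule.message}: {rule.metric}={value}")
    return alerts


rules = [AlertRule("cpu_usage", 0.9, lambda v, t: v > t, "NS instance overloaded")]
print(evaluate(rules, {"cpu_usage": 0.95}))
```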

    Understanding and Leveraging Virtualization Technology in Commodity Computing Systems

    Commodity computing platforms are imperfect, requiring various enhancements for performance and security purposes. In the past decade, virtualization technology has emerged as a promising trend for commodity computing platforms, ushering in many opportunities to optimize the allocation of hardware resources. However, many abstractions offered by virtualization not only make enhancements more challenging, but also complicate the proper understanding of virtualized systems. The current understanding and analysis of these abstractions are far from satisfactory. This dissertation aims to tackle this problem from a holistic view, by systematically studying the system behaviors. The focus of our work lies in the performance implications and security vulnerabilities of a virtualized system. We start with the first abstraction, intensive memory multiplexing for I/O of Virtual Machines (VMs), and present a new technique, called Batmem, to effectively reduce the memory multiplexing overhead of VMs and emulated devices by optimizing the operations of the conventional emulated Memory Mapped I/O in hypervisors. Then we analyze another particular abstraction, a nested file system, and attempt to both quantify and understand the crucial aspects of performance in a variety of settings. Our investigation demonstrates that the choice of a file system at both the guest and hypervisor levels has a significant impact upon I/O performance. Finally, leveraging utilities to manage VM disk images, we present a new patch management framework, called Shadow Patching, to achieve effective software updates. This framework allows system administrators to still take the offline patching approach but retain most of the benefits of live patching by using commonly available virtualization techniques. To demonstrate the effectiveness of the approach, we conduct a series of experiments applying a wide variety of software patches. Our results show that our framework incurs only a small overhead in running systems, but can significantly reduce the maintenance window.
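    The offline patching workflow described above can be sketched as three steps: clone the VM's disk image while the VM keeps running, apply the patch to the clone, and swap the images at the next convenient point. The helper functions and paths below are hypothetical placeholders rather than the paper's actual tooling.

```python
# Illustrative sketch of an offline "shadow" patching workflow: clone the VM's
# disk image, patch the clone while the original keeps running, then swap
# images at a convenient point. Helpers and paths are hypothetical.
import shutil
from pathlib import Path


def clone_image(image: Path) -> Path:
    shadow = image.with_suffix(".shadow" + image.suffix)
    shutil.copy2(image, shadow)          # offline copy; the VM keeps running
    return shadow


def apply_patch(shadow_image: Path, patch: Path) -> None:
    # Placeholder: in practice this would mount the image and run the
    # distribution's package/update tooling against the mounted tree.
    print(f"applying {patch} to {shadow_image}")


def swap_at_next_reboot(image: Path, shadow_image: Path) -> None:
    backup = image.with_suffix(".orig" + image.suffix)
    image.rename(backup)
    shadow_image.rename(image)           # the patched image boots next time


# Usage (paths are hypothetical):
# shadow = clone_image(Path("/var/lib/vms/app.qcow2"))
# apply_patch(shadow, Path("security-update.patch"))
# swap_at_next_reboot(Path("/var/lib/vms/app.qcow2"), shadow)
```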

    Beyond The Cloud, How Should Next Generation Utility Computing Infrastructures Be Designed?

    To accommodate the ever-increasing demand for Utility Computing (UC) resources, while taking into account both energy and economical issues, the current trend consists in building larger and larger data centers in a few strategic locations. Although such an approach makes it possible to cope with the current demand while continuing to operate UC resources through centralized software systems, it is far from delivering sustainable and efficient UC infrastructures. We claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, considering locality as a primary concern. We propose to leverage any facilities available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. In this paper, we advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable. By deploying and using such a LUC Operating System on backbones, our ultimate vision is to make it possible to host and operate a large part of the Internet within its own internal structure: a scalable and nearly infinite set of resources delivered by any computing facilities forming the Internet, ranging from the larger hubs operated by ISPs, governments and academic institutions down to any idle resources that may be provided by end-users. Unlike previous research on distributed operating systems, we propose to consider virtual machines (VMs) instead of processes as the basic element. System virtualization offers several capabilities that increase the flexibility of resource management, allowing novel decentralized schemes to be investigated.
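    To illustrate the “locality as a primary concern” principle with VMs as the basic element, the sketch below places each VM on the facility closest to its user that still has capacity. Site names, coordinates and the distance function are invented for illustration; they are not part of the proposal itself.

```python
# Illustrative sketch of locality-aware VM placement: place each VM on the
# facility closest to its user that still has capacity. Data is hypothetical.
def place_vm(user_location, vm_cores, sites):
    """sites: list of dicts with 'name', 'location' (x, y) and 'free_cores'."""
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    candidates = [s for s in sites if s["free_cores"] >= vm_cores]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: distance(user_location, s["location"]))
    best["free_cores"] -= vm_cores
    return best["name"]


sites = [
    {"name": "isp-hub-paris", "location": (0, 0), "free_cores": 64},
    {"name": "campus-nantes", "location": (3, 4), "free_cores": 16},
]
print(place_vm(user_location=(2, 3), vm_cores=8, sites=sites))  # campus-nantes
```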

    Performance analysis of multi-institutional data sharing in the Clouds4Coordination system

    Cloud computing is used extensively in Architecture/Engineering/Construction projects for storing data and running simulations on building models (e.g. energy efficiency/environmental impact). With the emergence of multi-Clouds it has become possible to link such systems and create a distributed cloud environment. A multi-Cloud environment enables each organisation involved in a collaborative project to maintain its own computational infrastructure/system (with the associated data), and not have to migrate to a single cloud environment. Such an infrastructure becomes efficacious when multiple individuals and organisations work collaboratively, enabling each individual/organisation to select a computational infrastructure that most closely matches its requirements. We describe the “Clouds-for-Coordination” system and provide a use case to demonstrate how such a system can be used in practice. A performance analysis is carried out to demonstrate how effective such a multi-Cloud system can be, reporting the “aggregated-time-to-complete” metric over a number of different scenarios.
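    The abstract does not define the “aggregated-time-to-complete” metric; one plausible reading is sketched below, aggregating per-organisation completion times per scenario either as a maximum (parallel work) or a sum (serial steps). The scenario names, organisations and timings are invented for illustration, and the paper's actual definition may differ.

```python
# Illustrative sketch of one possible "aggregated time-to-complete" computation
# across the organisations involved in each scenario. Data is invented.
scenario_timings = {
    "design-review": {"architect": 12.4, "engineer": 9.8, "contractor": 15.1},
    "model-upload":  {"architect": 4.2,  "engineer": 3.9, "contractor": 6.0},
}


def aggregated_time_to_complete(timings: dict[str, float], mode: str = "max") -> float:
    # "max" models organisations working in parallel; "sum" models serial steps.
    return max(timings.values()) if mode == "max" else sum(timings.values())


for scenario, timings in scenario_timings.items():
    print(scenario, aggregated_time_to_complete(timings))
```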

    Network virtualization in next-generation cellular networks: a spectrum pooling approach

    The difficulty of expanding the cellular network market stems from the tremendously high cost of mobile infrastructure, i.e. the capital expenditures (CAPEX) and the operational expenditures (OPEX). Spectrum sharing is one of the proposed solutions to the high cost of scaling cellular networks. However, most of the spectrum pooling frameworks proposed in the literature are approached from a purely technical view; moreover, there are no good cost models based on real datasets for quantifying the circumstances under which sharing spectrum and network resources would be beneficial to mobile operators. In this thesis, by studying different sharing scenarios in a fiber-based backhaul mobile network, we assess the incentives for service providers (SPs) to share spectrum/infrastructure in different cellular market areas/economic areas (CMA/BEAs) with different population densities, allocated bandwidth (BW), and spectrum bid values, considering different network topologies. Moreover, we look at the technical problem of sharing the spectrum between two SPs that share the same base station (BS) yet have different traffic demands and QoS constraints. We design a resource allocation scheme to provision real-time (RT), non-real-time (NRT) as well as Ultra-reliable Low Latency Communications (URLLC) traffic in a single shared BS scenario such that SPs achieve isolation and fairness and can enforce their QoS constraints. Finally, we exploit spectrum pooling to develop an approach for dynamically re-configuring the base stations that survive a disaster and are powered by a microgrid to form a multi-hop mesh network in order to provide local cellular service.
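    A minimal sketch of the isolation-plus-priority idea behind such a resource allocation scheme is given below: each SP is guaranteed its contracted share of the shared BS's resource blocks, and within that share traffic classes are served in URLLC, RT, NRT order. The shares, demands and block counts are invented for illustration; the thesis's actual scheme may differ.

```python
# Illustrative sketch of allocating a shared base station's resource blocks
# between two SPs while preserving isolation (guaranteed shares) and serving
# traffic classes in URLLC > RT > NRT priority order. Data is hypothetical.
TOTAL_RBS = 100
CONTRACTED_SHARE = {"sp_a": 0.6, "sp_b": 0.4}
PRIORITY = ["urllc", "rt", "nrt"]

demand = {
    "sp_a": {"urllc": 10, "rt": 30, "nrt": 40},
    "sp_b": {"urllc": 5, "rt": 20, "nrt": 10},
}


def allocate(total_rbs, shares, demand):
    allocation = {sp: {c: 0 for c in PRIORITY} for sp in shares}
    for sp, share in shares.items():
        budget = int(total_rbs * share)          # isolation: guaranteed share
        for cls in PRIORITY:                     # serve classes by priority
            grant = min(demand[sp][cls], budget)
            allocation[sp][cls] = grant
            budget -= grant
    return allocation


print(allocate(TOTAL_RBS, CONTRACTED_SHARE, demand))
```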
