
    Cloud Abstraction Libraries: Implementation and Comparison

    Vendor lock-in makes it difficult for an organization to port its services, applications, or data. Cloud providers are in a race to provide best-in-class storage, networking, and compute resources, and many organizations are moving towards microservices and cloud service architectures. It is therefore important for an infrastructure platform to offer a high-quality cloud computing environment consistently across multiple cloud platforms. To enable this, a collaborative yet independent cloud abstraction service is required. Such a cloud abstraction library should support the basic use cases of delivery pipelines, service management, cloud operations, and security services. Cloud interoperability standards help to improve availability and scalability through cross-organizational, vendor-independent projects. An important aspect of cloud interoperability is the development of standardized APIs to send and receive data irrespective of the underlying cloud implementation; interoperability thus supports application and data portability between public and private clouds. This thesis explores the role of open source libraries in using cloud-specific features. Our work qualitatively and quantitatively evaluates Dasein Cloud and jClouds against Amazon EC2 and Google Compute Engine. We believe that cloud standardization can be accelerated by implementations based on open source and open standards.
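
    As an illustration of what such provider-independent APIs look like, here is a minimal sketch using Apache Libcloud, a Python abstraction library comparable to the Java libraries (jClouds, Dasein Cloud) evaluated in this thesis; the credentials, region, and project identifiers are placeholder assumptions.

```python
# Hedged sketch: the same Libcloud call shape targets both clouds
# compared in the thesis. Credentials and identifiers are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
gce = get_driver(Provider.GCE)("sa@my-project.iam.gserviceaccount.com",
                               "key.json", project="my-project")

# Identical, provider-independent call on heterogeneous backends.
for node in ec2.list_nodes() + gce.list_nodes():
    print(node.name, node.state)
```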

    Distributed Environment for Efficient Virtual Machine Image Management in Federated Cloud Architectures

    The use of Virtual Machines (VMs) in Cloud computing provides various benefits across the software engineering lifecycle, including efficient elasticity mechanisms that result in higher resource utilization and lower operational costs. VMs as software artifacts are created using provider-specific templates, called VM images (VMIs), and are stored in proprietary or public repositories for further use. However, certain technology-specific choices can limit interoperability among Cloud providers and bundle the VMIs with nonessential or redundant software packages, leading to increased storage size, prolonged VMI delivery, slow VMI instantiation, and ultimately vendor lock-in. To address these challenges, we present a set of novel functionalities and design approaches for the efficient operation of distributed VMI repositories, specifically tailored to enable: (i) simplified creation of lightweight, size-optimized VMIs tuned to specific application requirements; (ii) multi-objective VMI repository optimization; and (iii) an efficient reasoning mechanism to help optimize complex VMI operations. The evaluation results confirm that the presented approaches can reduce VMI size by up to 55% while trimming image creation time by 66%. Furthermore, the repository optimization algorithms can reduce VMI delivery time by up to 51% and cut storage expenses by 3%. Moreover, by implementing replication strategies, the optimization algorithms can increase system reliability by 74%.
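
    To make the multi-objective optimization concrete, the sketch below scores candidate repository placements for a VMI by weighted storage cost, delivery time, and reliability; the weights, attributes, and node names are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Hypothetical weighted-score placement of a VMI across repository nodes.
from dataclasses import dataclass

@dataclass
class Placement:
    node: str
    storage_cost: float   # relative $/GB-month on this node (assumed)
    delivery_time: float  # estimated seconds to deliver the VMI (assumed)
    reliability: float    # availability estimate in [0, 1] (assumed)

def score(p: Placement, w_cost=0.3, w_time=0.5, w_rel=0.2) -> float:
    """Lower is better; the weights are illustrative assumptions."""
    return w_cost * p.storage_cost + w_time * p.delivery_time - w_rel * p.reliability

candidates = [
    Placement("repo-eu-1", storage_cost=2.1, delivery_time=14.0, reliability=0.97),
    Placement("repo-us-2", storage_cost=1.4, delivery_time=26.0, reliability=0.99),
]
print(min(candidates, key=score).node)  # -> repo-eu-1 under these weights
```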

    Prebaked µVMs: Scalable, Instant VM Startup for IaaS Clouds

    IaaS clouds promise instantaneously available resources to elastic applications. In practice, however, virtual machine (VM) startup times are on the order of several minutes, or at best several tens of seconds, negatively impacting the elasticity of applications like Web servers that need to scale out to handle dynamically increasing load. VM startup time is strongly influenced by booting the VM's operating system. In this work, we propose using so-called prebaked µVMs to speed up VM startup. µVMs are snapshots of minimal VMs that can be quickly resumed and then configured to application needs by hot-plugging resources. To serve µVMs, we extend our VM boot cache service, Squirrel, allowing µVMs for large numbers of VM images to be stored on the hosts of a data center. Our experiments show that µVMs can start up in less than one second on a standard file system. Using 1000+ VM images from a production cloud, we show that the respective µVMs can be stored in a compressed and deduplicated file system within 50 GB of storage per host, while starting up within 2-3 seconds on average.
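
    The resume-then-hot-plug flow can be sketched with the libvirt Python bindings; this is an assumed approximation of the idea, not the paper's Squirrel implementation, and the state-file path, domain name, and resource sizes are hypothetical.

```python
# Sketch: resume a minimal "prebaked" VM from a saved snapshot, then
# hot-plug vCPUs and memory up to the domain's configured maximums.
import libvirt

conn = libvirt.open("qemu:///system")

# Fast path: restore the µVM from its saved state file (hypothetical path).
conn.restore("/var/cache/uvm/ubuntu-minimal.state")
dom = conn.lookupByName("uvm-ubuntu-minimal")   # hypothetical domain name

# Configure to application needs by hot-plugging resources.
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)                  # 4 vCPUs
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)   # 4 GiB (KiB units)
```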

    Energy-efficient Transitional Near-* Computing

    Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of the worldwide electricity consumption. Although data centers, core network equipment, and mobile devices are becoming more energy-efficient, the amount of data being processed, transferred, and stored is vastly increasing. Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself. In this thesis, these trends are summarized under the new term near-* or near-everything computing. Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing. It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems. Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, edge devices, as well as programmable components on a mobile System-on-a-Chip (SoC). Finally, the novel idea of transitional near-* computing for emergency response applications is presented, assisting rescuers and affected persons during an emergency event or disaster even when connections to cloud services and social networks are disturbed by network outages and the network bandwidth and battery power of mobile devices are limited.
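
    The core transition idea can be pictured with a toy energy model: estimate the energy of running a task on each near-* target and transition execution to the cheapest one. The cost model and all constants below are illustrative assumptions, not the mechanisms developed in the thesis.

```python
# Toy transition decision: pick the execution target with the lowest
# estimated energy; every constant here is assumed for illustration.
def energy_j(cpu_seconds, bytes_moved, j_per_cpu_s, j_per_byte):
    return cpu_seconds * j_per_cpu_s + bytes_moved * j_per_byte

targets = {
    "device": energy_j(12.0, 0,   j_per_cpu_s=2.5, j_per_byte=0),
    "edge":   energy_j(3.0,  5e6, j_per_cpu_s=0.8, j_per_byte=1e-7),
    "cloud":  energy_j(1.0,  5e7, j_per_cpu_s=0.4, j_per_byte=2e-7),
}
best = min(targets, key=targets.get)
print(f"transition to: {best} ({targets[best]:.1f} J estimated)")
```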

    Building Computing-As-A-Service Mobile Cloud System

    The last five years have witnessed the proliferation of smart mobile devices, the explosion of various mobile applications, and the rapid adoption of cloud computing in business, governmental, and educational IT deployments. There is also a growing trend of combining mobile computing and cloud computing into a new popular computing paradigm. This thesis envisions a future of mobile computing shaped primarily by three trends. First, servers equipped with high-speed multi-core processors have become mainstream in the cloud, while ARM-powered servers are growing in popularity and virtualization on ARM systems is gaining wide attention. Second, high-speed Internet access has become pervasive and highly available; mobile devices are able to connect to the cloud anytime and anywhere. Third, cloud computing is reshaping the way computing resources are used: the classic pay/scale-as-you-go model allows hardware resources to be optimally allocated and well managed. These three trends lend credence to a new mobile computing model that combines the resource-rich cloud with less powerful mobile devices. In this model, mobile devices run a core virtualization hypervisor with virtualized phone instances, allowing pervasive access to more powerful, highly available virtual phone clones in the cloud. The centralized cloud, powered by rich computing and memory resources, hosts the virtual phone clones and repeatedly synchronizes data changes with the virtual phone instances running on mobile devices. Users can flexibly isolate different computing environments. In this dissertation, we explored the opportunity of leveraging cloud resources for mobile computing for the purposes of energy saving, performance augmentation, and secure computing environment isolation. We proposed a framework that allows mobile users to seamlessly leverage the cloud to augment the computing capability of mobile devices and also makes it simpler for application developers to run their smartphone applications in the cloud without tedious application partitioning. This framework was built with virtualization on both the server side and mobile devices. It has three building blocks: agile virtual machine deployment, efficient virtual resource management, and seamless mobile augmentation. We presented the design, implementation, and evaluation of these three components and demonstrated the feasibility of the proposed mobile cloud model.
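
    The clone synchronization step can be pictured with a small block-hashing sketch, sending only the blocks that differ between the device-side instance and its cloud clone; the block size, hashing, and transport are assumptions rather than the dissertation's actual protocol.

```python
# Toy delta sync: hash fixed-size blocks of device state and ship only
# the blocks the cloud clone is missing or holds stale.
import hashlib

BLOCK = 4096  # assumed block size

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_blocks(local: bytes, remote_hashes):
    changed = []
    for i, h in enumerate(block_hashes(local)):
        if i >= len(remote_hashes) or remote_hashes[i] != h:
            changed.append((i, local[i * BLOCK:(i + 1) * BLOCK]))
    return changed

device_state = b"phone app state " * 1024
clone_hashes = block_hashes(b"phone app state " * 1000)  # stale clone
print(f"{len(delta_blocks(device_state, clone_hashes))} blocks to sync")
```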

    YOLO: Accelerating Virtual Machine Boot Time by Reducing I/O Operations

    Several works have shown that the time to boot one virtual machine (VM) can last up to a few minutes in highly consolidated cloud scenarios. This time is critical, as VM boot duration defines how an application can react to demand fluctuations (horizontal elasticity). To limit the time to boot a VM as much as possible, we designed the YOLO mechanism (You Only Load Once). YOLO optimizes the number of I/O operations generated during a VM boot process by relying on the boot image abstraction, a subset of the VM image (VMI) that contains the data blocks necessary to complete the boot operation. Whenever a VM is booted, YOLO intercepts all read accesses and serves them directly from the boot image, which has been stored locally on fast-access storage devices (e.g., memory, SSD, etc.). Creating boot images for 900+ VMIs from Google Cloud shows that only 40 GB is needed to store all the mandatory data, an amount that can easily be kept on each compute node. Experiments show that YOLO can speed up VM boot duration 2-13 times under different levels of resource contention with negligible overhead on the I/O path. Finally, we underline that although YOLO has been validated in a KVM environment, it requires no modification to the hypervisor, the guest kernel, or the VM image (VMI) structure, and can be used for several kinds of VMIs (in this study, Linux and Windows VMIs have been tested).
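
    The boot image abstraction can be sketched as a block cache populated during a profiling boot and consulted on every later boot; this is an assumed simplification of YOLO, with the block size and storage layout chosen arbitrarily.

```python
# Sketch of the boot-image idea: record the blocks read while booting,
# then serve later boots from that subset on fast local storage.
BLOCK = 64 * 1024  # assumed block granularity

class BootImage:
    def __init__(self):
        self.blocks = {}                      # offset -> boot-critical block

    def record(self, vmi, offset):
        """Profiling boot: copy an accessed block out of the full VMI."""
        vmi.seek(offset)
        self.blocks[offset] = vmi.read(BLOCK)

    def read(self, offset, vmi):
        """Later boots: hit the boot image, fall back to the full VMI."""
        if offset in self.blocks:             # fast path (memory/SSD)
            return self.blocks[offset]
        vmi.seek(offset)                      # slow path (full image)
        return vmi.read(BLOCK)
```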

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and to the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the 'trust' properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems: memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because at the hypervisor level there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality of the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control.
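
    As an illustration of such an extended service, the sketch below shows a virtual block device that filters reads according to a guest's trust label; the trust model and masking scheme are assumptions for illustration, not the dissertation's implementation.

```python
# Toy "extended virtualization service": a virtual block device that
# obfuscates sensitive blocks for guests that are not marked trusted.
SENSITIVE = {2, 5}  # indices of sensitive blocks (assumed policy)

class VirtualBlockDevice:
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, index, guest_trust):
        data = self.blocks[index]
        if index in SENSITIVE and guest_trust != "trusted":
            return b"\x00" * len(data)        # obfuscate for this guest
        return data

dev = VirtualBlockDevice([b"public0", b"public1", b"secret2",
                          b"public3", b"public4", b"secret5"])
print(dev.read(2, "untrusted"))  # masked zeros
print(dev.read(2, "trusted"))    # b'secret2'
```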

    An efficient use of virtualization in grid/cloud environments

    Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to the resources but does not guarantee any quality of service. Moreover, Grid does not provide performance isolation; the job of one user can influence the performance of another user's job. A further problem is that Grid users typically belong to the scientific community and their jobs require specific, customized software environments, which are difficult to provide given Grid's dispersed and heterogeneous nature. Cloud computing, in contrast, provides full customization and control, but offers no procedure for submitting user jobs as simple as Grid's. Grid computing can provide customized resources and performance to the user by means of virtualization: a virtual machine can join the Grid as an execution node, or it can be submitted as a job with user jobs inside. The first method gives quality of service and performance isolation; the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, providing performance isolation together with customization and administration; the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request, in one of two scenarios. In the first scenario, users submit their customized virtual machine as a job, and the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured on the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs, so scenario 1 is deployed using the Condor VM universe; scenario 2 uses the VMware VIX API to script powering the remote virtual machines on and off. The experimental results show that, since scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live in the pool much faster. In scenario 1, the virtual machine runs as a Condor job, making it easy to administer; its only pitfall is the network traffic.
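
    Scenario 2's scripting layer can be approximated with vmrun, the command-line front end to the VMware VIX API; the .vmx path below is hypothetical, and the exact invocation used against VMware Server in the thesis may differ.

```python
# Sketch of powering a preconfigured grid VM on/off via vmrun (VIX CLI).
import subprocess

VMX = "/vms/grid-node-01/grid-node-01.vmx"   # hypothetical VM config path

def power_on():
    subprocess.run(["vmrun", "start", VMX, "nogui"], check=True)

def power_off():
    subprocess.run(["vmrun", "stop", VMX, "soft"], check=True)

power_on()   # the VM boots and its Condor daemons join the Grid pool
```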

    Challenges and complexities in application of LCA approaches in the case of ICT for a sustainable future

    In this work, three of the many ICT-specific challenges of LCA are discussed. First, the issue of inconsistency versus uncertainty is reviewed with regard to the meta-technological nature of ICT. As an example, semiconductor technologies are used to highlight the complexities, especially with respect to energy and water consumption, and the need for specific representations and metrics to assess products and technologies separately is discussed. It is highlighted that applying product-oriented approaches would result in abandoning or disfavoring new technologies that could otherwise help toward a better world. Second, several hot spots commonly believed to be untouchable are highlighted to emphasize their importance and footprint. The list includes, but is not limited to, i) User-Computer Interfaces (UCIs), especially screens and displays, ii) Network-Computer Interfaces (NCIs), such as electronic and optical ports, and iii) electric power interfaces. In addition, considering cross-regional social and economic impacts, and taking into account the market-driven nature of the demand for many ICT products and services in both hardware and software form, the complexity of the End of Life (EoL) stage of ICT products, technologies, and services is explored. Finally, the impact of smart management and intelligence, and of software in general, on ICT solutions and products is highlighted. In particular, it is observed that, even with the same technology, the significance of software can vary greatly depending on the level of intelligence and awareness deployed. With examples from an interconnected network of data centers managed using Dynamic Voltage and Frequency Scaling (DVFS) technology and smart cooling systems, it is shown that unadjusted assessments can be highly uncertain, and even inconsistent, in calculating the management component's significance in the ICT impacts. Comment: 10 pages. Preprint/accepted version of a paper submitted to the ICT4S Conference.
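
    The DVFS point can be made concrete with the standard dynamic-power relation P ≈ C·V²·f: scaling voltage and frequency together changes power superlinearly, so the software managing DVFS materially shifts the footprint attributed to the same hardware. The capacitance constant and operating points below are assumptions, not values from the paper.

```python
# Worked example: dynamic CPU power P = C * V^2 * f under two DVFS states.
C = 1.0e-9  # effective switched capacitance in farads (assumed)

def dynamic_power(voltage_v, freq_hz, c=C):
    return c * voltage_v ** 2 * freq_hz

p_high = dynamic_power(1.2, 3.0e9)   # full speed:  ~4.3 W
p_low  = dynamic_power(0.9, 1.5e9)   # scaled down: ~1.2 W
print(f"saving: {100 * (1 - p_low / p_high):.0f}%")  # ~72% at half the clock
```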

    An innovative approach to performance metrics calculus in cloud computing environments: a guest-to-host oriented perspective

    In virtualized systems, the task of profiling and resource monitoring is not straightforward. Many datacenters perform CPU overcommitment using hypervisors, running multiple virtual machines on a single computer where the total number of virtual CPUs exceeds the total number of physical CPUs available. From a customer's point of view, it is indeed interesting to know whether the purchased service levels are effectively respected by the cloud provider. The innovative approach to performance profiling described in this work is based on the use of virtual performance counters, only recently made available by some hypervisors to their virtual machines, to implement guest-wide profiling. Although the virtual machine cannot access the Virtual Machine Monitor, with this method it is able to gather enough information to deduce the state of resource overcommitment on the virtualization host where it is executed. Tests have been carried out inside the compute nodes of the FIWARE Genoa Node, an instance of a widely distributed federated community cloud based on OpenStack and KVM. AgiLab-DITEN, the laboratory I belonged to and where I conducted my studies, together with TnT-Lab–DITEN and the CNIT-GE-Unit, designed, installed, and configured the whole Genoa Node, which was hosted in the DITEN-UniGE equipment rooms. All the software measuring instruments, operating systems, and programs used in this research are publicly available and free, and can easily be installed in a micro virtual machine instance, rapidly deployable also in public clouds.
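
    A complementary (and much cruder) guest-side signal of host overcommitment, distinct from the virtual-performance-counter method of this work, is the "steal" time a Linux guest exposes in /proc/stat: time the hypervisor ran other tenants while this guest's vCPU was runnable. The threshold below is an assumption.

```python
# Read the aggregate steal-time counter from /proc/stat (Linux guest).
import time

def steal_ticks():
    with open("/proc/stat") as f:
        fields = f.readline().split()  # aggregate "cpu" line
    return int(fields[8])              # 8th value after "cpu" is steal

before = steal_ticks()
time.sleep(5)
delta = steal_ticks() - before
print(f"steal over 5 s: {delta} ticks"
      + (" (host likely overcommitted)" if delta > 100 else ""))
```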