Foundations of efficient virtual appliance based service deployments
The use of virtual appliances could provide a flexible solution to service deployment. However, these solutions suffer from several disadvantages: (i) the slow deployment time of services in virtual machines, and (ii) virtual appliances crafted by developers tend to be inefficient for deployment purposes. Researchers target problem (i) by advancing virtualization technologies or by introducing virtual appliance caches on the virtual machine monitor hosts. Others aim at problem (ii) by providing solutions for virtual appliance construction; however, these solutions require deep knowledge about the service's dependencies and its deployment process.
This dissertation addresses problem (i) with a virtual appliance distribution technique that first identifies appliance parts and their internal dependencies, and then, based on service demand, efficiently distributes the identified parts to virtual appliance repositories. Problem (ii) is targeted with the Automated Virtual appliance creation Service (AVS), which can extract and publish a service already deployed by the developer. The acquired virtual appliance is then optimized for service deployment time with the proposed virtual appliance optimization facility, which uses active fault injection to remove the non-functional parts of the appliance. Finally, the investigation of appliance distribution and optimization techniques resulted in the definition of the minimal manageable virtual appliance, which is capable of updating and configuring its executor virtual machine.
The deployment time reduction capabilities of the proposed techniques were measured with several services provided in virtual appliances on three cloud infrastructures. The appliance creation capabilities of the AVS were compared to the virtual appliances already offered by various online appliance repositories. The results reveal that the introduced techniques significantly decrease the deployment time of virtual appliance based deployment systems. As a result, these techniques alleviate one of the major obstacles to virtual appliance based deployment systems.
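As an illustration of the distribution idea above, the following Python sketch groups appliance parts with their transitive dependency closures and replicates the most demanded parts to more repositories; the AppliancePart type, the demand mapping and the one-replica-per-unit-of-demand policy are illustrative assumptions, not the dissertation's actual interfaces.

from dataclasses import dataclass

@dataclass
class AppliancePart:
    name: str
    depends_on: frozenset = frozenset()   # names of parts this part needs

def dependency_closure(name, parts_by_name):
    # Transitive closure: a part is only usable together with everything it depends on.
    seen, stack = set(), [name]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(parts_by_name[current].depends_on)
    return seen

def distribute(parts, repositories, demand):
    """Replicate the most demanded parts (with their dependencies) to more repositories.

    repositories -- list of sets, each holding the part names stored at one repository
    demand       -- part name -> observed service demand (requests per period)
    """
    parts_by_name = {p.name: p for p in parts}
    for part in sorted(parts, key=lambda p: demand.get(p.name, 0), reverse=True):
        closure = dependency_closure(part.name, parts_by_name)
        # Hypothetical policy: one replica per unit of demand, capped by the repository count.
        replicas = max(1, min(len(repositories), demand.get(part.name, 0)))
        for repo in repositories[:replicas]:
            repo |= closure
    return repositories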
Virtual appliance size optimization with active fault injection
Virtual appliances store the information required to instantiate a functional Virtual Machine (VM) on Infrastructure as a Service (IaaS) cloud systems. Large appliance sizes prevent IaaS systems from delivering the dynamic and scalable infrastructures they promise. To overcome this issue, this paper offers a novel technique for virtual appliance developers to publish appliances for the dynamic environments of IaaS systems. Our solution achieves faster virtual machine instantiation by reducing the appliance size while maintaining its key functionality. The new virtual appliance optimization algorithm identifies the removable parts of the appliance. Then, it applies active fault injection to remove the identified parts. Afterward, our solution assesses the functionality of the reduced virtual appliance by applying validation algorithms provided by the appliance developer. We also introduce a technique to parallelize the fault injection and validation phases of the algorithm. Finally, the prototype implementation of the algorithm is discussed to demonstrate its efficiency through the optimization of two well-known virtual appliances. Results show that the algorithm significantly decreased virtual machine instantiation time and increased dynamism in IaaS systems. © 2012 IEEE
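The reduce-and-validate loop summarised above could look roughly like the following Python sketch; the part granularity, the validate callback and the helper names are assumptions made only for illustration.

from concurrent.futures import ThreadPoolExecutor

def optimize_appliance(appliance, candidate_parts, validate):
    """Try removing each candidate part; keep the removals the validator accepts.

    appliance       -- set of part identifiers making up the image
    candidate_parts -- parts suspected to be non-functional for the service
    validate        -- developer-provided check: does a part set still serve requests?
    """
    def survives_without(part):
        # "Active fault injection": validate a variant with the part removed.
        return part, validate(appliance - {part})

    removable = set()
    # Independent variants can be validated in parallel, mirroring the paper's
    # parallelisation of the fault-injection and validation phases.
    with ThreadPoolExecutor() as pool:
        for part, still_works in pool.map(survives_without, candidate_parts):
            if still_works:
                removable.add(part)

    # Re-validate the combined removal, since individually removable parts
    # may be jointly required.
    reduced = appliance - removable
    return reduced if validate(reduced) else appliance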
An architecture to stimulate behavioral development of academic cloud users
Academic cloud infrastructures are constructed and maintained so that they minimally constrain their users. Since they are free and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. For efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, the root cause (i.e., the user behavior) is disregarded by these approaches. This article compares academic cloud user behavior to its commercial equivalent. We deduce that academics should behave like commercial cloud users to relieve resource provisioning. To encourage commercial-like behavior, we propose an architectural extension to existing academic infrastructure clouds. First, every user's energy consumption and efficiency is monitored. Then, energy-efficiency-based leader boards are used to ignite competition between academics and reveal their worst practices. Leader boards alone are not sufficient to completely change user behavior; thus, we introduce engaging options that encourage academics to delay resource requests and to prefer resources more suitable for the infrastructure's internal provisioning. Finally, we evaluate our extensions via a simulation using real-life academic resource request traces. We show a potential resource utilization reduction (by a factor of at most 2.6) while maintaining the unlimited nature of academic clouds. © 2014 Elsevier Inc.
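A minimal sketch of the leaderboard component follows, assuming energy efficiency is scored as useful work per consumed kWh; the record fields and the metric itself are illustrative, not the article's exact definition.

def leaderboard(usage_records):
    """Rank users by energy efficiency.

    usage_records -- iterable of dicts with 'user', 'useful_core_hours', 'energy_kwh'
    """
    totals = {}
    for rec in usage_records:
        work, energy = totals.get(rec["user"], (0.0, 0.0))
        totals[rec["user"]] = (work + rec["useful_core_hours"],
                               energy + rec["energy_kwh"])
    # Higher useful work per consumed kWh ranks higher; idle resources drag a user down.
    def efficiency(entry):
        work, energy = entry
        return work / energy if energy else 0.0
    ranked = sorted(totals.items(), key=lambda kv: efficiency(kv[1]), reverse=True)
    return [(user, efficiency(totals[user])) for user, _ in ranked]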
Fostering energy-awareness in scientific cloud users
Academic cloud infrastructures are constructed and maintained so that they minimally constrain their users. Since they are free and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. For efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, the root cause (i.e., the user behavior) is disregarded by these approaches. This paper compares academic cloud user behavior to its commercial equivalent. We deduce that academics should behave like commercial cloud users to relieve resource provisioning. To encourage this behavior, we propose an architectural extension to academic infrastructure clouds. We evaluate our extension via a simulation using real-life academic resource request traces. We show a potential resource usage reduction while maintaining the unlimited nature of academic clouds. © 2014 IEEE
Cloud Workload Prediction by Means of Simulations
Clouds hide the complexity of maintaining a physical infrastructure, but with a disadvantage: they also hide their internal workings. Should users need to know about these details, e.g., to increase the reliability or performance of their applications, they would need to detect slight behavioural changes in the underlying system. Existing solutions for such purposes offer limited capabilities. This paper proposes a technique for predicting background workload by means of simulations that provide knowledge of the underlying clouds to support activities such as cloud orchestration or workflow enactment. We propose these predictions to select more suitable execution environments for scientific workflows. We validate the proposed prediction approach with a biochemical application.
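One plausible reading of the prediction step is a calibration search: find the simulated background load whose predicted probe runtimes best match the observed ones. The simulate interface, the candidate-load list and the least-squares error below are assumptions for illustration only.

def predict_background_load(observed_runtimes, simulate, candidate_loads):
    """Return the candidate background load whose simulation best matches reality.

    observed_runtimes -- runtimes of probe tasks measured on the real cloud
    simulate          -- hypothetical callback: simulate(load) -> predicted runtimes
    candidate_loads   -- background load levels to try in the simulator
    """
    def error(load):
        predicted = simulate(load)
        # Sum of squared differences between simulated and observed probe runtimes.
        return sum((p - o) ** 2 for p, o in zip(predicted, observed_runtimes))
    # The best-fitting candidate is taken as the current background workload.
    return min(candidate_loads, key=error)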
An SLA-based resource virtualization approach for on-demand service provision
Cloud computing is a newly emerged research infrastructure that builds on the latest achievements of diverse research areas, such as Grid computing, Service-oriented computing, business processes and virtualization. In this paper we present an architecture for SLA-based resource virtualization that provides an extensive solution for executing user applications in Clouds. This work represents the first attempt to combine SLA-based resource negotiations with virtualized resources in terms of on-demand service provision, resulting in a holistic virtualization approach. The architecture description focuses on three topics: agreement negotiation, service brokering and deployment using virtualization. The contribution is also demonstrated with a real-world case study.
An interoperable and self-adaptive approach for SLA-based service virtualization in heterogeneous Cloud environments
Cloud computing is a newly emerged computing infrastructure that builds on the latest achievements of diverse research areas, such as Grid computing, Service-oriented computing, business process management and virtualization. An important characteristic of Cloud-based services is the provision of non-functional guarantees in the form of Service Level Agreements (SLAs), such as guarantees on execution time or price. However, due to system malfunctions, changing workload conditions, and hardware or software failures, established SLAs can be violated. In order to avoid costly SLA violations, flexible and adaptive SLA attainment strategies are needed. In this paper we present a self-manageable architecture for SLA-based service virtualization that provides a way to ease interoperable service executions in a diverse, heterogeneous, distributed and virtualized world of services. We demonstrate that the combination of negotiation, brokering and deployment using SLA-aware extensions and autonomic computing principles is required for achieving reliable and efficient service operation in distributed environments. © 2012 Elsevier B.V. All rights reserved.
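The negotiate-broker-deploy chain with an autonomic reaction to SLA violations could be glued together as in the following hedged Python sketch; every callback name here is a placeholder, not the paper's actual API.

def run_with_sla(task, negotiate, select_provider, deploy, execute, sla_respected, max_retries=3):
    """Execute a task under an SLA, retrying elsewhere when the SLA is violated."""
    blacklist = set()
    for _ in range(max_retries):
        sla = negotiate(task)                                 # agreement negotiation (time, price)
        provider = select_provider(sla, exclude=blacklist)    # service brokering step
        vm = deploy(task, provider)                           # virtualization-based deployment
        result = execute(vm, task)
        if sla_respected(result, sla):
            return result
        # Self-adaptation: the established SLA was violated, so exclude the provider and retry.
        blacklist.add(provider)
    raise RuntimeError("SLA could not be met within the retry budget")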
Towards efficient virtual appliance delivery with minimal manageable virtual appliances
Infrastructure as a Service systems use virtual appliances to initiate virtual machines. As virtual appliances encapsulate applications and services together with their support environment, their delivery is the most expensive task of virtual machine creation. Virtual appliance delivery is a well-discussed topic in the field of cloud computing. However, for high efficiency, current techniques require the modification of the underlying IaaS systems. To widen the adoptability of these delivery solutions, this article proposes the concept of minimal manageable virtual appliances (MMVAs), which are capable of updating and configuring their virtual machines without the need to modify IaaS systems. To create MMVAs, we propose to reduce manageable virtual appliances until they become MMVAs. This research also presents a methodology for appliance developers to incorporate MMVAs in their own appliances to enable their efficient delivery and wider adoptability. Finally, the article evaluates the positive effects of MMVAs on an already existing delivery solution: the Automated Virtual appliance creation Service (AVS). Through experimental evaluation, we show that the application of MMVAs not only increases the adoptability of a delivery solution but also significantly improves its performance in highly dynamic systems. © 2013 IEEE
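How an MMVA might complete its own virtual machine after boot, with no IaaS-side changes, is sketched below: it pulls the missing appliance parts from a repository and applies service configuration. The URLs, paths and helper names are hypothetical and stand in for whatever the deployed management component actually uses.

import io
import subprocess
import tarfile
import urllib.request

def complete_vm(part_urls, config_commands):
    """Run inside a freshly booted MMVA-based virtual machine (illustrative only)."""
    for url in part_urls:
        # Pull a missing appliance part from a repository and unpack it onto the root filesystem.
        with urllib.request.urlopen(url) as resp:
            with tarfile.open(fileobj=io.BytesIO(resp.read()), mode="r:gz") as tar:
                tar.extractall("/")
    for cmd in config_commands:
        # Apply service-specific configuration (e.g., write config files, start daemons).
        subprocess.run(cmd, shell=True, check=True)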
Automatic service deployment using virtualisation
Manual deployment of an application usually requires expertise about both the underlying system and the application. Automatic service deployment can improve deployment significantly by using on-demand deployment and self-healing services. To support these features, this paper describes an extension to the Globus Workspace Service [10]. This extension includes creating virtual appliances for Grid services, service deployment from a repository, and influencing the service schedules by altering execution planning services, candidate set generators or information systems. © 2008 IEEE
Fostering energy-awareness in simulations behind scientific workflow management systems
Scientific workflow management systems face a new challenge in the era of cloud computing: the past availability of rich information regarding the state of the used infrastructures is gone. Thus, organising virtual infrastructures so that they not only support the workflow being executed but also optimise for several service level objectives (e.g., maximum energy consumption limit, cost, reliability, availability) has become dependent on good infrastructure modelling and prediction techniques. While simulators have been successfully used in the past to aid research on such workflow management systems, the currently available cloud-related simulation toolkits suffer from several issues (e.g., scalability, narrow scope) that hinder their applicability. To address this need, this paper introduces techniques for unifying two existing simulation toolkits by first analysing the problems with the current simulators, and then by illustrating the problems faced by workflow systems through the example of the ASKALON environment. Finally, we show how the unification of the selected simulators improves on the discussed problems. © 2014 IEEE