Special issue on algorithms for the resource management of large scale infrastructures
Modern distributed systems are becoming increasingly complex as virtualization is applied at both the computing and networking levels. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. The challenge is further exacerbated by the unpredictable workloads of modern applications and the need to limit global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four papers we selected present an up-to-date view of the emerging trends and propose innovative solutions to support efficient, self-managing systems that are able to adapt, manage, and cope with changes arising from continually changing workloads and application deployment settings, without the need for human supervision.
Redundant VoD Streaming Service in a Private Cloud: Availability Modeling and Sensitivity Analysis
For several years, cloud computing has been generating considerable debate and interest within IT corporations. Since cloud computing environments provide storage and processing systems that are adaptable, efficient, and straightforward, thereby enabling rapid infrastructure modifications according to constantly varying workloads, organizations of every size and type are migrating to web-based, cloud-supported solutions. Due to the advantages of the pay-per-use model and scalability factors, current video on demand (VoD) streaming services rely heavily on cloud infrastructures to offer a large variety of multimedia content. Recent well-documented failure events in commercial VoD services have demonstrated the fundamental importance of maintaining high availability in cloud computing infrastructures, and hierarchical modeling has proved to be a useful tool for evaluating the availability of complex systems and services. This paper presents an availability model for a video streaming service deployed in a private cloud environment that includes redundancy mechanisms in the infrastructure. Differential sensitivity analysis was applied to identify and rank the critical components of the system with respect to service availability. The results demonstrate that such a modeling strategy, combined with differential sensitivity analysis, can be an attractive methodology for identifying which components should be supported with redundancy in order to consciously increase system dependability.
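The ranking idea behind differential sensitivity analysis can be illustrated with a toy sketch (not the paper's actual hierarchical model): for a small series system, compute steady-state availability from per-component MTTF/MTTR and rank components by the scaled sensitivity of availability to each repair time. All component names and parameter values below are illustrative assumptions.

```python
# Hedged sketch: differential sensitivity analysis of steady-state
# availability for a small series system. Component names and
# parameter values are illustrative assumptions, not the paper's data.

def availability(params):
    """Steady-state availability of components in series:
    A = prod( MTTF_i / (MTTF_i + MTTR_i) )."""
    a = 1.0
    for mttf, mttr in params.values():
        a *= mttf / (mttf + mttr)
    return a

def scaled_sensitivities(params, eps=1e-6):
    """Rank components by the scaled sensitivity of A to their MTTR:
    S_i = (dA/dMTTR_i) * (MTTR_i / A), estimated by finite differences."""
    base = availability(params)
    ranking = {}
    for name, (mttf, mttr) in params.items():
        perturbed = dict(params)
        perturbed[name] = (mttf, mttr * (1 + eps))
        d_avail = availability(perturbed) - base
        ranking[name] = (d_avail / (mttr * eps)) * (mttr / base)
    return sorted(ranking.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Illustrative parameters (hours): MTTF, MTTR per component.
params = {
    "frontend":  (8760.0, 2.0),
    "storage":   (4380.0, 8.0),
    "streaming": (2920.0, 1.0),
}
for name, sens in scaled_sensitivities(params):
    print(f"{name}: scaled sensitivity {sens:.6f}")
```

The component with the largest-magnitude sensitivity is the most attractive candidate for redundancy, which is the decision the abstract's methodology supports.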
Modeling and analysis of high availability techniques in a virtualized system
Availability evaluation of a virtualized system is critical to the wide deployment of cloud computing services. Time-based and prediction-based rejuvenation of virtual machines (VMs) and virtual machine monitors (VMMs), VM failover, and live VM migration are common high-availability (HA) techniques in a virtualized system. This paper investigates the effect of combining these availability techniques on VM availability in a virtualized system where various software and hardware failures may occur. For each combination, we construct analytic models. The results show that: (1) software rejuvenation mechanisms improve VM availability; (2) prediction-based rejuvenation enhances VM availability much more than time-based VM rejuvenation when the prediction success probability is above 70%, regardless of whether failover and/or live VM migration is also deployed; (3) the failover mechanism outperforms live VM migration, although the two can work together for higher VM availability, and both can be combined with software rejuvenation mechanisms for even higher availability; and (4) the time interval setting is critical to a time-based rejuvenation mechanism. These analytic results provide guidelines for deploying HA techniques in a virtualized system and for setting their parameters.
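The kind of analytic model the abstract refers to can be sketched in miniature (this is an illustrative three-state continuous-time Markov chain, not the paper's actual models): a VMM ages from a robust state into a degraded state, may then fail, and rejuvenation returns an aged VMM to the robust state. All rates are assumptions, and rejuvenation downtime is ignored for simplicity.

```python
# Hedged sketch: a 3-state CTMC availability model of a VMM subject to
# software aging. States: robust (up), aged (up but degraded),
# failed (down). Rejuvenation moves aged -> robust; its own downtime
# is ignored here for simplicity -- an assumption.

def steady_state_availability(aging, failure, repair, rejuvenation):
    """Solve the balance equations of the 3-state CTMC analytically.
    Returns the fraction of time the VMM is up (robust or aged)."""
    # Unnormalized steady-state probabilities, relative to pi_robust = 1:
    # out of robust: aging * pi_robust = inflow (rejuv * pi_aged + repair * pi_failed)
    pi_robust = 1.0
    pi_aged = aging / (failure + rejuvenation)
    pi_failed = failure * pi_aged / repair
    total = pi_robust + pi_aged + pi_failed
    return (pi_robust + pi_aged) / total

# Illustrative rates per hour: aging every ~10 days, an aged VMM fails
# after ~2 days on average, repair takes 2 h, rejuvenation every ~12 h.
no_rejuv = steady_state_availability(1/240, 1/48, 1/2, 0.0)
with_rejuv = steady_state_availability(1/240, 1/48, 1/2, 1/12)
print(f"availability without rejuvenation: {no_rejuv:.6f}")
print(f"availability with rejuvenation:    {with_rejuv:.6f}")
```

Even this toy chain reproduces the qualitative finding that rejuvenation raises steady-state VM availability; the paper's models add failover, live migration, and prediction on top of this basic structure.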
Analysis and design of document centric workflows for automating tasks in a multi-tenant cloud archive solution
Information Lifecycle Governance (ILG) is a cross-functional business initiative intended to align the cost of information with its value to the enterprise, increase transparency and control, and reduce the risk of legal and regulatory obligations for data. This Master's thesis researches how to analyze, formalize, and optimize ILG workloads, especially in cloud environments, in order to provide a fully managed "Archive as a Service" in private and public clouds. It seeks a formal definition of the individual ILG workflows, using process management concepts together with a process engine. The main goal is to allow the definition of generic ILG tasks in a declarative way and to guarantee transactional integrity and checkpoint-restart capabilities. An end user who subscribes to a SaaS archive service in the cloud has to move data off-premise and delegate data management processes to the service provider without compromising data security and privacy. The first scenario evaluates various workload management solutions with document-centric workflows. The second scenario investigates the use case where a recurring batch load system periodically imports valuable business data into the SmartCloud Archive. The thesis also proposes an architecture for creating the batch load and disposal sweep tasks from an enterprise perspective, eliminating the administrative client for the SmartCloud Content Management System. The proposed architecture moves the data off-premise into a cloud environment, where it is thereafter managed in an automated way, making the management of the data flexible, easy, reliable, and efficient.
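The checkpoint-restart property the thesis asks of ILG tasks can be sketched generically (this is an illustrative pattern, not the thesis's actual design): a workflow of named steps persists its progress after each step, so a crashed disposal-sweep or batch-load run resumes from the last completed step instead of starting over. The step names and JSON checkpoint file are assumptions.

```python
# Hedged sketch: a declaratively defined archive task whose steps are
# checkpointed so that a crashed run restarts from the last completed
# step. Step names and the JSON checkpoint format are illustrative
# assumptions, not part of the thesis's actual architecture.

import json
import os
import tempfile

def run_workflow(steps, checkpoint_path):
    """Execute named steps in order, persisting progress after each one."""
    done = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)  # resume: load previously completed steps
    for name, action in steps:
        if name in done:
            continue  # already completed in a previous run, skip
        action()
        done.append(name)
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)  # checkpoint survives a crash
    return done

# Illustrative disposal-sweep workflow.
log = []
steps = [
    ("collect_documents", lambda: log.append("collected")),
    ("apply_retention",   lambda: log.append("retained")),
    ("dispose_expired",   lambda: log.append("disposed")),
]
ckpt = os.path.join(tempfile.mkdtemp(), "sweep.json")
print(run_workflow(ckpt_steps := steps, ckpt))
```

Re-running the workflow against the same checkpoint file performs no step twice, which is the restart guarantee the thesis requires of generic ILG tasks.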
Modeling and Evaluation of Power-Aware Software Rejuvenation in Cloud Systems
Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from such failures when the Virtual Machine Monitors (VMMs) that control the execution of Virtual Machines (VMs) age. Software rejuvenation is a proactive fault management technique that can prevent future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the timing and type of VMM rejuvenation can affect the performance, availability, and power consumption of a system. In this paper, an analytical model based on Stochastic Activity Networks is proposed for the performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different arrival probabilities for different VM request types, are investigated using the proposed model. The performance of the proposed rejuvenation scheme is compared with two baselines using diverse performance, availability, and power consumption measures defined on the system.
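One plausible reading of a two-threshold, power-aware rejuvenation policy can be sketched as a decision rule (an interpretation for illustration, not the paper's exact scheme): an aged VMM is rejuvenated eagerly once it is idle, and only forcibly, after migrating its VMs away, beyond a higher age threshold. The threshold names and values are assumptions.

```python
# Hedged sketch of a two-threshold, power-aware rejuvenation policy
# (an illustrative interpretation, not the paper's exact scheme).
# Threshold names and default values are assumptions.

def rejuvenation_action(vmm_age_hours, hosted_vms,
                        soft_threshold=168.0, hard_threshold=336.0):
    """Return the action to take for one VMM.
    - Below the soft threshold: keep running.
    - Between the thresholds: rejuvenate only if the VMM is idle
      (no VMs to disturb), which also lets it be powered down after.
    - Above the hard threshold: migrate VMs away, then rejuvenate."""
    if vmm_age_hours >= hard_threshold:
        return "migrate_then_rejuvenate" if hosted_vms else "rejuvenate"
    if vmm_age_hours >= soft_threshold and hosted_vms == 0:
        return "rejuvenate"
    return "keep_running"

print(rejuvenation_action(100.0, 3))   # young and busy: keep running
print(rejuvenation_action(200.0, 0))   # past soft threshold and idle
print(rejuvenation_action(400.0, 2))   # past hard threshold while busy
```

The two thresholds trade availability against power: the soft threshold exploits idle periods cheaply, while the hard threshold bounds how long an aging VMM may run before a forced, migration-backed rejuvenation.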