11 research outputs found

    A Novel approach for Load Balancing distribution and storage by using Cloud Computing

    Today, cloud computing is a popular technology for completing work efficiently: it provides an important platform for storing data on a pay-per-use basis, accessible to everyone over the internet. The use of this technology raises many issues, including security, failure rates and, most critically, load balancing. This paper focuses on load balancing and proposes an algorithm to address it. The idea is to minimize the extra burden on overloaded nodes by shifting or transferring load according to availability and demand. We discuss several existing algorithms and their features, and present a joint algorithm that redistributes work on demand, offering a way to relieve overburdened nodes through load balancing.
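
    A minimal sketch of the kind of threshold-based load shifting the abstract describes; the Node structure, the OVERLOAD_THRESHOLD value and the "move to the least-loaded node" rule are assumptions for illustration, not the paper's joint algorithm.

```python
# Sketch: shift tasks away from overloaded nodes until every node is below a threshold.
from dataclasses import dataclass, field

OVERLOAD_THRESHOLD = 0.8  # assumed fraction of capacity above which a node is overloaded

@dataclass
class Node:
    name: str
    capacity: float
    tasks: list = field(default_factory=list)  # each task is (task_id, load)

    @property
    def load(self) -> float:
        return sum(load for _, load in self.tasks) / self.capacity

def rebalance(nodes: list[Node]) -> None:
    """Move tasks from overloaded nodes to the currently least-loaded node."""
    for node in nodes:
        while node.load > OVERLOAD_THRESHOLD and node.tasks:
            task = node.tasks.pop()
            target = min(nodes, key=lambda n: n.load)
            if target is node:          # no better destination exists
                node.tasks.append(task)
                break
            target.tasks.append(task)   # transfer the load

if __name__ == "__main__":
    cluster = [Node("n1", 10, [("t1", 6), ("t2", 5)]), Node("n2", 10), Node("n3", 10)]
    rebalance(cluster)
    print({n.name: round(n.load, 2) for n in cluster})
```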

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and the penalties applied when QoS is not achieved. To avoid such penalties while operating the infrastructure with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure are needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate QoS demands through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with an emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases, QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and discuss the research challenges and opportunities in this emerging area. Comment: Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
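
    To make the second use case concrete, the sketch below packs VMs onto as few hosts as possible while respecting CPU and bandwidth limits; the Host/VM fields and the first-fit-decreasing heuristic are assumptions for exposition, not the paper's formulation.

```python
# Sketch: bandwidth-aware, energy-efficient VM placement via first-fit decreasing.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu: int           # remaining CPU units
    bandwidth: float   # remaining network bandwidth (Mbps)
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    cpu: int
    bandwidth: float

def place(vms: list[VM], hosts: list[Host]) -> dict[str, str]:
    """Pack VMs onto as few hosts as possible, honouring CPU and bandwidth limits."""
    placement = {}
    # Placing the largest VMs first tends to reduce the number of powered-on hosts.
    for vm in sorted(vms, key=lambda v: (v.cpu, v.bandwidth), reverse=True):
        # Prefer hosts already in use so idle hosts can stay powered off.
        for host in sorted(hosts, key=lambda h: len(h.vms) == 0):
            if host.cpu >= vm.cpu and host.bandwidth >= vm.bandwidth:
                host.cpu -= vm.cpu
                host.bandwidth -= vm.bandwidth
                host.vms.append(vm.name)
                placement[vm.name] = host.name
                break
        else:
            placement[vm.name] = "unplaced"  # no host has enough headroom
    return placement
```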

    Workflow Scheduling Techniques and Algorithms in IaaS Cloud: A Survey

    In the modern era, workflows have been adopted as a powerful and attractive paradigm for expressing and solving a variety of applications, such as scientific and data-intensive computing, and big data applications such as MapReduce and Hadoop. These complex applications are described using high-level representations in workflow models. With the emerging model of cloud computing technology, scheduling in the cloud has become an important research topic. Consequently, the workflow scheduling problem has been studied extensively over the past few years, from homogeneous clusters and grids to the most recent paradigm, cloud computing. The challenges that need to be addressed lie in task-resource mapping, QoS requirements, resource provisioning, performance fluctuation, failure handling, resource scheduling, and data storage. This work presents a complete study of resource provisioning and scheduling algorithms in the cloud environment, focusing on Infrastructure as a Service (IaaS). We provide a comprehensive understanding of existing scheduling techniques and an insight into the research challenges that constitute possible future directions for researchers.
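
    As a toy illustration of the task-resource mapping problem the survey covers, the sketch below list-schedules a workflow DAG onto IaaS VMs using an earliest-finish-time rule; the data structures and the rule itself are assumptions, not any specific surveyed algorithm.

```python
# Sketch: earliest-finish-time list scheduling of a workflow DAG onto VMs.
def schedule(dag, runtimes, vms):
    """dag: {task: [predecessors]} (assumed acyclic); runtimes: {task: {vm: seconds}};
    vms: list of VM names. Returns (assignment, finish_times)."""
    finish = {}                      # task -> finish time
    vm_free = {v: 0.0 for v in vms}  # earliest time each VM becomes available
    assignment = {}
    pending = dict(dag)
    while pending:
        # Tasks whose predecessors have all been scheduled are ready to run.
        ready = [t for t, preds in pending.items() if all(p in finish for p in preds)]
        for task in ready:
            est = max((finish[p] for p in pending[task]), default=0.0)
            # Choose the VM giving the earliest finish time for this task.
            best_vm = min(vms, key=lambda v: max(est, vm_free[v]) + runtimes[task][v])
            start = max(est, vm_free[best_vm])
            finish[task] = start + runtimes[task][best_vm]
            vm_free[best_vm] = finish[task]
            assignment[task] = best_vm
            del pending[task]
    return assignment, finish
```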

    Real-time and dynamic fault-tolerant scheduling for scientific workflows in clouds

    Cloud computing has become a popular technology for executing scientific workflows. However, with a large number of hosts and virtual machines (VMs) being deployed, cloud resource failures, such as the permanent failure of hosts (HPF), the transient failure of hosts (HTF), and the transient failure of VMs (VMTF), raise service reliability problems. Therefore, fault tolerance for time-consuming scientific workflows is essential in the cloud. However, existing fault-tolerant (FT) approaches consider only one or two of the above failure types and neglect the others, especially HTF. This paper proposes a Real-time and dynamic Fault-tolerant Scheduling (ReadyFS) algorithm for scientific workflow execution in a cloud, which guarantees deadline constraints and improves resource utilization even in the presence of any resource failure. Specifically, we first introduce two FT mechanisms, i.e., replication with delay execution (RDE) and checkpointing with delay execution (CDE), to cope with HPF and VMTF simultaneously. Additionally, rescheduling (ReSC) is devised to tackle HTF, which affects the resource availability of the entire cloud datacenter. Then, the resource adjustment (RA) strategy, including resource scaling-up (RS-Up) and resource scaling-down (RS-Down), is used to adjust resource demands and improve resource utilization dynamically. Finally, the ReadyFS algorithm is presented to schedule real-time scientific workflows by combining all the above FT mechanisms with the RA strategy. We conduct a performance evaluation with real-world scientific workflows and compare ReadyFS with five vertical comparison algorithms and three horizontal comparison algorithms. Simulation results confirm that ReadyFS is indeed able to guarantee the fault tolerance of scientific workflow execution and improve cloud resource utilization.
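
    The toy sketch below shows the basic idea behind checkpoint-based recovery from transient VM failures, in the spirit of the CDE mechanism mentioned above; the failure model, checkpoint interval and simulation loop are illustrative assumptions, not the ReadyFS algorithm itself.

```python
# Sketch: a task that rolls back to its last checkpoint after a transient VM failure.
import random

CHECKPOINT_INTERVAL = 5   # assumed simulated time units between checkpoints
FAILURE_RATE = 0.2        # assumed probability that a slice hits a transient failure

def run_with_checkpoints(task_length: int, seed: int = 0) -> int:
    """Run a task of `task_length` units; on a transient failure, resume from the
    last checkpoint. Returns total simulated time, including re-executed work."""
    rng = random.Random(seed)
    checkpoint = 0   # progress safely persisted so far
    progress = 0
    elapsed = 0
    while progress < task_length:
        slice_len = min(CHECKPOINT_INTERVAL, task_length - progress)
        elapsed += slice_len
        if rng.random() < FAILURE_RATE:
            progress = checkpoint      # transient failure: roll back to last checkpoint
        else:
            progress += slice_len
            checkpoint = progress      # checkpoint after a successful slice
    return elapsed

if __name__ == "__main__":
    print(run_with_checkpoints(task_length=20))
```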

    Resource-Constrained Scheduling of Stochastic Tasks With Unknown Probability Distribution

    This work introduces scheduling strategies to maximize the expected number of independent tasks that can be executed on a cloud platform within a given budget and under a deadline constraint. Task execution times are not known before execution; instead, the only information available to the scheduler is that they obey some (unknown) probability distribution. The scheduler needs to acquire some information before deciding on a cutting threshold: instead of allowing all tasks to run until completion, one may want to interrupt long-running tasks at some point. In addition, the cutting threshold may be reevaluated as new information is acquired while the execution progresses. This work presents several strategies to determine a good cutting threshold, and to decide when to re-evaluate it. In particular, we use the Kaplan-Meier estimator to account for tasks that are still running when making a decision. The efficiency of our strategies is assessed through an extensive set of simulations with various budget and deadline values, ranging over 14 probability distributions.
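
    A hedged sketch of how a Kaplan-Meier estimate over completed and still-running (right-censored) tasks can suggest a cutting threshold; the quantile rule below is an illustrative choice, not one of the paper's strategies.

```python
# Sketch: Kaplan-Meier survival estimate of task execution times, then a threshold
# chosen as the smallest time by which an estimated fraction of tasks finish.
from collections import Counter

def kaplan_meier(durations, completed):
    """durations: observed times; completed[i] is True if task i finished,
    False if it was still running when observed (right-censored).
    Returns the stepwise curve [(t, S(t))]."""
    events = Counter(t for t, done in zip(durations, completed) if done)
    survival, curve = 1.0, []
    for t in sorted(events):
        at_risk = sum(1 for d in durations if d >= t)   # tasks still running just before t
        survival *= 1.0 - events[t] / at_risk
        curve.append((t, survival))
    return curve

def cutting_threshold(durations, completed, quantile=0.9):
    """Smallest time by which an estimated `quantile` fraction of tasks complete."""
    for t, s in kaplan_meier(durations, completed):
        if 1.0 - s >= quantile:
            return t
    return float("inf")   # not enough evidence yet: let tasks keep running

if __name__ == "__main__":
    obs = [3, 5, 5, 7, 8, 10, 12, 12]
    done = [True, True, True, True, False, True, False, True]
    print(cutting_threshold(obs, done, quantile=0.75))
```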

    Meeting deadlines of scientific workflows in public clouds with tasks replication

    The elasticity of cloud infrastructures makes them a suitable platform for the execution of deadline-constrained workflow applications, because the resources available to an application can be dynamically increased to enable application speedup. Existing research on the execution of scientific workflows in clouds either tries to minimize the workflow execution time, ignoring deadlines and budgets, or focuses on minimizing cost while trying to meet the application deadline. However, such approaches implement limited contingency strategies to correct delays caused by underestimation of task execution times or by fluctuations in the delivered performance of leased public cloud resources. In order to mitigate the effects of resource performance variation on the soft deadlines of workflow applications, we propose an algorithm that uses the idle time of provisioned resources and the budget surplus to replicate tasks. Simulation experiments with four well-known scientific workflows show that the proposed algorithm increases the likelihood of deadlines being met and reduces the total execution time of applications as the budget available for replication increases.
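
    A minimal sketch of replicating deadline-critical tasks into idle gaps of already-leased VMs, falling back on the budget surplus when no gap fits; the slack-based priority, cost model and data layout are assumptions for exposition, not the paper's algorithm.

```python
# Sketch: plan task replicas using idle time first, then budget surplus.
def plan_replicas(tasks, idle_slots, budget_surplus, cost_per_second):
    """tasks: [(name, runtime, slack)] where low slack means the task is most likely
    to push the workflow past its deadline; idle_slots: [(vm, free_seconds)].
    Replicas that fit an existing idle gap are assumed free; otherwise the surplus
    pays for extending a lease. Returns a list of (task, vm) replica assignments."""
    replicas = []
    for name, runtime, _slack in sorted(tasks, key=lambda t: t[2]):  # lowest slack first
        # First try to reuse idle time on an already-leased VM.
        for i, (vm, free) in enumerate(idle_slots):
            if free >= runtime:
                replicas.append((name, vm))
                idle_slots[i] = (vm, free - runtime)   # consume the idle gap
                break
        else:
            # No gap is large enough: extend a lease if the surplus allows it.
            extra_cost = runtime * cost_per_second
            if budget_surplus >= extra_cost and idle_slots:
                vm, _ = max(idle_slots, key=lambda s: s[1])
                replicas.append((name, vm))
                budget_surplus -= extra_cost
    return replicas
```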

    Partitioning workflow applications over federated clouds to meet non-functional requirements

    PhD Thesis. With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all this variety and uncertainty, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. And the decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each one of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which can yield the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised. To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
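
    A minimal sketch of the partitioning idea: each workflow component is assigned to the cheapest cloud whose security and reliability levels satisfy that component's requirements. The attribute names and the greedy per-component rule are assumptions; the thesis develops richer models, a broker and deployment planning algorithms.

```python
# Sketch: place each workflow component on the cheapest cloud meeting its requirements.
def partition(components, clouds):
    """components: [{'name', 'min_security', 'min_reliability', 'cpu_hours'}]
    clouds: [{'name', 'security', 'reliability', 'price_per_cpu_hour'}]
    Returns {component_name: cloud_name}; raises if a component cannot be placed."""
    placement = {}
    for comp in components:
        feasible = [c for c in clouds
                    if c["security"] >= comp["min_security"]
                    and c["reliability"] >= comp["min_reliability"]]
        if not feasible:
            raise ValueError(f"no cloud satisfies the requirements of {comp['name']}")
        cheapest = min(feasible, key=lambda c: c["price_per_cpu_hour"] * comp["cpu_hours"])
        placement[comp["name"]] = cheapest["name"]
    return placement
```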