368 research outputs found

    Partitioning workflow applications over federated clouds to meet non-functional requirements

    PhD Thesis. With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. The decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which yields the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised. To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
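
    To make the decision problem concrete, a minimal sketch of the kind of partitioning described above (with hypothetical cloud names, clearance levels and prices, and a brute-force search rather than the thesis's actual planning algorithm) might look like this:

    # Toy illustration of the partitioning problem described above
    # (hypothetical model, not the thesis's actual algorithm):
    # assign each workflow component to a cloud whose security clearance
    # meets the component's requirement, minimising the total price.
    from itertools import product

    clouds = {                      # clearance level, price per component
        "private": {"clearance": 3, "price": 10.0},
        "public_a": {"clearance": 1, "price": 2.0},
        "public_b": {"clearance": 2, "price": 4.0},
    }
    components = {"ingest": 1, "anonymise": 3, "analyse": 2}  # required security level

    best_cost, best_plan = float("inf"), None
    for assignment in product(clouds, repeat=len(components)):
        plan = dict(zip(components, assignment))
        if all(clouds[c]["clearance"] >= components[t] for t, c in plan.items()):
            cost = sum(clouds[c]["price"] for c in plan.values())
            if cost < best_cost:
                best_cost, best_plan = cost, plan

    print(best_plan, best_cost)  # e.g. only 'anonymise' forced onto the private cloud

    Exhaustive search like this is only workable for a handful of components; the thesis's contribution is precisely a planning algorithm and framework that scale to large workflows.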

    Dynamically Partitioning Workflow over Federated Clouds For Optimising the Monetary Cost and Handling Run-Time Failures

    Several real-world problems in domains such as healthcare, large-scale scientific simulation, and manufacturing are organised as workflow applications. Efficiently managing workflow applications on cloud computing data-centres is challenging for two reasons: (i) they need to perform computation over sensitive data (e.g. healthcare workflows), which introduces additional security and legal risks, especially in public cloud environments; and (ii) the dynamism of the cloud environment can lead to several run-time problems, such as data loss and abnormal termination of workflow tasks due to failures of computing, storage, and network services. To tackle the above challenges, this paper proposes a novel workflow management framework called DoFCF (Deploy on Federated Cloud Framework) that can dynamically partition scientific workflows across federated cloud (public/private) data-centres to minimise the financial cost and adhere to security requirements, while gracefully handling run-time failures. The framework is validated in a cloud simulation tool (CloudSim) as well as in a realistic workflow-based cloud platform (e-Science Central). The results show that our approach is practical, meets users' security requirements, reduces overall cost, and dynamically adapts to run-time failures.
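
    The dynamic re-partitioning idea can be illustrated with a small sketch (hypothetical helper, not DoFCF's actual logic): when a cloud fails at run time, the tasks placed on it are moved to the cheapest surviving cloud that still satisfies their security requirement.

    # Minimal re-partitioning sketch on cloud failure (illustrative only).
    def repartition(plan, clouds, requirements, failed):
        """plan: task -> cloud; clouds: cloud -> {'clearance', 'price'};
        requirements: task -> required security level; failed: name of the failed cloud."""
        surviving = {c: p for c, p in clouds.items() if c != failed}
        new_plan = {}
        for task, cloud in plan.items():
            if cloud != failed:
                new_plan[task] = cloud          # unaffected tasks stay where they are
                continue
            candidates = [c for c, p in surviving.items()
                          if p["clearance"] >= requirements[task]]
            if not candidates:
                raise RuntimeError(f"no feasible cloud left for task {task!r}")
            # cheapest feasible replacement cloud
            new_plan[task] = min(candidates, key=lambda c: surviving[c]["price"])
        return new_plan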

    Quantitative Analysis of Opacity in Cloud Computing Systems

    Federated cloud systems increase the reliability and reduce the cost of computational support. The resulting combination of secure private clouds and less secure public clouds, together with the fact that resources need to be located within different clouds, strongly affects the information flow security of the entire system. In this paper, the clouds, as well as the entities of a federated cloud system, are assigned security levels, and a probabilistic flow-sensitive security model for a federated cloud system is proposed. The notion of opacity, which captures the security of information flow, is then introduced for cloud computing systems, and different variants of quantitative analysis of opacity are presented. As a result, one can track the information flow in a cloud system and analyze the impact of different resource allocation strategies by quantifying the corresponding opacity characteristics.
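
    As a rough illustration of what a quantitative opacity measure can look like (a simplified stand-in, not the paper's exact definitions), one can compute the probability mass of secret runs whose observation is not shared by any non-secret run, i.e. runs that reveal the secret to an observer:

    # Simplified leakage measure: probability that a run with the secret property
    # produces an observation no non-secret run can produce (illustrative only).
    def leakage(runs):
        """runs: list of (probability, observation, is_secret) triples."""
        safe_observations = {obs for p, obs, secret in runs if not secret}
        return sum(p for p, obs, secret in runs
                   if secret and obs not in safe_observations)

    runs = [
        (0.4, "public->public", False),
        (0.3, "public->private", True),   # observation also produced by a non-secret run
        (0.2, "public->private", False),
        (0.1, "private->private", True),  # observation unique to a secret run: leaks
    ]
    print(leakage(runs))  # 0.1 -> the system is not fully opaque

    A system is opaque under this simplified measure when the leakage is zero; comparing the value across resource allocation strategies is one way to rank them, in the spirit of the analysis described above.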

    Budget-aware scheduling algorithm for scientific workflow applications across multiple clouds: A Mathematical Optimization-Based Approach

    Scientific workflows have become a prevailing means of achieving significant scientific advances at an ever-increasing rate. Scheduling mechanisms and approaches are vital to automating these large-scale scientific workflows efficiently. With the advent of cloud computing, and its easy availability and low cost of use, more attention has been paid to the execution and scheduling of scientific workflows in this new environment. For scheduling large-scale workflows, a multi-cloud environment typically offers a wider range of computing resources than a single cloud provider, and both makespan and cost can be reduced if those resources are used optimally. Accordingly, this thesis addresses the problem of scheduling scientific workflows in a multi-cloud environment under budget constraints so as to minimise the makespan. It also seeks to minimize costs, including fees for running VMs and data transfer, minimize the data transfer time, and fulfill budget and resource constraints in the multi-cloud scenario. To this end, we propose Mixed-Integer Linear Programming (MILP) models that can be solved in a reasonable time by available solvers. We divide the workflow tasks into small segments, distribute them among multi-vCPU VMs, and formulate the problem as a mathematical program in which the objective and the real, physical constraints are expressed with exact mathematical functions. We analyze how the optimal makespan behaves under variations in budget, workflow size, and segment size. The evaluation results show that the proposed approach meets the stated objectives.
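
    A heavily simplified sketch of such a MILP (written here with the open-source PuLP modeller, which is an assumption; the thesis does not tie itself to a particular solver interface, and real models must also encode task dependencies and data transfer) assigns task segments to VMs so that total cost stays within the budget while the makespan, approximated as the largest per-VM load, is minimised:

    # Toy budget-constrained assignment model (illustrative, ignores precedence constraints).
    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

    tasks = {"t1": 4, "t2": 6, "t3": 3}          # runtime (hours) of each segment
    vms = {"vm_small": 1.0, "vm_large": 2.5}     # price per hour (hypothetical values)
    budget = 25.0

    prob = LpProblem("budget_aware_scheduling", LpMinimize)
    x = LpVariable.dicts("assign", (tasks, vms), cat=LpBinary)
    makespan = LpVariable("makespan", lowBound=0)

    prob += makespan                                            # objective: minimise makespan
    for t in tasks:                                             # each segment placed exactly once
        prob += lpSum(x[t][v] for v in vms) == 1
    for v in vms:                                               # makespan bounds every VM's load
        prob += lpSum(tasks[t] * x[t][v] for t in tasks) <= makespan
    prob += lpSum(tasks[t] * vms[v] * x[t][v]                   # total cost within budget
                  for t in tasks for v in vms) <= budget

    prob.solve()
    print({t: next(v for v in vms if x[t][v].value() > 0.5) for t in tasks},
          makespan.value())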

    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science
