
    Optimum Allocation of Distributed Service Workflows with Probabilistic Real-Time Guarantees

    This paper addresses the problem of optimum allocation of distributed real-time workflows with probabilistic service guarantees over a set of physical resources. The discussion focuses on how such a problem may be mathematically formalized, in terms of both the constraints and the objective function to be optimized, which also accounts for possible business rules regulating the deployment of the workflows. The presented formal problem constitutes a probabilistic admission control test that a provider may run to decide whether it is worthwhile to admit new workflows into the system, and what the optimum allocation of the workflows to the available resources is. Various options are presented, which may be plugged into the formal problem description depending on the specific needs of individual workflows. The presented problem has been implemented using GAMS and tested under various solvers. An illustrative numerical example and an analysis of the results of the implemented model under realistic settings are presented.
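
    A minimal sketch of how such an admission-control test can be phrased as a mixed-integer program, here in Python with PuLP rather than GAMS; the quantile inflation factor is a crude stand-in for the probabilistic guarantee, and all names and numbers are illustrative, not the paper's formulation:

        # Toy admission-control MIP (illustrative; not the paper's GAMS model).
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        workflows = {"w1": {"demand": 2.0, "revenue": 10},   # mean CPU demand, payoff
                     "w2": {"demand": 3.5, "revenue": 14}}
        resources = {"r1": 4.0, "r2": 4.0}                   # CPU capacities
        quantile = 1.2        # inflate mean demand toward a high percentile

        prob = LpProblem("admission_control", LpMaximize)
        x = LpVariable.dicts("x", [(w, r) for w in workflows for r in resources],
                             cat=LpBinary)                   # placement variables
        admit = LpVariable.dicts("admit", workflows, cat=LpBinary)

        # Objective: total revenue from admitted workflows.
        prob += lpSum(workflows[w]["revenue"] * admit[w] for w in workflows)
        # An admitted workflow is placed on exactly one resource.
        for w in workflows:
            prob += lpSum(x[(w, r)] for r in resources) == admit[w]
        # Capacity: quantile-inflated demands must fit on each resource.
        for r, cap in resources.items():
            prob += lpSum(workflows[w]["demand"] * quantile * x[(w, r)]
                          for w in workflows) <= cap

        prob.solve()
        for w in workflows:
            print(w, "admitted" if admit[w].value() == 1 else "rejected")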

    Advance Reservations for Distributed Real-Time Workflows with Probabilistic Service Guarantees

    This paper addresses the problem of optimum allocation of distributed real-time workflows with probabilistic service guarantees over a Grid of physical resources made available by a provider. The discussion focuses on how such a problem may be mathematically formalised, in terms of both the constraints and the objective function to be optimised, which also accounts for possible business rules regulating the deployment of the workflows. The presented formal problem constitutes a probabilistic admission control test that a provider may run to decide whether it is worthwhile to admit new workflows into the system, and what the optimum allocation of the workflow to the available resources is. Various options are presented which may be plugged into the formal problem description, depending on the specific needs of individual workflows.

    Data centre optimisation enhanced by software defined networking

    Contemporary Cloud Computing infrastructures are being challenged by an increasing demand for evolved cloud services characterised by heterogeneous performance requirements, including real-time, data-intensive, and highly dynamic workloads. The classical way to deal with dynamicity is to scale computing and network resources horizontally. However, these techniques must be coupled effectively with advanced routing and switching in a multi-path environment, combined with a high degree of flexibility to support dynamic adaptation and live-migration of virtual machines (VMs). We propose a management strategy to jointly optimise computing and networking resources in cloud infrastructures, where Software Defined Networking (SDN) plays a key enabling role.
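
    As a toy illustration of the joint view (invented weights and numbers, not the proposed strategy itself): pick the host/path pair that minimizes a weighted sum of compute load and network latency, rather than optimizing either dimension alone.

        # Toy joint compute/network selection (illustrative data only).
        hosts = {"h1": 0.6, "h2": 0.3}                  # current CPU utilisation
        paths = {("h1", "p1"): 4, ("h1", "p2"): 9,      # latency (ms) per host/path
                 ("h2", "p1"): 7, ("h2", "p2"): 5}

        alpha, beta = 1.0, 0.1                          # trade-off weights
        best = min(paths, key=lambda hp: alpha * hosts[hp[0]] + beta * paths[hp])
        print("place VM on", best[0], "and route via", best[1])   # -> h2, p2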

    Elastic admission control for federated cloud services

    This paper presents a technique for admission control of a set of horizontally scalable services, and their optimal placement, into a federated Cloud environment. The proposed model focuses on hosting elastic services whose resource requirements may dynamically grow and shrink, depending on the dynamically varying number of users and patterns of requests. Requests may also be partially accommodated by federated external providers, if needed or more convenient. In finding the optimum allocation, the presented mechanism uses a probabilistic optimization model that takes into account eco-efficiency and cost, as well as any affinity and anti-affinity rules in place for the components that comprise the services. In addition to modelling and solving the exact optimization problem, we also introduce a heuristic solver with reduced complexity and solving time. We show evaluation results for the proposed technique under various scenarios.
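
    A rough sketch of the anti-affinity idea inside a greedy placement loop (hypothetical names and capacities; the paper's heuristic and probabilistic model are more involved):

        # Greedy placement honouring anti-affinity groups (illustrative only).
        def place(components, hosts, anti_affinity):
            """components: {name: cpu demand}; hosts: {name: free cpu};
            anti_affinity: groups whose members must use distinct hosts."""
            placement = {}
            for comp, demand in sorted(components.items(), key=lambda kv: -kv[1]):
                for host in sorted(hosts, key=lambda h: -hosts[h]):
                    clash = any(placement.get(other) == host
                                for grp in anti_affinity if comp in grp
                                for other in grp if other != comp)
                    if hosts[host] >= demand and not clash:
                        placement[comp] = host
                        hosts[host] -= demand
                        break
                else:
                    return None   # no fit: candidate for a federated provider
            return placement

        print(place({"web": 2, "db": 3, "db_replica": 3}, {"h1": 6, "h2": 4},
                    [frozenset({"db", "db_replica"})]))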

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications would prefer only those CSPs that follow HIPAA (Health Insurance Portability and Accountability Act) guidelines in implementing their data services, and might want services from other CSPs for economic viability. With the generation of more and more data, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in deciding upon CSP resource selection, scientific workflows often process proprietary/confidential data, which introduces additional security constraints. Thus, users have to make an informed decision on the selection of resources best suited to their applications, trading off between the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution might not be economical for all users, which has led to the development of new computing paradigms such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements. For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users, such as user requirement collection, users' resource preferences, resource brokering, and task scheduling, are tackled. For the collection of user requirements, a novel approach through an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise for guiding the resource selection for their applications. The results showed improved performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a Naive Bayes classifier, which captures the true QoS of cloud resources for guiding template solution creation. The proposed solution was able to improve the time to execute for as much as 96 percent of the largest applications.
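
    The QoS-classification step might look roughly like the following; the features, labels, and data here are invented for illustration, and the dissertation's actual model and feature set are its own:

        # Gaussian Naive Bayes labelling a resource's likely QoS tier
        # (hypothetical features/data, shown only to illustrate the idea).
        from sklearn.naive_bayes import GaussianNB
        import numpy as np

        # Columns: mean latency (ms), throughput (MB/s), failure rate.
        X = np.array([[20, 90, 0.01], [180, 30, 0.08],
                      [35, 75, 0.02], [220, 20, 0.12]])
        y = np.array(["high", "low", "high", "low"])     # observed QoS tier

        model = GaussianNB().fit(X, y)
        print(model.predict([[40, 70, 0.03]]))           # expect "high"
        print(model.predict_proba([[150, 40, 0.07]]))    # tier probabilities
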
As discussed above, to fulfil the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results have shown improved execution times for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources. Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources in terms of their availability and their flexibility towards implementing security policies. This characterization of volunteered resources facilitates efficient resource allocation and workflow task scheduling, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
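
    As a loose analogue of the resource-characterization idea (not the dissertation's RL method), an epsilon-greedy rule can prefer volunteer nodes with high observed availability while still exploring; all names and numbers below are hypothetical:

        # Toy epsilon-greedy selection over volunteer nodes (illustrative).
        import random

        def pick_node(stats, eps=0.1):
            """stats: {node: [successes, attempts]} from past task runs."""
            if random.random() < eps:
                return random.choice(list(stats))          # explore
            return max(stats, key=lambda n: stats[n][0] / max(stats[n][1], 1))

        stats = {"vol1": [8, 10], "vol2": [3, 10], "vol3": [0, 0]}
        node = pick_node(stats)
        succeeded = node != "vol2"                         # pretend vol2 is flaky
        stats[node][0] += int(succeeded); stats[node][1] += 1
        print(node, stats)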

    End-to-end service quality for cloud applications

    This paper aims to highlight the importance of End-to-End (E2E) service quality for cloud scenarios, with a focus on telecom carrier-grade services. In multi-tenant, distributed, and virtualized cloud infrastructures, enhanced resource sharing raises issues of performance stability and reliability. Moreover, the heterogeneity of the business entities responsible for cloud service delivery threatens the possibility of offering precise E2E service levels, and setting up proper Service-Level Agreements (SLAs) among the involved players may become overly challenging. However, these problems may be mitigated by a thoughtful intervention of standardization. The paper reviews some of the most important efforts in research and industry to tackle E2E service quality, and concludes with recommendations for the additional research and/or standardization effort required to deploy, on cloud platforms, mission-critical or interactive real-time services with high demands on service quality, reliability, and predictability.

    Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms

    Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs and environmental impact. Therefore, cloud providers must optimize the usage of physical resources through careful allocation of VMs to hosts, continuously balancing the conflicting requirements of performance and operational cost. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the problem models used. This paper surveys the problem formulations and optimization algorithms in use, highlighting their strengths and limitations and pointing out areas that need further research.
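
    Many of the surveyed problem models reduce to variants of bin packing. As a point of reference (a common baseline in this literature, not any specific surveyed algorithm), a single-dimension first-fit-decreasing placement looks like this, assuming every VM fits on an empty host:

        # First-fit-decreasing VM placement over one CPU dimension.
        def first_fit_decreasing(vm_demands, host_capacity):
            """vm_demands: {vm: cpu}; assumes every demand <= host_capacity."""
            hosts = []                        # remaining capacity per open host
            placement = {}
            for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
                for i, free in enumerate(hosts):
                    if free >= demand:        # reuse the first host that fits
                        hosts[i] -= demand
                        placement[vm] = i
                        break
                else:
                    hosts.append(host_capacity - demand)   # open a new host
                    placement[vm] = len(hosts) - 1
            return placement, len(hosts)      # fewer hosts ~ lower cost/energy

        print(first_fit_decreasing({"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2}, 8))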

    Run-time Support for Real-Time Multimedia in the Cloud

    REACTION 2013. 2nd International Workshop on Real-time and distributed computing in emerging applications. December 3rd, 2013, Vancouver, Canada.
    This paper summarizes key research findings in the area of real-time performance and predictability of multimedia applications in cloud infrastructures, namely: outcomes of the IRMOS European Project, addressing predictability of standard virtualized infrastructures; Osprey, an Operating System with a novel design suitable for a multitude of heterogeneous workloads including real-time software; and MediaCloud, a novel run-time architecture for offering on-demand multimedia processing facilities with unprecedented dynamism and flexibility in resource management. The paper highlights key research challenges addressed by these projects and briefly presents additional questions lying ahead in this area.

    Multi-Objective Scientific-Workflow Scheduling With Data Movement Awareness in Cloud.

    Because it serves several purposes simultaneously, running scientific workflows on dynamic environments such as cloud computing has become a multi-objective scheduling problem. Among these purposes, cost and makespan are probably the two most fundamental objectives. Another critical factor in a large-scale scientific workflow is the tremendous amount of data moved during execution. Therefore, this work also includes data movement as an additional objective, as it has a major impact on network utilization and on the energy consumption of network equipment in the cloud data center. Considering these three objectives, this work proposes a scheduling framework that combines a new node clustering technique for the Directed Acyclic Graph (DAG) model, known as Multilevel Dependent Node Clustering (MDNC), with the multi-objective optimizer Extreme Nondominated Sorting Genetic Algorithm-III (E-NSGA-III), a recent extension of the Nondominated Sorting Genetic Algorithm (NSGA-III). Five well-known scientific workflows, CyberShake, Epigenomics, LIGO, Montage, and SIPHT, are selected as testbeds, while the commonly used hypervolume is chosen as the performance metric. MDNC is also experimented with both NSGA-III and E-NSGA-III. A comparison among three approaches, E-NSGA-III alone, E-NSGA-III with peer-to-peer clustering, and E-NSGA-III with MDNC, is carried out. The superiority of the proposed framework among them and its limitations are discussed.
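
    As a much-simplified relative of dependency-aware clustering (illustrative only; this is not the paper's MDNC), DAG tasks can be grouped by their longest-path depth from the entry tasks:

        # Group DAG tasks by topological depth (toy sketch, not MDNC).
        from collections import defaultdict

        def level_clusters(edges, nodes):
            """edges: (u, v) pairs meaning u must finish before v starts."""
            preds = defaultdict(set)
            for u, v in edges:
                preds[v].add(u)
            depth = {}
            def d(n):                     # longest path from any entry task
                if n not in depth:
                    depth[n] = 1 + max((d(p) for p in preds[n]), default=-1)
                return depth[n]
            clusters = defaultdict(list)
            for n in nodes:
                clusters[d(n)].append(n)
            return dict(clusters)

        # Toy fragment: t1 and t2 feed t3; t3 feeds t4 and t5.
        print(level_clusters([("t1", "t3"), ("t2", "t3"), ("t3", "t4"), ("t3", "t5")],
                             ["t1", "t2", "t3", "t4", "t5"]))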

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Decision-making means choosing among several alternatives, driven by the desire to make the better choice; an objective function or performance index quantifies the goodness of each alternative, and the theory and methods of optimization are concerned with picking the best one. Optimization methods come in two types: deterministic and stochastic. Deterministic methods are the traditional approach and work well for small, linear problems, but they struggle to address most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle such challenges and are more common nowadays. This study proposes two robust, stochastic, swarm-based metaheuristic optimization methods. Both are hybrid algorithms, formulated by combining the Particle Swarm Optimization and Salp Swarm algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments, as they occupy the fog's resources and keep them busy. Fog environments therefore generally have fewer resources available during such attacks, which affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss deadlines and the number of tasks offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm as a solution to the workflow scheduling issue across multiple fog computing locations. The proposed algorithm combines the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov chain schemes are used: one calculates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts their influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of tasks offloaded to cloud data centers, and it can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing, as energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions in fog environments. Efficient scheduling methods can mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors.
The experimental findings demonstrate that the proposed method, combined with the DVFS technique, yields improved outcomes, including reduced energy consumption. This approach thus emerges as a more environmentally friendly and sustainable solution for fog computing environments.
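
    A hedged sketch of the DVFS reasoning, using the standard cubic power model rather than the dissertation's exact formulation: under P ∝ f³ and execution time ∝ work/f, energy for a fixed amount of work scales with f², so the lowest frequency that still meets the deadline minimizes energy. All numbers below are illustrative.

        # Pick the lowest CPU frequency meeting a task deadline (toy model).
        def dvfs_choice(work_cycles, deadline_s, freq_levels_hz):
            for f in sorted(freq_levels_hz):       # try slow (frugal) first
                t = work_cycles / f                # execution time at f
                if t <= deadline_s:
                    energy = (f ** 3) * t          # P ~ f^3  =>  E ~ f^2 * work
                    return {"freq_hz": f, "time_s": t, "energy_rel": energy}
            return None                            # infeasible at every level

        print(dvfs_choice(2e9, 1.5, [1.0e9, 1.6e9, 2.4e9]))   # picks 1.6 GHz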