
    Mapping Heavy Communication SLA-based Workflows onto Grid Resources with Parallel Processing Technology

    Service Level Agreements (SLAs) are currently one of the major research topics in Grid Computing. Among the many system components for supporting SLA-aware Grid jobs, the SLA mapping module holds an important position, and the capability of the mapping module depends on the runtime of the mapping algorithm. With the previously proposed mapping algorithm, the mapping module may become the bottleneck of the system if many requests arrive within a short period of time. This paper presents a parallel mapping algorithm that maps heavy communication SLA-based workflows onto Grid resources and copes with this problem. Performance measurements deliver evaluation results showing the quality of the method.
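
    A minimal sketch of the parallelization idea only: running the (potentially expensive) SLA mapping step for many incoming requests in a pool of worker processes, so that bursts of requests do not queue behind a single sequential mapper. The map_workflow placeholder and the request format are illustrative assumptions, not the algorithm proposed in the paper.

    from multiprocessing import Pool

    def map_workflow(request_id):
        """Placeholder for one expensive mapping computation per SLA request."""
        best_cost, best_candidate = float("inf"), None
        for candidate in range(50_000):          # stand-in for a search over candidate mappings
            cost = (candidate * 2654435761 + request_id) % 10_000
            if cost < best_cost:
                best_cost, best_candidate = cost, candidate
        return request_id, best_candidate, best_cost

    if __name__ == "__main__":
        requests = list(range(200))              # a burst of 200 SLA requests
        with Pool() as pool:                     # spread the mapping work over the CPU cores
            results = pool.map(map_workflow, requests)
        print(results[:3])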

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
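
    To make the surveyed EC trend concrete, here is a toy genetic-algorithm sketch for cloud task scheduling: a chromosome assigns each task to a VM, and fitness is the resulting makespan. The task lengths, VM speeds, and GA parameters are illustrative assumptions, not drawn from the survey.

    import random

    TASKS = [random.randint(10, 100) for _ in range(20)]   # task lengths (million instructions)
    VM_SPEEDS = [10, 20, 40]                               # VM speeds (MIPS)

    def makespan(chromosome):
        loads = [0.0] * len(VM_SPEEDS)
        for task_len, vm in zip(TASKS, chromosome):
            loads[vm] += task_len / VM_SPEEDS[vm]
        return max(loads)

    def evolve(pop_size=30, generations=100, mutation_rate=0.1):
        population = [[random.randrange(len(VM_SPEEDS)) for _ in TASKS]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=makespan)                  # lower makespan = fitter
            parents = population[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TASKS))
                child = a[:cut] + b[cut:]                  # one-point crossover
                if random.random() < mutation_rate:        # random reassignment mutation
                    child[random.randrange(len(TASKS))] = random.randrange(len(VM_SPEEDS))
                children.append(child)
            population = parents + children
        return min(population, key=makespan)

    best = evolve()
    print("best makespan:", round(makespan(best), 2))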

    Mapping Light Communication SLA-Based Workflows onto Grid Resources with Parallel Processing Technology

    Service Level Agreements (SLAs) are currently one of the major research topics in Grid Computing. Among the many system components for supporting SLA-aware Grid-based workflows, the SLA mapping module holds an important position. Mapping light communication workflows is one main part of the mapping module. With the previously proposed mapping algorithm, the mapping module may become the bottleneck of the system when many requests arrive within a short period of time. This paper presents a parallel mapping algorithm for light communication SLA-based workflows, which copes with this problem. Performance measurements deliver evaluation results on the quality of the method.

    Concepts and algorithms of mapping Grid-based workflow to resources within an SLA context

    With the popularity of Grid-based workflows, ensuring the Quality of Service (QoS) of a workflow through Service Level Agreements (SLAs) is an emerging trend in the business Grid. Among the many system components supporting SLA-aware Grid-based workflows, the SLA mapping mechanism holds an important position, as it is responsible for assigning sub-jobs of the workflow to Grid resources in a way that meets the user's deadline and minimizes costs. With many different kinds of sub-jobs and resources, mapping a Grid-based workflow within an SLA context poses an unfamiliar and difficult problem. To address this problem, this chapter describes related concepts and mapping algorithms.
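
    A hedged sketch of the mapping problem described above: assign each sub-job of a sequential workflow to a Grid resource so that the user's deadline is met at minimal cost. The sub-job workloads, resource speeds and prices, and the even split of the deadline across sub-jobs are illustrative assumptions, not the chapter's algorithms.

    def map_with_deadline(sub_jobs, resources, deadline_hours):
        slice_per_job = deadline_hours / len(sub_jobs)    # naive even deadline split
        mapping, total_cost = {}, 0.0
        for job in sub_jobs:
            candidates = []
            for res in resources:
                runtime = job["workload"] / res["speed"]  # estimated hours on this resource
                if runtime <= slice_per_job:
                    candidates.append((runtime * res["price_per_hour"], res["name"]))
            if not candidates:
                raise ValueError(f"deadline cannot be met for sub-job {job['name']}")
            cost, name = min(candidates)                  # cheapest feasible resource
            mapping[job["name"]] = name
            total_cost += cost
        return mapping, total_cost

    sub_jobs = [{"name": "align", "workload": 40}, {"name": "simulate", "workload": 120},
                {"name": "render", "workload": 60}]
    resources = [{"name": "cluster-fast", "speed": 100, "price_per_hour": 8.0},
                 {"name": "cluster-slow", "speed": 25, "price_per_hour": 2.0}]
    print(map_with_deadline(sub_jobs, resources, deadline_hours=6.0))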

    Model-driven Scheduling for Distributed Stream Processing Systems

    Distributed Stream Processing frameworks are commonly used with the evolution of the Internet of Things (IoT). These frameworks are designed to adapt to dynamic input message rates by scaling in/out. Apache Storm, originally developed by Twitter, is a widely used stream processing engine, while others include Flink and Spark Streaming. To run streaming applications successfully, the optimal resource requirement must be known, since over-estimation of resources adds extra cost, so a strategy is needed to determine the optimal resource requirement for a given streaming application. In this article, we propose a model-driven approach for scheduling streaming applications that effectively utilizes a priori knowledge of the applications to provide predictable scheduling behavior. Specifically, we use application performance models to offer reliable estimates of the resource allocation required. Further, this intuition also drives resource mapping and helps narrow the gap between estimated and actual dataflow performance and resource utilization. Together, this model-driven scheduling approach gives predictable application performance and resource utilization behavior for executing a given DSPS application at a target input stream rate on distributed resources.
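
    A minimal sketch of the model-driven idea: use a per-task performance model (the peak throughput one task instance sustains, measured offline) to predict how many instances of each dataflow task are needed for a target input rate, assuming every task sees the full input stream. The task names and measured rates are hypothetical, not the paper's benchmarks or model.

    import math

    # peak messages/second one instance of each task can sustain (assumed, from profiling)
    PEAK_RATE = {"parse": 8000, "enrich": 2500, "aggregate": 5000, "sink": 12000}

    def required_instances(target_rate):
        """Instances per task so that every task keeps up with the target input rate."""
        return {task: math.ceil(target_rate / rate) for task, rate in PEAK_RATE.items()}

    plan = required_instances(target_rate=20000)
    print(plan)                                  # e.g. {'parse': 3, 'enrich': 8, ...}
    print("total worker slots:", sum(plan.values()))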

    Joint Elastic Cloud and Virtual Network Framework for Application Performance-cost Optimization

    Cloud computing infrastructures provide resources on demand for tackling the needs of large-scale distributed applications. To adapt to the diversity of cloud infrastructures and usage, new operation tools and models are needed. Estimating the amount of resources consumed by each application in particular is a difficult problem, both for end users who aim at minimizing their costs and for infrastructure providers who aim at controlling their resource allocation. Furthermore, network provisioning is generally not controlled on clouds. This paper describes a framework automating cloud resource allocation, deployment, and application execution control. It is based on a cost estimation model that takes into account both the virtual network and the nodes managed by the cloud. The flexible provisioning of network resources permits the optimization of application performance and the reduction of infrastructure cost. Four resource allocation strategies relying on the expertise that can be captured in workflow-based applications are considered. The results of these strategies are virtual infrastructure descriptions that are interpreted by the HIPerNet engine, which is responsible for allocating, reserving, and configuring physical resources. The evaluation of this framework was carried out on the Aladdin/Grid'5000 testbed using a real application from the area of medical image analysis.
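
    A hedged sketch of a joint node-plus-network cost estimate in the spirit described above: the infrastructure cost of one run combines compute-hour charges for the reserved virtual nodes with charges for the reserved virtual-network bandwidth. All prices, units, and the linear cost form are illustrative assumptions, not the paper's actual cost model.

    NODE_PRICE_PER_HOUR = 0.40    # assumed price of one virtual node ($/hour)
    BANDWIDTH_PRICE = 0.05        # assumed price of reserved bandwidth ($ per (Mbit/s * hour))

    def infrastructure_cost(num_nodes, runtime_hours, reserved_mbps):
        compute = num_nodes * runtime_hours * NODE_PRICE_PER_HOUR
        network = reserved_mbps * runtime_hours * BANDWIDTH_PRICE
        return compute + network

    # Comparing two hypothetical allocation strategies for the same application:
    # few nodes with low bandwidth (slow run) vs. many nodes with high bandwidth (fast run).
    print("conservative:", infrastructure_cost(num_nodes=4, runtime_hours=10, reserved_mbps=100))
    print("aggressive:  ", infrastructure_cost(num_nodes=16, runtime_hours=3, reserved_mbps=500))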