3,273 research outputs found

    A Hybrid Grey Wolf Optimization and Constriction Factor based PSO Algorithm for Workflow Scheduling in Cloud

    Owing to its flexibility, scalability, and cost-effectiveness, cloud computing has emerged as a popular platform for hosting a wide range of applications. However, optimizing workflow scheduling in the cloud remains a challenging problem because of the dynamic nature of cloud resources and the diversity of user requirements. In this context, Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) algorithms have been proposed as effective techniques for improving workflow scheduling in cloud environments. The primary objective of this work is to propose a workflow scheduling algorithm that optimizes makespan, service cost, and load balance in the cloud. The proposed hybrid HGWOCPSO algorithm employs GWO and Constriction-factor-based PSO (CPSO) for workflow optimization. The algorithm is simulated on WorkflowSim, where a set of scientific workflows with varying task sizes and inter-task communication requirements are executed on a cloud platform. The simulation results show that the proposed algorithm outperforms existing algorithms in terms of makespan, service cost, and load balance. The GWO component mitigates the local-optima problem inherent in the PSO algorithm.
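
    The abstract does not include the update equations, but the following minimal Python sketch (an assumption for illustration, not the authors' implementation) shows how a constriction-factor PSO velocity update can be guided by GWO's three leading wolves (alpha, beta, delta) instead of a single global best, which is the mechanism that helps such a hybrid escape local optima. The task-to-VM encoding, the parameter values, and the decoding step are all illustrative.

    import random

    # Constriction-factor PSO step guided by GWO leaders (illustrative sketch).
    C1, C2 = 2.05, 2.05
    PHI = C1 + C2
    CHI = 2.0 / abs(2.0 - PHI - (PHI ** 2 - 4.0 * PHI) ** 0.5)  # Clerc's constriction factor, ~0.729

    def cpso_gwo_step(position, velocity, pbest, alpha, beta, delta):
        """One update: the social term pulls toward the mean of the alpha,
        beta, and delta wolves rather than a single global best."""
        new_pos, new_vel = [], []
        for i in range(len(position)):
            leader = (alpha[i] + beta[i] + delta[i]) / 3.0           # GWO guidance
            cognitive = C1 * random.random() * (pbest[i] - position[i])
            social = C2 * random.random() * (leader - position[i])
            v = CHI * (velocity[i] + cognitive + social)             # constriction update
            new_vel.append(v)
            new_pos.append(position[i] + v)
        return new_pos, new_vel

    # Example: a particle encodes (continuous) VM indices for 5 workflow tasks on 3 VMs.
    pos = [random.uniform(0, 2) for _ in range(5)]
    vel = [0.0] * 5
    pos, vel = cpso_gwo_step(pos, vel, pbest=list(pos),
                             alpha=[0, 1, 2, 0, 1], beta=[1, 1, 2, 0, 2], delta=[0, 2, 1, 1, 1])
    print([int(round(x)) % 3 for x in pos])  # decode positions to VM assignments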

    Performance evaluation of task scheduling using hybrid meta-heuristic in heterogeneous cloud environment

    Cloud computing is a ubiquitous platform that offers clients a wide range of online services, including but not limited to information and software, over the Internet. An essential role of cloud computing is to enable on-demand sharing of resources over the network, including servers, applications, storage, services, and databases, among end-users who are remotely connected to the network. Task scheduling is one of the significant functions in the cloud computing environment and plays a vital role in sustaining system performance and improving its efficiency. Task scheduling is considered an NP-complete problem in many contexts, and the heterogeneity of resources in the cloud environment further complicates the job scheduling process. Moreover, heuristic algorithms deliver satisfactory performance but are unable to achieve the desired level of efficiency for optimizing scheduling in a cloud environment. Thus, this paper aims at evaluating the effectiveness of a hybrid meta-heuristic that incorporates a genetic algorithm along with the differential evolution algorithm (GA-DE) in terms of makespan. In addition, the paper intends to enhance the performance of task scheduling in the heterogeneous cloud environment using scientific workflows (CyberShake, Montage, and Epigenomics). The proposed algorithm (GA-DE) has been compared against three heuristic algorithms, namely HEFT-Upward Rank, HEFT-Downward Rank, and HEFT-Level Rank. The simulation results show that the proposed algorithm (GA-DE) outperforms the other existing algorithms in all cases in terms of makespan.
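
    As a concrete illustration of how a GA-DE hybrid can act on a schedule, the following Python sketch evolves task-to-VM assignments with DE/rand/1 mutation and binomial crossover, keeping an offspring only when it improves the makespan. The runtime matrix, the independent-task makespan model, and the parameter values are assumptions for illustration, not the paper's experimental setup.

    import random

    # Minimal GA-DE sketch for task scheduling (assumed encoding: schedule[i] = VM of task i).
    RUNTIME = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17], [12, 13, 10]]  # task x VM runtimes
    NUM_VMS = 3
    F, CR = 0.5, 0.9              # DE mutation factor and crossover rate (illustrative)

    def makespan(schedule):
        """Finish time of the busiest VM under a simple independent-task model."""
        load = [0.0] * NUM_VMS
        for task, vm in enumerate(schedule):
            load[vm] += RUNTIME[task][vm]
        return max(load)

    def de_offspring(parent, a, b, c):
        """DE/rand/1 mutation + binomial crossover, rounded back to valid VM indices."""
        child = []
        for i in range(len(parent)):
            if random.random() < CR:
                trial = a[i] + F * (b[i] - c[i])           # DE mutation on the encoding
                child.append(int(round(trial)) % NUM_VMS)
            else:
                child.append(parent[i])                    # inherit the parent's gene
        return child

    # One generation over a small random population; greedy survivor selection on makespan.
    pop = [[random.randrange(NUM_VMS) for _ in RUNTIME] for _ in range(6)]
    for idx, parent in enumerate(pop):
        a, b, c = random.sample([p for p in pop if p is not parent], 3)
        child = de_offspring(parent, a, b, c)
        if makespan(child) < makespan(parent):
            pop[idx] = child
    print(min(makespan(p) for p in pop))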

    A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing

    Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters of the cloud computing environment, resulting in serious data transmission delays. Edge computing reduces these delays and supports fixed placement of a scientific workflow's private datasets, but its storage capacity is a bottleneck. It is therefore a challenge to combine the advantages of edge computing and cloud computing so as to rationalize the data placement of a scientific workflow and optimize the data transmission time across different datacenters. Traditional data placement strategies maintain load balancing across a given number of datacenters, which results in a large data transmission time. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow. This approach considered the characteristics of data placement that combines edge computing and cloud computing. In addition, it considered the factors affecting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm were adopted to avoid the premature convergence of the traditional particle swarm optimization algorithm, which enhanced the diversity of population evolution and effectively reduced the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution when combining edge computing and cloud computing.
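
    The following Python sketch illustrates the general idea of replacing the continuous PSO velocity update with GA operators for a discrete data-placement encoding (dataset index mapped to datacenter index). The two-point crossover, single-gene mutation, operator probabilities, and datacenter counts are illustrative assumptions; GA-DPSO's exact operators, fitness model, and capacity constraints are those described in the paper itself.

    import random

    # GA-operator-based discrete PSO update for data placement (illustrative sketch).
    W, P1, P2 = 0.4, 0.3, 0.3        # mutation / pbest-crossover / gbest-crossover rates (assumed)
    NUM_DCS = 4                      # e.g. 3 edge datacenters plus 1 cloud datacenter (assumed)

    def two_point_crossover(base, guide):
        """Copy a random slice of the guide placement into the base placement."""
        i, j = sorted(random.sample(range(len(base)), 2))
        return base[:i] + guide[i:j] + base[j:]

    def mutate(placement):
        """Reassign one randomly chosen dataset to a random datacenter."""
        out = list(placement)
        out[random.randrange(len(out))] = random.randrange(NUM_DCS)
        return out

    def dpso_update(particle, pbest, gbest):
        """GA operators take the place of the continuous velocity update of classic PSO."""
        if random.random() < W:
            particle = mutate(particle)                       # exploration
        if random.random() < P1:
            particle = two_point_crossover(particle, pbest)   # pull toward the personal best
        if random.random() < P2:
            particle = two_point_crossover(particle, gbest)   # pull toward the global best
        return particle

    # Example: place 6 datasets across 4 datacenters.
    particle = [random.randrange(NUM_DCS) for _ in range(6)]
    print(dpso_update(particle, pbest=[0, 1, 1, 2, 3, 0], gbest=[0, 0, 1, 2, 2, 3]))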

    Autonomic Cloud Computing: Open Challenges and Architectural Elements

    As Clouds are complex, large-scale, and heterogeneous distributed systems, management of their resources is a challenging task. They need automated and integrated intelligent strategies for provisioning of resources to offer services that are secure, reliable, and cost-efficient. Hence, effective management of services becomes fundamental in the software platforms that constitute the fabric of computing Clouds. In this direction, this paper identifies open issues in autonomic resource provisioning and presents innovative management techniques for supporting SaaS applications hosted on Clouds. We present a conceptual architecture and early results evidencing the benefits of autonomic management of Clouds. (Comment: 8 pages, 6 figures, conference keynote paper)
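
    The abstract stays at the architectural level, but autonomic resource management is commonly organized around a monitor-analyze-plan-execute (MAPE) control loop. The following Python sketch of such a loop is a generic illustration of autonomic provisioning, not the architecture proposed in the paper; the thresholds, metric source, and scaling actions are assumptions.

    import random
    import time

    # Generic MAPE-style autonomic provisioning loop (illustrative, not the paper's architecture).
    SCALE_UP_CPU, SCALE_DOWN_CPU = 0.80, 0.30    # assumed utilization thresholds

    def monitor():
        """Stand-in for a real telemetry source (e.g. average VM CPU utilization)."""
        return {"cpu": random.uniform(0.0, 1.0), "vms": 4}

    def analyze(metrics):
        if metrics["cpu"] > SCALE_UP_CPU:
            return "overloaded"
        if metrics["cpu"] < SCALE_DOWN_CPU and metrics["vms"] > 1:
            return "underloaded"
        return "steady"

    def plan(state):
        return {"overloaded": +1, "underloaded": -1}.get(state, 0)   # VMs to add or remove

    def execute(delta):
        if delta:
            print(f"provisioning change: {delta:+d} VM(s)")          # a real system would call the cloud API here

    for _ in range(3):               # run a few iterations of the control loop
        execute(plan(analyze(monitor())))
        time.sleep(0.1)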

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    As a disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new level of convenience by delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of applying Evolutionary Computation (EC) algorithms to it is emerging rapidly. Through an analysis of the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.