78 research outputs found

    Hybrid heuristic algorithm for better energy optimization and resource utilization in cloud computing

    Energy-efficient execution of scientific workflows is a challenging task in cloud computing, which demands high-performance computing to process growing datasets. Because of the interdependency of tasks in scientific workflow applications, energy-efficient resource allocation is vital for large-scale applications running on heterogeneous physical machines. This paper therefore proposes a Hybrid Heuristic algorithm based Energy-efficient cloud Computing service (HH-ECO) that offers a significant solution for resource allocation, task scheduling, and optimization of scientific workflows. To ensure energy-efficient execution, HH-ECO focuses on executing non-dominant workflow tasks through an adaptive mutation and energy-aware migration strategy. HH-ECO adopts the Chaotic Particle Swarm Optimization (C-PSO) principle to optimize resource allocation, task scheduling, and resource migration by generating global best plans without premature local convergence. C-PSO with adaptive mutation avoids deterioration of the global optimum while finding the best host on which to place a virtual machine and ensures an appropriate resource allocation plan. By considering workflow task precedence relationships during C-PSO-based task scheduling, the hybrid heuristic method efficiently solves the multi-objective combinatorial optimization problem without dominance among the workflow tasks. A CloudSim-based simulation study delivers superior results compared to existing methods such as the Hybrid Heuristic Workflow Scheduling algorithm (HHWS) and Distributed Dynamic VM Management (DDVM). The proposed approach improves the optimal makespan by 38.27% and energy conservation by 38.06% compared to the existing methods.
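The core of such an approach, a PSO whose inertia follows a chaotic sequence and whose particles are occasionally mutated to escape local optima, can be illustrated with a small sketch. The following Python toy shows only the general shape of this idea for VM-to-host placement; the host power model, swarm parameters, and mutation schedule are all assumptions, not the HH-ECO implementation.

```python
# Minimal sketch of a chaotic PSO with adaptive mutation for VM-to-host placement.
# The linear host power model and all parameter values are assumed (illustrative only).
import random

NUM_VMS, NUM_HOSTS, SWARM, ITERS = 8, 4, 20, 100
VM_LOAD = [random.uniform(0.1, 0.4) for _ in range(NUM_VMS)]   # CPU demand per VM (assumed)
HOST_IDLE, HOST_MAX = 70.0, 250.0                               # watts, idle vs. full load (assumed)

def energy(assignment):
    """Estimate power: idle cost per active host plus load-proportional cost."""
    util = [0.0] * NUM_HOSTS
    for vm, host in enumerate(assignment):
        util[host] += VM_LOAD[vm]
    return sum(HOST_IDLE + (HOST_MAX - HOST_IDLE) * min(u, 1.0) for u in util if u > 0)

def decode(position):
    """Map continuous particle coordinates to discrete host indices."""
    return [min(NUM_HOSTS - 1, max(0, int(x))) for x in position]

chaos = 0.7                                   # logistic-map state used as chaotic inertia
particles = [[random.uniform(0, NUM_HOSTS) for _ in range(NUM_VMS)] for _ in range(SWARM)]
velocity = [[0.0] * NUM_VMS for _ in range(SWARM)]
pbest = [p[:] for p in particles]
gbest = min(pbest, key=lambda p: energy(decode(p)))[:]

for it in range(ITERS):
    chaos = 4.0 * chaos * (1.0 - chaos)       # chaotic sequence replaces a fixed inertia weight
    for i, p in enumerate(particles):
        for d in range(NUM_VMS):
            velocity[i][d] = (chaos * velocity[i][d]
                              + 2.0 * random.random() * (pbest[i][d] - p[d])
                              + 2.0 * random.random() * (gbest[d] - p[d]))
            p[d] += velocity[i][d]
        # adaptive mutation: perturb more often early on to avoid premature convergence
        if random.random() < 0.3 * (1 - it / ITERS):
            p[random.randrange(NUM_VMS)] = random.uniform(0, NUM_HOSTS)
        if energy(decode(p)) < energy(decode(pbest[i])):
            pbest[i] = p[:]
        if energy(decode(p)) < energy(decode(gbest)):
            gbest = p[:]

print("best placement:", decode(gbest), "estimated power:", round(energy(decode(gbest)), 1))
```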

    Employing the Powered Hybridized Darts Game with BWO Optimization for Effective Job Scheduling and Distributing Load in the Cloud-Based Environment

    Job scheduling is one of the most frequent issues in cloud computing systems; it aims to reduce installation time and cost while concurrently enhancing resource utilisation. Requirements such as affordable implementation cost, high resource utilisation, short makespan, and fast scheduling response make job scheduling a Nondeterministic Polynomial (NP)-hard optimisation problem: as the number of job-to-machine combinations and the required processing power grow, job allocation becomes intractable. This study employs a hybrid heuristic optimisation technique that incorporates load balancing to achieve optimal job scheduling and boost service provider performance within the cloud architecture, which substantially reduces problems in the scheduling process. The suggested job scheduling approach resolves the load balancing issue. The suggested Hybridised Darts Game-Based Beluga Whale Optimisation Algorithm (HDG-BWOA) assigns jobs to machines according to their workload. When assigning jobs to virtual machines, factors such as reduced energy usage, minimised mean response time, an enhanced job assurance ratio, and higher Cloud Data Centre (CDC) resource utilisation are taken into account. By ensuring flexibility among virtual machines, this job scheduling strategy keeps them from becoming overloaded or underloaded, and more activities are completed before their deadlines. The effectiveness of the proposed configuration is verified against traditional heuristic-based job scheduling techniques on multiple assessment metrics.
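Because the HDG-BWOA itself is only summarised here, the sketch below uses a generic population-based search with a makespan-plus-imbalance fitness to illustrate load-aware job-to-VM assignment; job lengths, VM speeds, and the exploitation/exploration split are assumed values and only loosely mirror a darts-game/beluga-whale hybrid.

```python
# Illustrative stand-in for a hybrid metaheuristic job scheduler with load balancing.
# Job lengths, VM speeds, population sizes, and weights are all assumed toy values.
import random
import statistics

JOBS = [random.randint(5, 50) for _ in range(30)]   # job lengths (MI, assumed)
VM_SPEED = [10, 20, 25, 40]                          # VM capacities (MIPS, assumed)

def fitness(assign):
    """Lower is better: makespan plus a penalty for uneven VM finish times."""
    finish = [0.0] * len(VM_SPEED)
    for job, vm in enumerate(assign):
        finish[vm] += JOBS[job] / VM_SPEED[vm]
    return max(finish) + statistics.pstdev(finish)

def mutate(assign):
    """Reassign one random job to a random VM."""
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(len(VM_SPEED))
    return child

pop = [[random.randrange(len(VM_SPEED)) for _ in JOBS] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    # exploitation around the current leaders plus random exploration,
    # loosely mirroring the two phases of a hybrid metaheuristic
    pop = (pop[:10]
           + [mutate(random.choice(pop[:10])) for _ in range(20)]
           + [[random.randrange(len(VM_SPEED)) for _ in JOBS] for _ in range(10)])

best = min(pop, key=fitness)
print("best fitness (makespan + imbalance):", round(fitness(best), 2))
```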

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacity in end devices, yet it suffers from problems such as load imbalance, long scheduling times, and the limited power of its edge nodes. Intelligent task scheduling in CDCs and edge nodes is therefore critical to constructing energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy- and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. It consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11).

1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs, where the bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm that considers temporal variations of revenue, electricity prices, green energy, and the prices of public clouds.

2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss possibility of tasks by determining task allocation among Internet service providers and the task service rates of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, from which a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method to cope with energy cost reduction and QoS improvement; it jointly minimizes the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of revenue and energy cost of CDCs, solved with an improved multi-objective evolutionary algorithm based on decomposition; it determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution, together with a fine-grained spatial task scheduling algorithm that minimizes the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers.
Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize system profit while guaranteeing that the response time limits of tasks are met in cloud-edge computing systems. The resulting single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization. The dissertation evaluates these algorithms, models, and software with real-life data and demonstrates that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
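One concrete step from Chapter 7, selecting a knee solution from an approximate Pareto set, can be sketched with a common maximum-distance-to-the-extreme-line rule; the dissertation's exact selection criterion may differ, and the sample (profit, loss-possibility) front below is invented for illustration.

```python
# A minimal sketch of one common knee-point rule: pick the Pareto point farthest from the
# line joining the two extreme solutions. Objectives: maximize profit, minimize loss possibility.
# The sample front is assumed; it is not data from the dissertation.
def knee_solution(front):
    """front: list of (profit, loss) pairs; returns the knee point."""
    f1s, f2s = [p[0] for p in front], [p[1] for p in front]
    # normalize both objectives to [0, 1] so they are comparable
    norm = [((p[0] - min(f1s)) / (max(f1s) - min(f1s) or 1),
             (p[1] - min(f2s)) / (max(f2s) - min(f2s) or 1)) for p in front]
    a = max(norm, key=lambda p: p[0])   # extreme with highest profit
    b = min(norm, key=lambda p: p[0])   # extreme with lowest profit

    def dist(p):
        # perpendicular distance from point p to the line through a and b
        (x1, y1), (x2, y2), (x0, y0) = a, b, p
        return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / \
               (((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5 or 1)

    return front[max(range(len(front)), key=lambda i: dist(norm[i]))]

pareto = [(100, 0.05), (140, 0.09), (160, 0.14), (170, 0.22), (175, 0.30)]  # (profit, loss)
print("knee solution (profit, loss possibility):", knee_solution(pareto))
```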

    Providing robustness to workflow schedulers sensitive to uncertainties in the available bandwidth

    Advisors: Edmundo Roberto Mauro Madeira and Luiz Fernando Bittencourt. Doctoral thesis (doutorado), Universidade Estadual de Campinas, Instituto de Computação.
Abstract: To derive efficient schedules for the tasks of scientific applications modelled as workflows, schedulers need information on the application demands as well as on resource availability, especially the available bandwidth. However, the bandwidth estimates provided by monitoring and measurement tools are imprecise, and this imprecision must be taken into account by the scheduler to achieve near-optimal schedules. Uncertainty in the available bandwidth can result from imprecise measurement and monitoring tools and/or from their inability to estimate in advance the bandwidth the application will actually experience during execution. Schedulers specially designed for hybrid clouds simply ignore the inaccuracies of the given estimates and end up producing non-robust, low-performance schedules, which makes them sensitive to the uncertainties stemming from the use of these networking tools. This thesis introduces a proactive procedure that provides a certain level of robustness to schedules derived from schedulers not designed to be robust in the face of uncertain bandwidth estimates. To turn schedules that are sensitive to these uncertainties into robust ones, the procedure applies a deflation to the imprecise bandwidth estimates before they are used as input to a non-robust scheduler. By using refined (deflated) estimates of the available bandwidth, schedulers that were initially sensitive to these uncertainties begin to produce schedules with a certain level of robustness to these inaccuracies. The effectiveness and efficiency of the procedure are evaluated through simulation, comparing the schedules generated by schedulers that use the proposed procedure with those produced by the same schedulers without it. The simulation results show that the procedure is able to provide robustness to bandwidth uncertainty for schedules derived from non-robust schedulers. In addition, this thesis introduces a flexible scheduler for scientific applications composed of sets of workflows grouped into ensembles; its novelty is that it allows different categories of objective functions to be used. Although this flexibility is new to the state of the art, the scheduler is still sensitive to bandwidth imprecision; the proposed procedure, however, was able to provide it with robustness to such uncertainties. This thesis shows that the proposed procedure increases the effectiveness and efficiency of non-robust workflow schedulers designed for hybrid clouds, since they begin to produce schedules with a certain level of robustness in the presence of uncertain estimates of the available bandwidth. The proposed procedure is therefore an important tool for improving schedulers that are sensitive to uncertain bandwidth estimates and that operate in computational environments where these values are imprecise by nature, bringing improvements to the execution of scientific applications in hybrid clouds. Doctorate in Computer Science. Grant 2012/02778-6, FAPESP.
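The core of the proactive procedure, deflating optimistic bandwidth estimates before they reach a non-robust scheduler, can be sketched in a few lines of Python; the deflation factor and link values below are illustrative assumptions rather than values from the thesis.

```python
# Minimal sketch of the deflation idea: shrink measured available-bandwidth estimates so a
# non-robust scheduler plans with conservative transfer times. Factor and link values are assumed.
def deflate(bandwidth_estimates_mbps, factor=0.8):
    """Return pessimistic (deflated) copies of measured available-bandwidth estimates."""
    return {link: bw * factor for link, bw in bandwidth_estimates_mbps.items()}

def transfer_time(data_mb, bw_mbps):
    """Seconds to move data_mb megabytes over a link of bw_mbps megabits per second."""
    return data_mb * 8 / bw_mbps

measured = {("private", "public"): 100.0}          # Mb/s reported by a measurement tool (assumed)
robust_input = deflate(measured, factor=0.8)       # what the non-robust scheduler actually sees

data_mb = 500
print("optimistic transfer estimate:", transfer_time(data_mb, measured[("private", "public")]), "s")
print("deflated transfer estimate:  ", transfer_time(data_mb, robust_input[("private", "public")]), "s")
```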

    Performance optimization of big data computing workflows for batch and stream data processing in multi-clouds

    Workflow techniques have been widely used as a major computing solution in many science domains. With the rapid deployment of cloud infrastructures around the globe and the economic benefits of cloud-based computing and storage services, an increasing number of scientific workflows have migrated or are in active transition to clouds. As the scale of scientific applications continues to grow, it is now common to deploy various data- and network-intensive computing workflows, such as serial computing workflows, MapReduce/Spark-based workflows, and Storm-based stream data processing workflows, in multi-cloud environments, where inter-cloud data transfer often plays a significant role in both workflow performance and financial cost. Rigorous mathematical models are constructed to analyze the intra- and inter-cloud execution process of scientific workflows, and a class of budget-constrained workflow mapping problems is formulated to optimize the network performance of big data workflows in multi-cloud environments. The research shows that these problems are all NP-complete, and a heuristic solution is designed for each that takes into consideration module execution, data transfer, and I/O operations. The performance superiority of the proposed solutions over existing methods is illustrated through extensive simulations and further verified by real-life workflow experiments deployed in public clouds.
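As a rough illustration of budget-constrained workflow mapping, the toy greedy mapper below assigns each module the fastest VM type the remaining budget allows and charges inter-cloud transfer time between consecutive modules (assumed to land on different clouds); the VM catalogue, prices, and module sizes are invented, and the heuristics in the work above are considerably more sophisticated.

```python
# Toy greedy mapper in the spirit of budget-constrained workflow mapping across clouds.
# All module workloads, VM types, prices, and bandwidth values are assumed (illustrative only).
VM_TYPES = [            # (name, relative speed, cost per hour) -- hypothetical catalogue
    ("small", 1.0, 0.05),
    ("medium", 2.0, 0.12),
    ("large", 4.0, 0.30),
]
MODULES = [("ingest", 4.0, 2.0), ("transform", 8.0, 1.0), ("train", 16.0, 0.5)]  # (name, work, output GB)
INTER_CLOUD_GBPS = 0.5  # inter-cloud bandwidth in GB/s (assumed)
BUDGET = 3.0            # dollars (assumed)

def greedy_map(modules, budget):
    plan, makespan = [], 0.0
    for i, (name, work, out_gb) in enumerate(modules):
        choice = None
        for vm_name, speed, cost_h in sorted(VM_TYPES, key=lambda v: -v[1]):  # fastest first
            hours = work / speed
            if hours * cost_h <= budget:
                choice = (vm_name, hours, hours * cost_h)
                break
        if choice is None:                       # nothing affordable: fall back to cheapest type
            vm_name, speed, cost_h = min(VM_TYPES, key=lambda v: v[2])
            hours = work / speed
            choice = (vm_name, hours, hours * cost_h)
        budget -= choice[2]
        makespan += choice[1]
        if i < len(modules) - 1:                 # charge inter-cloud transfer of this module's output
            makespan += out_gb / INTER_CLOUD_GBPS / 3600
        plan.append((name, choice[0]))
    return plan, makespan, budget

plan, makespan, left = greedy_map(MODULES, BUDGET)
print(plan, "makespan(h):", round(makespan, 2), "budget left:", round(left, 2))
```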

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances and presenting a large range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms, as well as quantum encryption, are considered in order to explain two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing concerning architecture and development, and includes a diverse range of applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The resource constraints of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can increase MNO performance because of their flexibility in deployment, the high mobility of UAVs, and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, in turn, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models utilized for computation offloading in such networks. In addition, this article examines several intelligent pricing techniques under different structures in the UAV-MEC network. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and in applying intelligent pricing strategies in the UAV-MEC network.
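The basic trade-off the survey covers, executing locally on the MD versus uploading to a UAV-mounted MEC server at a price set by the MNO, can be sketched as a simple latency-and-profit test; all rates, prices, and task parameters below are illustrative assumptions, not values from the surveyed works.

```python
# Simplified sketch of the offloading decision: offload only if the offloaded latency beats local
# execution, the deadline is met, and the MNO's price still leaves it a profit. Values are assumed.
def local_latency(cycles, md_cpu_hz):
    return cycles / md_cpu_hz

def offload_latency(data_bits, uplink_bps, cycles, mec_cpu_hz):
    return data_bits / uplink_bps + cycles / mec_cpu_hz

def decide(task, price_per_task, mec_cost_per_task):
    t_local = local_latency(task["cycles"], task["md_cpu_hz"])
    t_off = offload_latency(task["bits"], task["uplink_bps"], task["cycles"], task["mec_cpu_hz"])
    md_gains = t_off < t_local and t_off <= task["deadline_s"]   # MD benefits from offloading
    mno_profit = price_per_task - mec_cost_per_task              # MNO admits only profitable requests
    return ("offload" if md_gains and mno_profit > 0 else "local"), t_local, t_off

task = {"cycles": 2e9, "bits": 4e6, "md_cpu_hz": 1e9,
        "mec_cpu_hz": 8e9, "uplink_bps": 20e6, "deadline_s": 1.0}
decision, t_local, t_off = decide(task, price_per_task=0.02, mec_cost_per_task=0.005)
print(decision, "| local:", round(t_local, 3), "s  offloaded:", round(t_off, 3), "s")
```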

    Resource allocation in mobile volunteer computing system: Taxonomy, challenges and future work

    The rise of mobile devices and the Internet of Things has generated vast amounts of data that require efficient processing methods. Volunteer Computing (VC) is a distributed paradigm that utilises idle resources from diverse devices to complete tasks, offering a cost-effective and scalable source of computation. Mobile Volunteer Computing (MVC) capitalises on the abundance of mobile devices as participants. However, managing a large number of participants in the network makes resource scheduling challenging. Various resource allocation algorithms and MVC platforms have been developed, but survey papers summarising these systems and algorithms are lacking. This paper aims to bridge that gap by delivering a comprehensive survey of MVC, covering related technologies, MVC architecture, and the major findings organised into a taxonomy of resource allocation in MVC.