3 research outputs found

    Optimized Deep Learning Schemes for Secured Resource Allocation and Task Scheduling in Cloud Computing - A Survey

    Scheduling involves allocating shared resources over time so that tasks can be completed within a predetermined time frame. In Task Scheduling (TS) and Resource Allocation (RA), the term is applied to tasks and resources, respectively. Scheduling is widely used in Cloud Computing (CC), computer science, and operations management. Effective scheduling ensures that systems operate efficiently, decisions are made effectively, resources are used well, costs are kept to a minimum, and productivity is increased. High energy consumption, low CPU utilization, long execution times, and low robustness are the most frequent problems in TS and RA in CC. This survey discusses RA and TS approaches based on deep learning (DL) and machine learning (ML) and examines the methods employed by DL-based RA and TS in CC. The merits and drawbacks of each approach are also explored. The work's primary contribution is an analysis and assessment of DL-based RA and TS methodologies that pinpoints open problems in cloud computing.

    Hybrid load balance based on genetic algorithm in cloud environment

    Load balancing is an efficient mechanism for distributing loads over cloud resources in a way that maximizes resource utilization and minimizes response time. Metaheuristic techniques are powerful tools for solving load balancing problems; however, their efficiency degrades on large-scale problems. This paper makes three main contributions to solving the load balancing problem. First, it proposes a heterogeneous initialized load balancing (HILB) algorithm that performs effective task scheduling, improving the makespan for both homogeneous and heterogeneous resources and providing a direction toward optimal load deviation. Second, it proposes a hybrid load balance based on genetic algorithm (HLBGA), a combination of HILB and a genetic algorithm (GA). Third, a newly formulated fitness function that minimizes load deviation is used for the GA. The proposed algorithm is simulated for both homogeneous and heterogeneous cloud resources. The simulation results show that the proposed hybrid algorithm outperforms competing algorithms in terms of makespan, resource utilization, and load deviation.
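
    The sketch below is a minimal, illustrative version of the genetic-algorithm component described in the abstract, not the authors' HLBGA implementation: task-to-VM assignments are evolved under a fitness function that minimizes load deviation, and one heuristic individual is seeded into the initial population to stand in for an HILB-style initialization. The task lengths, VM speeds, helper names (greedy_seed, vm_loads), and GA parameters are all illustrative assumptions.

    import random
    import statistics

    # Illustrative problem data: task lengths (MI) and VM speeds (MIPS); heterogeneous resources.
    TASKS = [400, 250, 600, 300, 150, 500, 700, 200, 350, 450]
    VM_SPEEDS = [500, 1000, 750]

    def vm_loads(assignment):
        """Per-VM completion time for a task-to-VM assignment (assignment[i] = VM index of task i)."""
        loads = [0.0] * len(VM_SPEEDS)
        for task, vm in zip(TASKS, assignment):
            loads[vm] += task / VM_SPEEDS[vm]
        return loads

    def fitness(assignment):
        """Lower is better: load deviation, taken here as the standard deviation of VM loads."""
        return statistics.pstdev(vm_loads(assignment))

    def greedy_seed():
        """Heuristic seed (stand-in for HILB-style initialization): each task goes to the VM that finishes it earliest."""
        loads = [0.0] * len(VM_SPEEDS)
        assignment = []
        for task in TASKS:
            vm = min(range(len(VM_SPEEDS)), key=lambda v: loads[v] + task / VM_SPEEDS[v])
            loads[vm] += task / VM_SPEEDS[vm]
            assignment.append(vm)
        return assignment

    def genetic_algorithm(pop_size=30, generations=200, mutation_rate=0.1):
        # Initial population: one heuristic individual plus random assignments.
        population = [greedy_seed()] + [
            [random.randrange(len(VM_SPEEDS)) for _ in TASKS]
            for _ in range(pop_size - 1)
        ]
        for _ in range(generations):
            population.sort(key=fitness)
            survivors = population[: pop_size // 2]          # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)
                cut = random.randrange(1, len(TASKS))
                child = p1[:cut] + p2[cut:]                  # one-point crossover
                if random.random() < mutation_rate:          # mutation: reassign one task
                    child[random.randrange(len(TASKS))] = random.randrange(len(VM_SPEEDS))
                children.append(child)
            population = survivors + children
        best = min(population, key=fitness)
        return best, fitness(best), max(vm_loads(best))      # assignment, load deviation, makespan

    if __name__ == "__main__":
        assignment, deviation, makespan = genetic_algorithm()
        print("assignment:", assignment)
        print("load deviation:", round(deviation, 4))
        print("makespan:", round(makespan, 4))

    Makespan is reported alongside the evolved assignment because the paper evaluates both metrics; only load deviation drives the fitness function in this sketch.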

    A Systematic Literature Review on Task Allocation and Performance Management Techniques in Cloud Data Center

    As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of task allocation and performance management techniques employed in cloud data centers. The aim is to systematically categorize and organize previous research by identifying cloud computing methodologies, categories, and gaps. A literature review was conducted covering 463 task allocation papers and 480 performance management papers. The review revealed three task allocation research areas and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing task allocation and performance management, and the shortcomings identified in each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability, and innovative methodologies can steer future research to fill gaps in the literature.