
    Learning-Based Resource Allocation in Cloud Data Center Using Advantage Actor-Critic

    This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record.
    Due to ever-changing system states and varying user demands, resource allocation in cloud data centers faces great challenges in dynamics and complexity. Although existing solutions address this problem, they cannot respond effectively to dynamic changes in system states and user demands because they depend on prior knowledge of the system. Automatic, adaptive resource allocation that satisfies diverse system requirements in cloud data centers therefore remains an open challenge. To cope with it, we propose an advantage actor-critic based reinforcement learning (RL) framework for resource allocation in cloud data centers. First, the actor parameterizes the policy (allocating resources) and chooses continuous actions (scheduling jobs) based on the scores (evaluating actions) from the critic. Next, the policy is updated by gradient ascent, and the advantage function significantly reduces the variance of the policy gradient. Simulations using Google cluster-usage traces show the effectiveness of the proposed method for cloud resource allocation. Moreover, the method outperforms classic resource allocation algorithms in terms of job latency and converges faster than the traditional policy gradient method.
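The update described in this abstract (gradient ascent on the policy, with the advantage function reducing gradient variance) can be sketched in tabular form. This is an illustrative toy, not the authors' implementation: the state/action sizes, learning rates, and the use of the TD error as the advantage estimate are all assumptions.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy tabular advantage actor-critic (illustrative only): states could be
# queue-length buckets, actions "which server receives the job".
N_STATES, N_ACTIONS = 4, 2
theta = np.zeros((N_STATES, N_ACTIONS))  # actor: policy logits
V = np.zeros(N_STATES)                   # critic: state-value estimates
alpha_pi, alpha_v, gamma = 0.1, 0.2, 0.99

def update(s, a, r, s_next):
    """One actor-critic step: the critic's TD error serves as the
    advantage estimate, which scales the actor's gradient-ascent step."""
    advantage = r + gamma * V[s_next] - V[s]      # A(s,a) ~ TD error
    V[s] += alpha_v * advantage                   # critic update
    pi = softmax(theta[s])
    grad_log = -pi
    grad_log[a] += 1.0                            # grad of log pi(a|s) for softmax
    theta[s] += alpha_pi * advantage * grad_log   # actor: gradient ascent
    return advantage
```

Because the advantage is centered by the critic's baseline V(s), its magnitude shrinks as the critic improves, which is the variance-reduction effect the abstract refers to.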

    Adaptive Resource Allocation in Cloud Data Centers using Actor-Critic Deep Reinforcement Learning for Optimized Load Balancing

    This paper proposes a deep reinforcement learning-based actor-critic method for efficient resource allocation in cloud computing. The proposed method uses an actor network to generate the allocation strategy and a critic network to evaluate its quality; both networks are trained with a deep reinforcement learning algorithm to optimize the allocation strategy. The method is evaluated in a simulation-based experimental study, and the results show that it outperforms several existing allocation methods in terms of resource utilization, energy efficiency, and overall cost. Previous works have developed algorithms for managing workloads or virtual machines in an effort to reduce energy consumption; however, these solutions often fail to account for the highly dynamic nature of server states and are not implemented at a sufficiently large scale. To guarantee the QoS of workloads while lowering the computational energy consumption of physical servers, this study proposes the Actor-Critic based Compute-Intensive Workload Allocation Scheme (AC-CIWAS). AC-CIWAS captures the dynamics of server states continuously and considers the influence of different workloads on energy consumption to achieve logical task allocation. To determine how best to allocate workloads for energy efficiency, AC-CIWAS uses a Deep Reinforcement Learning (DRL)-based Actor-Critic (AC) algorithm to estimate the projected cumulative return over time. Simulations show that the proposed AC-CIWAS reduces energy consumption by around 20% compared to existing baseline allocation methods while assuring the QoS of scheduled jobs. The paper also discusses how the proposed technique could be applied in cloud computing and offers suggestions for future study.
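The "projected cumulative return over time" that AC-CIWAS's critic estimates is, in standard RL terms, the discounted sum of future rewards. A minimal sketch of that quantity (illustrative; the function name and discount factor are assumptions, not the paper's code):

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted cumulative return G = sum_k gamma^k * r_k,
    computed backwards for numerical simplicity."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, with rewards [1, 1, 1] and gamma = 0.5 the return is 1 + 0.5*(1 + 0.5*1) = 1.75; the critic learns to predict this value from the current server state.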

    A Deep Reinforcement Learning-Based Model for Optimal Resource Allocation and Task Scheduling in Cloud Computing

    The advent of cloud computing has dramatically altered how information is stored and retrieved. However, the effectiveness and speed of cloud-based applications can be significantly impacted by inefficient resource distribution and task scheduling. These issues have been challenging, but machine learning and deep learning methods have shown great potential in recent years. This paper proposes a new technique, Deep Q-Networks and Actor-Critic (DQNAC) models, that enhances cloud computing efficiency by optimizing resource allocation and task scheduling. We evaluate our approach on a dataset of real-world cloud workload traces and demonstrate that it significantly improves resource utilization and overall performance compared to traditional approaches. Furthermore, our findings indicate that deep reinforcement learning (DRL)-based methods can be potent and effective for optimizing cloud computing, improving the efficiency and flexibility of cloud-based applications.
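The Deep Q-Network half of a DQNAC-style model learns action values via bootstrapped targets. A minimal tabular sketch of the underlying Q-learning rule (illustrative only: discrete state/action buckets and hand-picked hyperparameters stand in for the paper's neural networks):

```python
import numpy as np

# Toy tabular Q-learning (illustrative only): states could be load buckets,
# actions candidate task placements.
N_STATES, N_ACTIONS = 4, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.5, 0.9

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a')."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q[s, a]
```

A DQN replaces the table with a neural network trained on the same target; combining it with an actor-critic component, as DQNAC does, lets the value estimates guide a separate scheduling policy.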