2,481 research outputs found

    A Novel Task of Loading and Computing Resource Scheduling Strategy in Internet of Vehicles Based on Dynamic Greedy Algorithm

    This paper focuses on the scheduling of distributed computing tasks in the Internet of Vehicles. First, based on computing-aware network theory, a distributed computing resource model of the Internet of Vehicles is established, and the seven-dimensional QoS attributes of its computing resources (inter-resource reliability, communication cost, and each resource's computing speed, computing cost, computing energy consumption, computing stability, and computing success rate) are grouped and transformed into two comprehensive attribute priorities: a computing performance priority and a communication performance priority. Second, a weighted directed acyclic graph model of the distributed computing tasks and a seven-dimensional QoS-attribute-weighted undirected topology graph model of the distributed computing resources are established. A task loading and computing resource scheduling algorithm based on a dynamic greedy algorithm is then proposed. Finally, example analysis shows that the overall performance of this algorithm is better than that of the classic HEFT and round-robin scheduling algorithms.
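
    The abstract does not spell out the greedy mapping itself. Below is a minimal Python sketch of the general idea only: per-resource QoS attributes are folded into two priority scores and DAG tasks are assigned greedily in topological order. The attribute names, weights, and load penalty are assumptions for illustration, not the paper's formulation.

    # Illustrative sketch (not the paper's algorithm): fold per-resource QoS
    # attributes into two priority scores and map DAG tasks greedily.
    from collections import deque

    def topological_order(dag):
        """Kahn's algorithm; dag maps each task to its list of successors."""
        indeg = {t: 0 for t in dag}
        for succs in dag.values():
            for s in succs:
                indeg[s] += 1
        queue = deque(t for t, d in indeg.items() if d == 0)
        order = []
        while queue:
            t = queue.popleft()
            order.append(t)
            for s in dag[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    queue.append(s)
        return order

    def priorities(res, w_comp=(0.4, 0.3, 0.3), w_comm=(0.5, 0.5)):
        """Hypothetical weighting of QoS attributes into two scores (higher is better)."""
        comp = w_comp[0] * res["speed"] - w_comp[1] * res["compute_cost"] - w_comp[2] * res["energy"]
        comm = w_comm[0] * res["reliability"] - w_comm[1] * res["comm_cost"]
        return comp, comm

    def greedy_schedule(dag, resources, load_penalty=1.0):
        """Assign each task to the best-scoring resource, penalising already-loaded ones."""
        load = {r: 0 for r in resources}
        schedule = {}
        for task in topological_order(dag):
            best = max(resources, key=lambda r: sum(priorities(resources[r])) - load_penalty * load[r])
            schedule[task] = best
            load[best] += 1
        return schedule

    if __name__ == "__main__":
        dag = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
        resources = {
            "vehicle_A": {"speed": 2.0, "compute_cost": 0.5, "energy": 0.3,
                          "reliability": 0.9, "comm_cost": 0.2},
            "edge_B": {"speed": 3.5, "compute_cost": 0.8, "energy": 0.6,
                       "reliability": 0.95, "comm_cost": 0.4},
        }
        print(greedy_schedule(dag, resources))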

    Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks

    An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet-of-Things (IoT) users, by optimizing offloading decision, transmission power, and resource allocation in the large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which can reduce the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy. The numerical results are provided to demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
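
    As an illustration of the action-search step only, the sketch below runs a plain simulated-annealing search over binary offloading decisions against a toy weighted-latency cost. The cost model, mutation schedule, and parameters are assumptions; this is not the paper's 2r-SAE/ASA/2p-ER pipeline.

    # Sketch of a simulated-annealing action search over binary offloading
    # decisions; the cost surrogate and mutation schedule are illustrative only.
    import math
    import random

    def weighted_latency(actions, local_delay, edge_delay, weights):
        """Toy cost: each user's latency depends on its offloading bit."""
        return sum(w * (edge_delay[i] if a else local_delay[i])
                   for i, (a, w) in enumerate(zip(actions, weights)))

    def anneal_actions(n_users, cost_fn, iters=500, t0=1.0, cooling=0.99):
        actions = [random.randint(0, 1) for _ in range(n_users)]
        best, best_cost = actions[:], cost_fn(actions)
        temp = t0
        for _ in range(iters):
            cand = actions[:]
            # Flip each bit with a probability that shrinks as the temperature cools.
            for i in range(n_users):
                if random.random() < max(0.05, 0.5 * temp):
                    cand[i] ^= 1
            delta = cost_fn(cand) - cost_fn(actions)
            # Accept improvements always, worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
                actions = cand
            if cost_fn(actions) < best_cost:
                best, best_cost = actions[:], cost_fn(actions)
            temp *= cooling
        return best, best_cost

    if __name__ == "__main__":
        n = 8
        local = [random.uniform(2, 5) for _ in range(n)]
        edge = [random.uniform(1, 4) for _ in range(n)]
        cost = lambda a: weighted_latency(a, local, edge, [1.0] * n)
        print(anneal_actions(n, cost))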

    Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System

    The development of smart vehicles brings drivers and passengers a comfortable and safe environment. Various emerging applications are promising to enrich users' traveling experiences and daily life. However, how to execute computing-intensive applications on resource-constrained vehicles still faces huge challenges. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modelled by finite Markov chains. Moreover, the task scheduling and resource allocation strategy is formulated as a joint optimization problem to maximize users' Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems. A two-sided matching scheme and a deep reinforcement learning approach are developed to schedule offloading requests and allocate network resources, respectively. Performance evaluations illustrate the effectiveness and superiority of our constructed system.
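
    The two-sided matching step can be pictured with a capacity-constrained deferred-acceptance routine, sketched below under assumed preference lists and server capacities; the paper's actual matching scheme and QoE model may differ.

    # Capacity-constrained deferred acceptance between vehicles and edge servers;
    # preference lists and capacities are assumed, and servers must rank every vehicle.
    def deferred_acceptance(vehicle_prefs, server_prefs, capacity):
        """vehicle_prefs: vehicle -> ordered servers; server_prefs: server -> ranked vehicles."""
        rank = {s: {v: i for i, v in enumerate(order)} for s, order in server_prefs.items()}
        next_choice = {v: 0 for v in vehicle_prefs}
        matched = {s: [] for s in server_prefs}
        free = list(vehicle_prefs)
        while free:
            v = free.pop()
            if next_choice[v] >= len(vehicle_prefs[v]):
                continue  # vehicle has exhausted its list and stays unmatched
            s = vehicle_prefs[v][next_choice[v]]
            next_choice[v] += 1
            matched[s].append(v)
            if len(matched[s]) > capacity[s]:
                # Over capacity: evict the server's least-preferred current vehicle.
                worst = max(matched[s], key=lambda x: rank[s][x])
                matched[s].remove(worst)
                free.append(worst)
        return matched

    if __name__ == "__main__":
        vehicle_prefs = {"v1": ["e1", "e2"], "v2": ["e1", "e2"], "v3": ["e2", "e1"]}
        server_prefs = {"e1": ["v2", "v1", "v3"], "e2": ["v1", "v3", "v2"]}
        capacity = {"e1": 1, "e2": 2}
        print(deferred_acceptance(vehicle_prefs, server_prefs, capacity))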

    A latency-aware max-min algorithm for resource allocation in cloud

    Cloud computing is an emerging distributed computing paradigm. However, it requires certain initiatives tailored for the cloud environment, such as an on-the-fly mechanism for providing resource availability based on the rapidly changing demands of customers. Although resource allocation is an important and widely studied problem, certain criteria still need to be considered, including meeting users' quality of service (QoS) requirements. High QoS can be guaranteed only if resources are allocated in an optimal manner. This paper proposes a latency-aware max-min algorithm (LAM) for allocating resources in cloud infrastructures. The proposed algorithm is designed to address challenges associated with resource allocation, such as variations in user demands and on-demand access to unlimited resources. It is capable of allocating resources in a cloud-based environment with the target of enhancing infrastructure-level performance and maximizing profit through optimal allocation of resources. A priority value is also associated with each user, calculated by the analytic hierarchy process (AHP). The results validate the superiority of LAM, which performs better than other state-of-the-art algorithms and allocates resources flexibly under fluctuating resource demand patterns.
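
    For reference, the classic max-min heuristic that LAM builds on can be sketched as follows. The latency term and the priority weights (standing in for AHP-derived values) are illustrative assumptions, not the paper's exact formulation.

    # Max-min heuristic with a latency term in the completion-time estimate;
    # the priority weights stand in for AHP-derived values and are assumed.
    def max_min_schedule(tasks, vms, latency, priority):
        """tasks: task -> length; vms: vm -> speed; latency: (task, vm) -> delay."""
        ready = {vm: 0.0 for vm in vms}          # time at which each VM becomes free
        assignment = {}
        unscheduled = set(tasks)
        while unscheduled:
            best_option = {}
            for t in unscheduled:
                # Estimated completion time of t on each VM, given current VM readiness.
                options = {vm: ready[vm] + latency[(t, vm)] + tasks[t] / vms[vm] for vm in vms}
                vm_star = min(options, key=options.get)
                # Priority only reorders task selection, not the recorded finish time.
                best_option[t] = (options[vm_star] * priority[t], vm_star)
            # Max-min rule: pick the task whose best completion time is largest.
            t_star = max(best_option, key=lambda t: best_option[t][0])
            vm_star = best_option[t_star][1]
            ready[vm_star] += latency[(t_star, vm_star)] + tasks[t_star] / vms[vm_star]
            assignment[t_star] = vm_star
            unscheduled.remove(t_star)
        return assignment

    if __name__ == "__main__":
        tasks = {"t1": 8.0, "t2": 3.0, "t3": 5.0}
        vms = {"vm1": 2.0, "vm2": 1.0}
        latency = {(t, vm): 0.5 for t in tasks for vm in vms}
        priority = {"t1": 1.0, "t2": 2.0, "t3": 1.5}   # AHP-style weights (assumed)
        print(max_min_schedule(tasks, vms, latency, priority))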

    MIN-COST WITH DELAY SCHEDULING FOR LARGE SCALE CLOUD-BASED WORKFLOW APPLICATIONS PLATFORM

    Cloud computing is a promising solution for providing resource scalability dynamically. To support large-scale workflow applications, we present Nuts-LSWAP, an implementation of a Cloud workflow platform. A novel min-cost with delay scheduling algorithm is then presented in this paper. We also focus on global scheduling, including a genetic evolution method and other scheduling methods (sequential and greedy), to evaluate and reduce the execution cost. Finally, three primary experiments are divided into two parts: one part demonstrates the effectiveness of the global mapping algorithm, and the other compares the execution of large-scale workflow instances with and without delay scheduling. The results provide preliminary evidence that Nuts-LSWAP is an efficient platform for building a Cloud workflow environment.
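
    A rough reading of "min-cost with delay scheduling" is sketched below: each task prefers the cheapest service but only waits for it up to a delay threshold before falling back to a costlier one. The services, prices, and threshold are hypothetical; the paper's algorithm and cost model may differ.

    # Greedy min-cost mapping with a simple delay rule: prefer the cheapest
    # service, but only wait up to max_delay for it before falling back.
    def min_cost_with_delay(tasks, services, busy_until, max_delay=2.0, now=0.0):
        """tasks: list of (name, length); services: name -> (price_per_unit_time, speed)."""
        plan = []
        for name, length in tasks:
            # Services ordered by what this task would cost on each of them.
            by_cost = sorted(services, key=lambda s: services[s][0] * length / services[s][1])
            chosen = by_cost[-1]  # fallback if nothing becomes free within max_delay
            for s in by_cost:
                wait = max(0.0, busy_until.get(s, 0.0) - now)
                if wait <= max_delay:
                    chosen = s
                    break
            price, speed = services[chosen]
            runtime = length / speed
            start = max(now, busy_until.get(chosen, 0.0))
            busy_until[chosen] = start + runtime
            plan.append((name, chosen, round(price * runtime, 2)))
        return plan

    if __name__ == "__main__":
        tasks = [("stage1", 10.0), ("stage2", 4.0), ("stage3", 6.0)]
        services = {"cheap_vm": (0.1, 1.0), "fast_vm": (0.5, 4.0)}
        busy_until = {"cheap_vm": 5.0}   # the cheap VM is occupied for a while
        print(min_cost_with_delay(tasks, services, busy_until))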

    A Systematic Literature Review on Task Allocation and Performance Management Techniques in Cloud Data Center

    As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of task allocation and performance management techniques employed in cloud data centers. The aim is to systematically categorize and organize previous research by identifying cloud computing methodologies, categories, and gaps. A literature review was conducted, covering 463 task allocation papers and 480 performance management papers. The review revealed three task allocation research areas and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance task allocation and performance management in cloud computing. Shortcomings in each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability. Innovative methodologies can steer future research to fill gaps in the literature.