    Computation offloading and resource allocation in vehicular networks based on dual-side cost minimization

    The proliferation of smart vehicular terminals (VTs) and their resource-hungry applications poses serious challenges to the processing capabilities of VTs and the delivery of vehicular services. Mobile Edge Computing (MEC) offers a promising paradigm for solving this problem by offloading VT applications to proximal MEC servers, while TV white space (TVWS) bands can supplement the bandwidth available for computation offloading. In this paper, we consider a cognitive vehicular network (CVN) operating in the TVWS band and formulate a dual-side optimization problem that minimizes the costs of the VTs and of the MEC server simultaneously. Specifically, dual-side cost minimization is achieved by jointly optimizing the offloading decision and local CPU frequency on the VT side, and the radio resource allocation and server provisioning on the server side, while guaranteeing network stability. Based on Lyapunov optimization, we design an algorithm called DDORV to tackle the joint optimization problem; it requires only current system states, such as channel states and traffic arrivals. The closed-form solution to the VT-side problem is obtained by a simple derivation and a comparison of two candidate values. For the MEC server-side optimization, we first determine server provisioning independently, and then devise an iterative algorithm based on continuous relaxation and Lagrangian dual decomposition for joint radio resource and power allocation. Simulation results demonstrate that DDORV converges quickly, balances the cost-delay tradeoff flexibly, and achieves greater cost reduction than existing schemes.
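
    As a hedged illustration of the VT-side step (a closed-form choice made by comparing two values under the drift-plus-penalty framework), the sketch below assumes a cubic CPU power model, a fixed uplink rate, and made-up parameter values; none of these come from the paper.

        # Minimal sketch of a drift-plus-penalty VT-side decision. The cost and
        # rate models below are illustrative assumptions, not the paper's formulas.
        V = 50.0                 # Lyapunov tradeoff parameter (cost vs. delay)
        KAPPA = 1e-27            # assumed effective switched capacitance of the CPU
        CYCLES_PER_BIT = 1000.0  # assumed computation intensity

        def vt_side_decision(queue_bits, f_local_hz, tx_power_w, rate_bps, slot_s):
            """Pick local execution or offloading by comparing two scores."""
            # Candidate 1: local execution on the VT's own CPU.
            local_bits = f_local_hz * slot_s / CYCLES_PER_BIT
            local_energy = KAPPA * f_local_hz ** 3 * slot_s
            local_score = V * local_energy - queue_bits * local_bits
            # Candidate 2: offload over the (TVWS-supplemented) uplink.
            offload_bits = rate_bps * slot_s
            offload_energy = tx_power_w * slot_s
            offload_score = V * offload_energy - queue_bits * offload_bits
            if local_score <= offload_score:
                return "local", local_bits
            return "offload", offload_bits

        # Per-slot queue update: Q(t+1) = max(Q(t) - served, 0) + arrivals.
        mode, served = vt_side_decision(2e6, 1e9, 0.2, 5e6, 0.01)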

    A Deep Reinforcement Learning-Based Model for Optimal Resource Allocation and Task Scheduling in Cloud Computing

    The advent of cloud computing has dramatically altered how information is stored and retrieved. However, the effectiveness and speed of cloud-based applications can be significantly degraded by inefficient resource distribution and task scheduling. Such issues have been challenging, but machine learning and deep learning methods have shown great potential in recent years. This paper proposes a technique combining Deep Q-Network and Actor-Critic (DQNAC) models that enhances cloud computing efficiency by optimizing resource allocation and task scheduling. We evaluate our approach on a dataset of real-world cloud workload traces and demonstrate that it can significantly improve resource utilization and overall performance compared with traditional approaches. Furthermore, our findings indicate that deep reinforcement learning (DRL)-based methods can be potent and effective for optimizing cloud computing, leading to improved efficiency and flexibility of cloud-based applications.
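
    The abstract does not spell out the DQNAC architecture, so the toy sketch below shows only the actor-critic half of the idea: a softmax actor that assigns tasks to virtual machines and a scalar-baseline critic. The load-balance reward and all parameters are assumptions for illustration.

        import numpy as np

        # Toy actor-critic scheduler: the actor keeps softmax preferences over
        # VMs, the critic keeps a running reward baseline. This is a stateless
        # caricature of the DQNAC idea, not the paper's architecture.
        rng = np.random.default_rng(0)
        n_vms = 4
        theta = np.zeros(n_vms)          # actor parameters (VM preferences)
        baseline = 0.0                   # critic: running value estimate
        lr_actor, lr_critic = 0.05, 0.1
        loads = np.zeros(n_vms)

        def softmax(x):
            z = np.exp(x - x.max())
            return z / z.sum()

        for step in range(5000):
            task = rng.uniform(0.5, 1.5)               # incoming task size
            probs = softmax(theta)
            vm = rng.choice(n_vms, p=probs)            # schedule task to a VM
            loads[vm] += task
            reward = -loads.std()                      # reward balanced utilization
            td_error = reward - baseline               # critic's advantage estimate
            baseline += lr_critic * td_error           # critic update
            grad_logpi = -probs
            grad_logpi[vm] += 1.0                      # gradient of log softmax policy
            theta += lr_actor * td_error * grad_logpi  # actor policy-gradient update
            loads *= 0.9                               # tasks drain between steps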

    Dynamic NOMA-Based Computation Offloading in Vehicular Platoons

    Both the mobile edge computing (MEC) based and the fog computing (FC) aided Internet of Vehicles (IoV) constitute promising paradigms for meeting the demands of low-latency pervasive computing. To this end, we construct a dynamic NOMA-based computation offloading scheme for vehicular platoons on highways, where vehicles can offload their computing tasks to other platoon members. To cope with rapidly fluctuating channel quality, we divide the timeline into successive time slots according to the channel's coherence time. Robust computing and offloading decisions are made for each time slot, taking channel estimation errors into account. For a given time slot, we first analytically characterize both the locally computed and the offloaded source data, as well as the energy consumption of every vehicle in the platoon. We then formulate the problem of minimizing the long-term energy consumption by optimizing the allocation of both communication and computing resources. To solve the formulated problem, we design an online algorithm based on the classic Lyapunov optimization method and the block successive upper bound minimization (BSUM) method. Finally, numerical simulation results characterize the performance of our algorithm and demonstrate its advantages over both the local computing scheme and the orthogonal multiple access (OMA) based offloading scheme.
    Comment: 11 pages, 9 figures
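
    As a rough sketch of the per-slot structure described above (queue-driven decisions refined by BSUM-style block updates), the code below alternates between a communication block and a computing block using stand-in rate and power models and a simple grid search. It is not the paper's algorithm, and in this toy the two blocks interact only through the shared queue backlog.

        import numpy as np

        # Per-slot skeleton in the spirit of Lyapunov + BSUM: given the current
        # queue backlog and a channel estimate, alternately re-optimize the
        # transmit power (communication block) and the CPU share (computing
        # block). All models here are illustrative assumptions.
        def per_slot_decision(queue_bits, h_est, V=10.0, iters=5):
            p, f = 0.0, 0.0
            cand = np.linspace(0.0, 1.0, 101)            # shared search grid
            for _ in range(iters):                       # BSUM-style sweeps
                # Block 1: transmit power (drives the offloading rate).
                rate = np.log2(1.0 + cand * h_est)       # assumed rate model
                p = cand[np.argmin(V * cand - queue_bits * rate)]
                # Block 2: CPU share (cubic power model for local computing).
                f = cand[np.argmin(V * cand ** 3 - queue_bits * cand)]
            return p, f

        # Queue evolves as Q(t+1) = max(Q(t) - served(p, f), 0) + arrivals(t);
        # a pessimistic h_est is one simple way to absorb estimation error.
        p_t, f_t = per_slot_decision(queue_bits=3.0, h_est=8.0)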

    The open banking era: An optimal model for the emergency fund

    The COVID-19 outbreak has negatively impacted the income of many bank users. Many users without emergency funds had difficulty coping with this unexpected event and had to rely on credit or apply to the government for bailout funds. It is therefore necessary to develop spending and deposit plans based on users' transaction data to help them save sufficient emergency funds for unexpected events. In this paper, an emergency fund model is proposed, and two optimization algorithms are applied to find its optimal solution. An early warning mechanism is also introduced: an unexpected-event prevention index and a consumption index measure, respectively, a user's ability to cope with unexpected events and the reasonableness of their expenditure, and provide early warnings to users. Finally, the model is tested on real bank users' data and its performance is analysed. The experiments show that, compared with the no-planning scenario, the model helps users save more emergency funds to cope with unexpected events; furthermore, the proposed model is real-time and sensitive.
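
    The abstract names the two indices without defining them, so the definitions below are assumptions chosen only to make the early-warning idea concrete.

        from dataclasses import dataclass

        # The abstract does not give formulas for the two indices; these
        # definitions are assumptions made purely for illustration.
        @dataclass
        class MonthlySummary:
            income: float
            essential_spend: float
            total_spend: float
            emergency_fund: float

        def prevention_index(s: MonthlySummary, months_target: float = 3.0) -> float:
            """Fund coverage versus a target of months_target months of
            essential spending; a value below 1.0 triggers an early warning."""
            needed = months_target * s.essential_spend
            return s.emergency_fund / needed if needed > 0 else float("inf")

        def consumption_index(s: MonthlySummary) -> float:
            """Share of income spent this month; values near or above 1.0
            flag unreasonable expenditure."""
            return s.total_spend / s.income if s.income > 0 else float("inf")

        # Example: 2000 saved against a 3600 target -> warning is raised.
        warn = prevention_index(MonthlySummary(3000, 1200, 2600, 2000)) < 1.0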