    Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks

    An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all Internet-of-Things (IoT) users, by optimizing the offloading decision, transmission power, and resource allocation in a large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration scheme is proposed to enhance search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to help the DRL train the policy network and find the optimal offloading policy. Numerical results demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
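The adaptive simulated-annealing action search described above can be illustrated with a minimal sketch. Everything here is hypothetical: the `score` function is a toy quadratic surrogate standing in for the paper's policy/value network, and the temperature-driven mutation size only loosely mirrors the adaptive h-mutation idea.

```python
import math
import random

# Hypothetical stand-in for the DRL critic: scores a binary offloading
# decision vector (higher is better). A real system would query the
# trained network instead of this toy surrogate.
def score(action, target):
    return -sum((a - t) ** 2 for a, t in zip(action, target))

def asa_search(n_users, target, iters=500, t0=1.0, cooling=0.99, seed=0):
    """Simplified adaptive simulated-annealing action search.

    The number of flipped bits h shrinks with the temperature, so the
    search moves from broad exploration to fine local refinement.
    """
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_users)]
    cur_s = score(current, target)
    best, best_s = current[:], cur_s
    temp = t0
    for _ in range(iters):
        h = max(1, round(temp * n_users))        # adaptive mutation size
        cand = current[:]
        for i in rng.sample(range(n_users), h):  # flip h random bits
            cand[i] ^= 1
        cand_s = score(cand, target)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if cand_s > cur_s or rng.random() < math.exp((cand_s - cur_s) / temp):
            current, cur_s = cand, cand_s
            if cur_s > best_s:
                best, best_s = current[:], cur_s
        temp *= cooling
    return best, best_s

best, val = asa_search(8, target=[1, 0, 1, 1, 0, 0, 1, 0])
```

In the paper's setting the accepted action would then be stored (with its state and reward) in the prioritized replay buffer to train the policy network.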

    Removing Channel Estimation by Location-Only Based Deep Learning for RIS Aided Mobile Edge Computing

    In this paper, we investigate a deep learning architecture for lightweight online implementation of a reconfigurable intelligent surface (RIS)-aided multi-user mobile edge computing (MEC) system, where optimized performance can be achieved based on user equipment's (UEs') location-only information. Assuming that each UE has a limited energy budget, we aim to maximize the total completed task-input bits (TCTB) of all UEs within a given time slot, by jointly optimizing the RIS reflecting coefficients, the receive beamforming vectors, and the UEs' energy partition strategies for local computing and computation offloading. Due to the coupled optimization variables, a three-step block coordinate descent (BCD) algorithm is first proposed to solve the formulated TCTB maximization problem iteratively with guaranteed convergence. A location-only deep learning architecture is then constructed to emulate the proposed BCD optimization algorithm, through which pilot channel estimation and feedback can be removed for low-complexity online implementation. The simulation results reveal a close match between the performance of the BCD optimization algorithm and the location-only data-driven architecture, both with superior performance to existing benchmarks.
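The three-step BCD structure can be sketched with a toy problem. The objective `f` below is a hypothetical coupled quadratic whose three scalar blocks merely stand in for the RIS coefficients, receive beamformers, and energy partitions; each step is the exact minimizer of one block with the other two held fixed, which guarantees a monotonically non-increasing objective.

```python
# Hypothetical coupled objective over three variable blocks (x, y, z).
def f(x, y, z):
    return (x - y) ** 2 + (y - z) ** 2 + (z - 1) ** 2

def bcd(iters=50, tol=1e-9):
    """Three-step block coordinate descent on the toy objective.

    Each step solves one block exactly while fixing the others, so the
    objective value never increases and the iteration converges.
    """
    x = y = z = 0.0
    prev = f(x, y, z)
    cur = prev
    for _ in range(iters):
        x = y                 # step 1: argmin over x with (y, z) fixed
        y = (x + z) / 2       # step 2: argmin over y with (x, z) fixed
        z = (y + 1) / 2       # step 3: argmin over z with (x, y) fixed
        cur = f(x, y, z)
        if prev - cur < tol:  # stop once the decrease stalls
            break
        prev = cur
    return x, y, z, cur

x, y, z, val = bcd()
```

In the location-only architecture, a network trained on (UE location, BCD solution) pairs would then replace this iteration at inference time, removing the need for pilot-based channel estimation.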

    A Novel Cross Entropy Approach for Offloading Learning in Mobile Edge Computing

    In this letter, we propose a novel offloading learning approach to balance energy consumption and latency in a multi-tier network with mobile edge computing. To solve this integer programming problem, instead of using conventional optimization tools, we apply a cross entropy approach that iteratively learns the probability distribution of elite solution samples. Compared with existing methods, the proposed approach permits a parallel computing architecture and is verified to be computationally very efficient. Specifically, it achieves near-optimal performance and performs well across different choices of hyperparameter values in the proposed learning approach.
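The cross-entropy iteration described above can be sketched for a binary offloading decision. The `cost` function is a hypothetical Hamming-distance surrogate for the letter's energy/latency objective, and the smoothing constant is an illustrative choice; the core loop (sample from per-user probabilities, keep the elite fraction, refit the probabilities to the elites) is the standard cross-entropy method.

```python
import random

# Hypothetical cost standing in for the weighted energy/latency objective:
# counts disagreements with a fixed target decision vector.
def cost(decision, target):
    return sum(d != t for d, t in zip(decision, target))

def cem_offloading(n_users, target, pop=200, elite_frac=0.1,
                   iters=30, smooth=0.7, seed=0):
    """Cross-entropy method over binary offloading decisions (sketch).

    Maintains one offloading probability per user, samples a population
    of candidate decisions, and refits the probabilities to the elite
    (lowest-cost) samples each iteration.
    """
    rng = random.Random(seed)
    p = [0.5] * n_users                    # initial sampling distribution
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[int(rng.random() < pi) for pi in p] for _ in range(pop)]
        samples.sort(key=lambda s: cost(s, target))
        elite = samples[:n_elite]
        # Refit each probability to the elite mean, with smoothing for stability.
        for i in range(n_users):
            mean_i = sum(s[i] for s in elite) / n_elite
            p[i] = smooth * mean_i + (1 - smooth) * p[i]
    best = [int(pi > 0.5) for pi in p]
    return best, cost(best, target)

best, c = cem_offloading(10, target=[1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
```

Because each population of candidates can be evaluated independently, the sampling-and-scoring step parallelizes naturally, which is the source of the computational efficiency claimed in the abstract.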