Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet-of-Things (IoT) users, by optimizing offloading decision, transmission power, and resource allocation in the large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which can reduce the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy. Numerical results are provided to demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
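As a rough illustration of the 2r-SAE idea, the sketch below trains a stacked autoencoder with an L2 regularizer to compress high-dimensional CQI vectors into a compact code usable as a DRL state. All dimensions, the 1e-4 regularization weight, and the random stand-in CQI batch are assumptions for illustration; the paper's "related" regularization term is not reproduced here.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Stacked autoencoder: mirrored encoder/decoder MLPs built layer by layer."""
    def __init__(self, in_dim=200, hidden_dims=(128, 64), code_dim=32):
        super().__init__()
        dims = [in_dim, *hidden_dims, code_dim]
        enc, dec = [], []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.ReLU()]
        for d_in, d_out in zip(reversed(dims[1:]), reversed(dims[:-1])):
            dec += [nn.Linear(d_in, d_out), nn.ReLU()]
        dec[-1] = nn.Identity()  # linear output layer for reconstruction
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        code = self.encoder(x)          # compressed CQI representation (DRL state)
        return self.decoder(code), code

model = StackedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cqi = torch.randn(256, 200)             # stand-in batch of CQI vectors (assumed dims)
recon, code = model(cqi)
l2 = sum(p.pow(2).sum() for p in model.parameters())
loss = nn.functional.mse_loss(recon, cqi) + 1e-4 * l2  # reconstruction + L2 regularizer
opt.zero_grad(); loss.backward(); opt.step()
```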
Joint Optimization of Deployment and Trajectory in UAV and IRS-Assisted IoT Data Collection System
Unmanned aerial vehicles (UAVs) can be applied in many Internet of Things
(IoT) systems, e.g., smart farms, as a data collection platform. However, the
UAV-IoT wireless channels may be occasionally blocked by trees or high-rise
buildings. An intelligent reflecting surface (IRS) can be applied to improve
the wireless channel quality by smartly reflecting the signal via a large
number of low-cost passive reflective elements. This article aims to minimize
the energy consumption of the system by jointly optimizing the deployment and
trajectory of the UAV. The problem is formulated as a
mixed-integer nonlinear program (MINLP), which is challenging to solve
with traditional methods because the solution easily falls into a local
optimum. To address this issue, we propose a joint optimization
framework of deployment and trajectory (JOLT), where an adaptive whale
optimization algorithm (AWOA) is applied to optimize the deployment of the UAV,
and an elastic ring self-organizing map (ERSOM) is introduced to optimize the
trajectory of the UAV. Specifically, in AWOA, a variable-length population
strategy is applied to find the optimal number of stop points, and a nonlinear
parameter a and a partial mutation rule are introduced to balance the
exploration and exploitation. In ERSOM, a competitive neural network is also
introduced to learn the trajectory of the UAV by competitive learning, and a
ring structure is presented to avoid trajectory intersections. Extensive
experiments are carried out to show the effectiveness of the proposed JOLT
framework.
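The ring-structured SOM idea behind ERSOM can be sketched with the classic SOM-for-TSP scheme: neurons arranged on a ring compete for each stop point, and the winner and its ring neighbors move toward it, so the ring gradually unfolds into a short closed tour. The minimal NumPy sketch below is not the authors' ERSOM; the node count, learning-rate and neighborhood schedules, and the random stop points are all illustrative assumptions.

```python
import numpy as np

def ring_som_trajectory(points, n_nodes=None, iters=2000, lr0=0.8, sigma0=None, seed=0):
    """Order 2-D stop points into a ring tour with a 1-D ring SOM."""
    rng = np.random.default_rng(seed)
    n_nodes = n_nodes or 3 * len(points)
    sigma0 = sigma0 or n_nodes / 8
    # Initialize neurons on a circle around the centroid of the points.
    angles = np.linspace(0, 2 * np.pi, n_nodes, endpoint=False)
    radius = points.std() + 1e-9
    net = points.mean(axis=0) + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    for t in range(iters):
        lr = lr0 * (1 - t / iters)
        sigma = max(sigma0 * (1 - t / iters), 1.0)
        p = points[rng.integers(len(points))]
        winner = np.argmin(((net - p) ** 2).sum(axis=1))        # competitive step
        ring_dist = np.abs(np.arange(n_nodes) - winner)
        ring_dist = np.minimum(ring_dist, n_nodes - ring_dist)  # distance wraps on the ring
        h = np.exp(-(ring_dist ** 2) / (2 * sigma ** 2))        # ring neighborhood
        net += lr * h[:, None] * (p - net)
    # Visit order: each point mapped to its winning neuron's ring index.
    return np.argsort([np.argmin(((net - p) ** 2).sum(axis=1)) for p in points])

stops = np.random.default_rng(1).random((10, 2))  # hypothetical UAV stop points
print(ring_som_trajectory(stops))
```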
Deep learning based joint resource scheduling algorithms for hybrid MEC networks
In this paper, we consider a hybrid mobile edge computing (H-MEC) platform, which includes ground stations (GSs), ground vehicles (GVs) and unmanned aerial vehicles (UAVs), all with mobile edge clouds installed, enabling user equipments (UEs) or Internet of Things (IoT) devices with intensive computing tasks to offload them. Our objective is to obtain an online offloading algorithm that minimizes the energy consumption of all the UEs by jointly optimizing the positions of GVs and UAVs, user association and resource allocation in real time, while considering the dynamic environment. To this end, we propose a hybrid deep learning based online offloading (H2O) framework, in which a large-scale path-loss fuzzy c-means (LSFCM) algorithm is first proposed and used to predict the optimal positions of GVs and UAVs. Secondly, a fuzzy membership matrix U-based particle swarm optimization (U-PSO) algorithm is applied to solve the mixed-integer nonlinear programming (MINLP) problems and generate the sample datasets for the deep neural network (DNN), where the fuzzy membership matrix captures the small-scale fading effects and the mutual-interference information. Thirdly, a DNN with a scheduling layer is introduced to provide user association and computing resource allocation under the practical latency requirements of the tasks and the limited computing resources of the H-MEC. In addition, unlike traditional DNN predictors, we input only one UE's information to the DNN at a time, which suits scenarios where the number of UEs varies and avoids the curse of dimensionality in the DNN.
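To give a flavor of the clustering step, the sketch below implements plain fuzzy c-means, whose membership matrix U is what U-PSO builds on; the paper's LSFCM additionally incorporates large-scale path loss, which is omitted here. The cluster count, fuzzifier m, and the random UE layout are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

ue_positions = np.random.default_rng(2).random((50, 2)) * 1000  # hypothetical UE layout (m)
centers, U = fuzzy_c_means(ue_positions, c=4)
print(centers)  # candidate GV/UAV positions
```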
Large AI Model Empowered Multimodal Semantic Communications
Multimodal signals, including text, audio, image and video, can be integrated
into Semantic Communication (SC) for providing an immersive experience with low
latency and high quality at the semantic level. However, the multimodal SC has
several challenges, including data heterogeneity, semantic ambiguity, and
signal fading. Recent advancements in large AI models, particularly in
Multimodal Language Models (MLMs) and Large Language Models (LLMs), offer potential
solutions for these issues. To this end, we propose a Large AI Model-based
Multimodal SC (LAM-MSC) framework, in which we first present the MLM-based
Multimodal Alignment (MMA) that utilizes the MLM to enable the transformation
between multimodal and unimodal data while preserving semantic consistency.
Then, a personalized LLM-based Knowledge Base (LKB) is proposed, which allows
users to perform personalized semantic extraction or recovery through the LLM.
This effectively addresses semantic ambiguity. Next, we apply the
Conditional Generative adversarial networks-based channel Estimation (CGE) to
obtain Channel State Information (CSI). This approach effectively mitigates the
impact of fading channels in SC. Finally, we conduct simulations that
demonstrate the superior performance of the LAM-MSC framework.
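A conditional-GAN channel estimator of the kind CGE describes might be wired up as below: the generator maps received pilots (plus noise) to a CSI estimate, and the discriminator judges pilot-CSI pairs, so both networks are conditioned on the pilots. This is a generic conditional-GAN sketch, not the paper's CGE; all dimensions and the stand-in pilot/CSI tensors are assumptions.

```python
import torch
import torch.nn as nn

# Generator: (received pilots, noise) -> CSI estimate; Discriminator: judges
# (pilots, CSI) pairs, conditioning both networks on the pilots.
class Generator(nn.Module):
    def __init__(self, pilot_dim=64, noise_dim=16, csi_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pilot_dim + noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, csi_dim))
    def forward(self, pilots, z):
        return self.net(torch.cat([pilots, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self, pilot_dim=64, csi_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pilot_dim + csi_dim, 128), nn.LeakyReLU(0.2),
                                 nn.Linear(128, 1))
    def forward(self, pilots, csi):
        return self.net(torch.cat([pilots, csi], dim=1))

G, D, bce = Generator(), Discriminator(), nn.BCEWithLogitsLoss()
pilots, true_csi = torch.randn(32, 64), torch.randn(32, 64)  # stand-in training pair
fake_csi = G(pilots, torch.randn(32, 16))
d_loss = bce(D(pilots, true_csi), torch.ones(32, 1)) + \
         bce(D(pilots, fake_csi.detach()), torch.zeros(32, 1))
g_loss = bce(D(pilots, fake_csi), torch.ones(32, 1))  # optimizer steps omitted
```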
LAMBO: Large Language Model Empowered Edge Intelligence
Next-generation edge intelligence is anticipated to bring huge benefits to
various applications, e.g., offloading systems. However, traditional deep
offloading architectures face several issues, including heterogeneous
constraints, partial perception, uncertain generalization, and lack of
tractability. In this context, the integration of offloading with large
language models (LLMs) presents numerous advantages. Therefore, we propose an
LLM-Based Offloading (LAMBO) framework for mobile edge computing (MEC), which
comprises four components: (i) Input embedding (IE), which is used to represent
the information of the offloading system with constraints and prompts through
learnable vectors with high quality; (ii) Asymmetric encoder-decoder (AED)
model, which is a decision-making module with a deep encoder and a shallow
decoder. It can achieve high performance based on multi-head self-attention
schemes; (iii) Actor-critic reinforcement learning (ACRL) module, which is
employed to pre-train the whole AED for different optimization tasks under
corresponding prompts; and (iv) Active learning from expert feedback (ALEF),
which can be used to fine-tune the decoder part of the AED while adapting to
dynamic environmental changes. Our simulation results corroborate the
advantages of the proposed LAMBO framework.
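The asymmetric encoder-decoder can be sketched with stock Transformer modules: a deep (here 6-layer) encoder digests the embedded system state and prompt, while a shallow (1-layer) decoder emits per-task decisions. Layer counts, dimensions, and the binary local-vs-edge head below are assumptions; the actual AED, input embedding, and prompt design are not reproduced.

```python
import torch
import torch.nn as nn

# Deep encoder + shallow decoder, as in the AED idea (dimensions illustrative).
d_model, n_heads = 128, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=6)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=1)

task_tokens = torch.randn(4, 10, d_model)   # embedded offloading-system state + prompt
queries = torch.randn(4, 10, d_model)       # one query per pending task
memory = encoder(task_tokens)               # heavy encoding pass
decisions = decoder(queries, memory)        # light decoding pass
offload_logits = nn.Linear(d_model, 2)(decisions)  # e.g., local vs. edge execution
```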
Large AI Model-Based Semantic Communications
Semantic communication (SC) is an emerging intelligent paradigm, offering
solutions for various future applications like metaverse, mixed-reality, and
the Internet of everything. However, in current SC systems, the construction of
the knowledge base (KB) faces several issues, including limited knowledge
representation, frequent knowledge updates, and insecure knowledge sharing.
Fortunately, the development of large AI models provides new solutions to
overcome the above issues. Here, we propose a large AI model-based SC framework
(LAM-SC) specifically designed for image data, where we first design the
segment anything model (SAM)-based KB (SKB) that can split the original image
into different semantic segments by universal semantic knowledge. Then, we
present an attention-based semantic integration (ASI) to weigh the semantic
segments generated by SKB without human participation and integrate them as the
semantic-aware image. Additionally, we propose an adaptive semantic compression
(ASC) encoding to remove redundant information in semantic features, thereby
reducing communication overhead. Finally, through simulations, we demonstrate
the effectiveness of the LAM-SC framework and the significance of the large AI
model-based KB development in future SC paradigms.
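One way to read the ASI step is as learned attention over candidate segments: score each segment's pooled feature, softmax the scores, and keep the above-average segments to form the semantic-aware image. The sketch below is a toy version under that reading; the scorer architecture, the keep threshold, and the random stand-ins for SAM masks and features are all assumptions.

```python
import torch
import torch.nn as nn

# Toy attention over K candidate semantic segments.
K, feat_dim = 5, 64
seg_masks = torch.rand(K, 1, 32, 32) > 0.5          # stand-ins for SAM segment masks
seg_feats = torch.randn(K, feat_dim)                # pooled per-segment features
scorer = nn.Sequential(nn.Linear(feat_dim, 32), nn.Tanh(), nn.Linear(32, 1))
weights = torch.softmax(scorer(seg_feats).squeeze(-1), dim=0)
keep = weights > (1.0 / K)                          # keep above-average segments
image = torch.randn(1, 3, 32, 32)                   # stand-in original image
semantic_aware = seg_masks[keep].any(dim=0) * image # zero out low-value regions
```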
Large Generative Model Assisted 3D Semantic Communication
Semantic Communication (SC) is a novel paradigm for data transmission in 6G.
However, several challenges arise when performing SC in 3D scenarios:
1) 3D semantic extraction; 2) Latent semantic redundancy; and 3) Uncertain
channel estimation. To address these issues, we propose a Generative AI Model
assisted 3D SC (GAM-3DSC) system. Firstly, we introduce a 3D Semantic Extractor
(3DSE), which employs generative AI models, including Segment Anything Model
(SAM) and Neural Radiance Field (NeRF), to extract key semantics from a 3D
scenario based on user requirements. The extracted 3D semantics are represented
as multi-perspective images of the goal-oriented 3D object. Then, we present an
Adaptive Semantic Compression Model (ASCM) for encoding these multi-perspective
images, in which we use a semantic encoder with two output heads to perform
semantic encoding and mask redundant semantics in the latent semantic space,
respectively. Next, we design a conditional Generative adversarial network and
Diffusion model-aided Channel Estimation (GDCE) to estimate and refine the
Channel State Information (CSI) of physical channels. Finally, simulation
results demonstrate the advantages of the proposed GAM-3DSC system in
effectively transmitting the goal-oriented 3D scenario.
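The two-output-head encoder in ASCM might look roughly like this: a shared backbone feeds one head that emits the latent semantics and a second sigmoid head that emits a soft mask suppressing redundant latent dimensions. Dimensions, the 0.5 reporting threshold, and the stand-in multi-perspective features are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Two-headed semantic encoder: one head emits latent semantics, the other a
# soft mask that suppresses redundant latent dimensions before transmission.
class TwoHeadEncoder(nn.Module):
    def __init__(self, in_dim=512, latent_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.semantic_head = nn.Linear(256, latent_dim)
        self.mask_head = nn.Sequential(nn.Linear(256, latent_dim), nn.Sigmoid())
    def forward(self, x):
        h = self.backbone(x)
        z, mask = self.semantic_head(h), self.mask_head(h)
        return z * mask, mask   # masked latent ~ compressed semantics

enc = TwoHeadEncoder()
views = torch.randn(8, 512)     # stand-in features of multi-perspective images
z, mask = enc(views)
print(f"kept {float((mask > 0.5).float().mean()):.0%} of latent dims")
```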
MARS: A DRL-Based Multi-Task Resource Scheduling Framework for UAV With IRS-Assisted Mobile Edge Computing System
This article studies a dynamic Mobile Edge Computing (MEC) system assisted by Unmanned Aerial Vehicles (UAVs) and Intelligent Reflecting Surfaces (IRSs). We propose a scalable resource scheduling algorithm to minimize the energy consumption of all UEs and UAVs in an MEC system with a variable number of UAVs, and a Multi-tAsk Resource Scheduling (MARS) framework based on Deep Reinforcement Learning (DRL) to solve the problem. First, we present a novel Advantage Actor-Critic (A2C) structure with a state-value critic and an entropy-enhanced actor to reduce variance and enhance the policy search of DRL. Then, we present a multi-head agent with three different heads, in which a classification head makes offloading decisions, a regression head allocates computational resources, and a critic head estimates the state value of the selected action. Next, we introduce a multi-task controller that adapts the agent to the varying number of UAVs by loading or unloading a part of the agent's weights. Finally, a Light Wolf Search (LWS) is introduced as an action-refinement step to enhance exploration in the dynamic action space. The numerical results demonstrate the feasibility and efficiency of the MARS framework.
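A minimal version of the multi-head agent could share a trunk across the three heads, as sketched below: logits for the offloading decision, a sigmoid resource share, and a scalar state value, with an entropy bonus added to the actor loss in the spirit of the entropy-enhanced A2C. All dimensions, the 0.01 entropy coefficient, and the placeholder advantage are assumptions; LWS and the multi-task controller are not shown.

```python
import torch
import torch.nn as nn

# Multi-head A2C agent: shared trunk, a classification head for offloading,
# a regression head for resource shares, and a critic head for the state value.
class MultiHeadAgent(nn.Module):
    def __init__(self, state_dim=32, n_targets=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.offload_head = nn.Linear(128, n_targets)                     # where to offload
        self.resource_head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())  # share in [0,1]
        self.critic_head = nn.Linear(128, 1)                              # state value
    def forward(self, s):
        h = self.trunk(s)
        return self.offload_head(h), self.resource_head(h), self.critic_head(h)

agent = MultiHeadAgent()
state = torch.randn(1, 32)
logits, share, value = agent(state)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
advantage = torch.tensor([0.5])   # placeholder; normally returns - value
# Entropy-enhanced actor loss: maximize log-prob-weighted advantage + entropy.
actor_loss = -(dist.log_prob(action) * advantage + 0.01 * dist.entropy()).mean()
```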
Distributed Resource Scheduling for Large-Scale MEC Systems: A Multi-Agent Ensemble Deep Reinforcement Learning with Imitation Acceleration
In large-scale mobile edge computing (MEC) systems, task latency and energy consumption are important for massive resource-consuming and delay-sensitive Internet of Things devices (IoTDs). Against this background, we propose a distributed intelligent resource scheduling (DIRS) framework to minimize the sum of task latency and energy consumption for all IoTDs, which can be formulated as a mixed-integer nonlinear program. The DIRS framework combines centralized training relying on global information with distributed decision making by an agent deployed in each MEC server. Specifically, we first introduce a novel multi-agent ensemble-assisted distributed deep reinforcement learning (DRL) architecture, which simplifies the overall neural network structure of each agent by partitioning the state space and also improves the performance of a single agent by combining the decisions of all the agents. Secondly, we apply action refinement to enhance the exploration ability of the proposed DIRS framework, where near-optimal state-action pairs are obtained by a novel Lévy flight search. Finally, an imitation acceleration scheme is presented to pre-train all the agents, which can significantly accelerate the learning process of the proposed framework by learning from the professional experience in a small amount of demonstration data. The simulation results in three typical scenarios demonstrate that the proposed DIRS framework is efficient and outperforms the existing benchmark schemes.
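A Lévy flight search draws heavy-tailed step lengths, so most refinement steps stay local while occasional long jumps escape poor optima. The sketch below uses the standard Mantegna algorithm for the step lengths and is generic rather than the paper's search; beta, the 0.05 step scale, and the hypothetical allocation vector are assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, size=1, rng=None):
    """Draw Lévy-flight step lengths via the Mantegna algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)   # heavy-tailed numerator
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# Refine a continuous resource-allocation action around a DRL policy output:
action = np.array([0.4, 0.7])                    # hypothetical allocation vector
candidate = np.clip(action + 0.05 * levy_step(size=2), 0.0, 1.0)
print(candidate)
```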