A Resource Allocation Algorithm for Ultra-Dense Networks Based on Deep Reinforcement Learning
The resource optimization of ultra-dense networks (UDNs) is critical to meeting users' huge demand for wireless data traffic, but mainstream optimization algorithms suffer from problems such as poor optimization performance and high computational load. This paper proposes a wireless resource allocation algorithm based on deep reinforcement learning (DRL), which aims to maximize the total throughput of the entire network by casting the resource allocation problem as a deep Q-learning process. To allocate resources in UDNs effectively, the DRL algorithm is introduced to improve the efficiency of wireless resource allocation; the authors adopt the resource allocation strategy of the deep Q-network (DQN), and employ experience replay and a target network to overcome the instability and divergence of results caused by the preceding network state and to mitigate overestimation of the Q value. Simulation results show that the proposed algorithm maximizes the total throughput of the network while making the network more energy-efficient and stable. It is therefore worthwhile to apply DRL to research on UDN resource allocation.
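The two DQN stabilizers this abstract mentions, experience replay and a target network, can be sketched at toy scale. The tabular "network" below is a stand-in for a deep Q-network, and all sizes and hyperparameters are illustrative assumptions, not the paper's implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience-replay buffer: stores transitions, samples them uniformly."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions that destabilizes training.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

class TinyDQN:
    """Tabular stand-in for a DQN, with a periodically synced target copy."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.9, sync_every=50):
        self.q = {(s, a): 0.0
                  for s in range(n_states) for a in range(n_actions)}
        self.target_q = dict(self.q)   # frozen copy used for bootstrapping
        self.n_actions = n_actions
        self.lr, self.gamma = lr, gamma
        self.sync_every, self.steps = sync_every, 0

    def update(self, batch):
        for s, a, r, s2 in batch:
            # Bootstrap from the *target* network, not the online one,
            # which damps the moving-target instability and the related
            # overestimation of Q values.
            best_next = max(self.target_q[(s2, a2)]
                            for a2 in range(self.n_actions))
            td_error = r + self.gamma * best_next - self.q[(s, a)]
            self.q[(s, a)] += self.lr * td_error
        self.steps += 1
        if self.steps % self.sync_every == 0:
            self.target_q = dict(self.q)   # periodic hard sync
```

In a real DQN the dict of Q-values is replaced by a neural network and the hard sync copies its weights, but the replay/target mechanics are the same.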
FEDRESOURCE: Federated Learning Based Resource Allocation in Modern Wireless Networks
Deep reinforcement learning (DRL) can effectively handle resource allocation (RA) in wireless networks. However, more complex networks learn more slowly, and a lack of network adaptability means new policies must be learned for newly introduced systems. To address these issues, this paper proposes a novel federated learning-based resource allocation technique (FEDRESOURCE) that performs RA efficiently in wireless networks. FEDRESOURCE uses federated learning (FL), a machine-learning technique that shares a DRL-based RA model between distributed systems and a cloud server to describe a policy. The regularized local loss arising in the network is reduced with a butterfly optimization technique, which improves the convergence of the FL algorithm. The suggested FL framework speeds up policy learning and allows adaptation by employing deep learning together with the optimization technique. Experiments were conducted using a Python-based simulator, and detailed numerical results are reported for the wireless RA sub-problems. The theoretical results of the FEDRESOURCE algorithm were validated in terms of transmission power, algorithm convergence, throughput, and cost. At maximum transmit power, FEDRESOURCE achieves up to 27%, 55%, and 68% better energy efficiency than the scheduling policy, asynchronous FL framework, and heterogeneous computation schemes, respectively, and increases discrimination accuracy by 1.7%, 1.2%, and 0.78% over the same baselines.
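The FL aggregation step behind a scheme like FEDRESOURCE can be sketched minimally: each distributed node takes gradient steps on its local RA loss, and the server combines the local models by weighted averaging. This is plain FedAvg; the paper's butterfly-optimization refinement is not reproduced, and the learning rate and weight vectors are illustrative:

```python
def local_step(weights, grad, lr=0.01):
    """One gradient step on a client's local loss."""
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average the client models, weighting
    each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Only model weights cross the network, never raw local data, which is what lets the DRL-based RA model be shared between distributed systems and the cloud server.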
Scheduling and Power Control for Wireless Multicast Systems via Deep Reinforcement Learning
Multicasting in wireless systems is a natural way to exploit the redundancy in user requests in a content-centric network. Power control and optimal scheduling can significantly improve the wireless multicast network's performance under fading. However, the model-based approaches to power control and scheduling studied earlier do not scale to large state spaces or changing system dynamics. In this paper, we use deep reinforcement learning, approximating the Q-function with a deep neural network, to obtain a power control policy that matches the optimal policy for a small network. We show that a power control policy can be learnt for reasonably large systems via this approach. Further, we use multi-timescale stochastic optimization to maintain the average power constraint. We demonstrate that a slight modification of the learning algorithm allows tracking of time-varying system statistics. Finally, we extend the multi-timescale approach to simultaneously learn the optimal queueing strategy along with power control. We demonstrate the scalability, tracking, and cross-layer optimization capabilities of our algorithms via simulations. The proposed multi-timescale approach can be used in general large-state-space dynamical systems with multiple objectives and constraints, and may be of independent interest.
Comment: arXiv admin note: substantial text overlap with arXiv:1910.0530
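The multi-timescale idea, tuning a Lagrange multiplier for the average-power constraint on a slower clock than the power policy itself, can be illustrated with a deterministic toy problem. The log(1+p) throughput model, the constraint value, the fallback power when the multiplier is zero, and the 1/t step size are illustrative assumptions, not the paper's system model:

```python
def two_timescale_power_control(avg_power_limit=1.0, steps=5000):
    lam = 0.0                       # Lagrange multiplier (slow variable)
    powers = []
    for t in range(1, steps + 1):
        # Fast timescale: maximize the penalized reward
        #   log(1 + p) - lam * p,
        # whose maximizer is p = max(0, 1/lam - 1) for lam > 0.
        # For lam == 0 the penalized problem is unbounded, so we use an
        # arbitrary power cap of 4.0 as a stand-in.
        p = max(0.0, 1.0 / lam - 1.0) if lam > 0 else 4.0
        powers.append(p)
        # Slow timescale: raise lam when average power is exceeded,
        # lower it otherwise; the decaying 1/t gain makes this the slow loop.
        lam = max(0.0, lam + (1.0 / t) * (p - avg_power_limit))
    avg_recent_power = sum(powers[-1000:]) / 1000
    return avg_recent_power, lam
```

At the fixed point, average power meets the constraint (p = 1) and the multiplier settles at lam = 0.5, the value at which the penalized optimum uses exactly the power budget.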
Learning-based Decision Making in Wireless Communications
Fueled by emerging applications and exponential increase in data traffic, wireless networks have recently grown significantly and become more complex. In such large-scale complex wireless networks, it is challenging and, oftentimes, infeasible for conventional optimization methods to quickly solve critical decision-making problems. With this motivation, in this thesis, machine learning methods are developed and utilized for obtaining optimal/near-optimal solutions for timely decision making in wireless networks.
Content caching at the edge nodes is a promising technique to reduce the data traffic in next-generation wireless networks. In this context, we in the first part of the thesis study content caching at the wireless network edge using a deep reinforcement learning framework with Wolpertinger architecture. Initially, we develop a learning-based caching policy for a single base station aiming at maximizing the long-term cache hit rate. Then, we extend this study to a wireless communication network with multiple edge nodes. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching.
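As a rough caricature of the hit-rate objective pursued by the learning-based caching policies above, the sketch below keeps an exponential-moving-average popularity estimate per item and greedily caches the top-k. This is not the Wolpertinger / actor-critic architecture of the thesis; the capacity, smoothing factor, and request distribution are illustrative assumptions:

```python
import random

class EMACache:
    def __init__(self, capacity, n_items, alpha=0.02):
        self.capacity = capacity
        self.alpha = alpha
        self.score = [0.0] * n_items          # popularity estimate per item
        self.cache = set(range(capacity))     # start with an arbitrary fill

    def request(self, item):
        hit = item in self.cache              # hit decided before learning
        # EMA update of the per-item request-frequency estimates
        for i in range(len(self.score)):
            x = 1.0 if i == item else 0.0
            self.score[i] = (1 - self.alpha) * self.score[i] + self.alpha * x
        # greedily cache the currently most popular items
        top = sorted(range(len(self.score)), key=lambda i: -self.score[i])
        self.cache = set(top[:self.capacity])
        return hit

def simulate(cache, n_requests, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_requests):
        # 80% of requests target 3 popular items, the rest a long tail
        item = rng.randrange(3) if rng.random() < 0.8 else rng.randrange(3, 20)
        hits += 1 if cache.request(item) else 0
    return hits / n_requests
```

An RL formulation replaces this greedy rule with a learned policy that can also anticipate popularity shifts, which is where the long-term (rather than instantaneous) hit-rate objective matters.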
Next, with the purpose of making efficient use of limited spectral resources, we develop a deep actor-critic reinforcement learning based framework for dynamic multichannel access. We consider both a single-user case and a scenario in which multiple users attempt to access channels simultaneously. In the single-user model, in order to evaluate the performance of the proposed channel access policy and the framework's tolerance against uncertainty, we explore different channel switching patterns and different switching probabilities. In the case of multiple users, we analyze the probabilities of each user accessing channels with favorable channel conditions and the probability of collision.
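The single-user multichannel access problem can be caricatured as an epsilon-greedy multi-armed bandit: the user learns per-channel success probabilities and concentrates on the best channel. The actor-critic framework of the thesis is not reproduced here, and the channel statistics, exploration rate, and horizon are illustrative assumptions:

```python
import random

def run_bandit(success_probs, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(success_probs)
    counts = [0] * n
    values = [0.0] * n                       # running mean reward per channel
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)             # explore a random channel
        else:
            a = max(range(n), key=lambda i: values[i])   # exploit best estimate
        reward = 1.0 if rng.random() < success_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]    # incremental mean
    return values, counts
```

A bandit ignores the channel-state dynamics (the switching patterns the thesis studies), which is precisely the gap a state-aware actor-critic policy is meant to close.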
Following the analysis of the proposed learning-based dynamic multichannel access policy, we consider adversarial attacks on it. In particular, we propose two adversarial policies, one based on feed-forward neural networks and the other based on deep reinforcement learning policies. Both attack strategies aim at minimizing the accuracy of a deep reinforcement learning based dynamic channel access agent, and we demonstrate and compare their performances.
Next, anomaly detection as an active hypothesis test problem is studied. Specifically, we study deep reinforcement learning based active sequential testing for anomaly detection. We assume that there is an unknown number of abnormal processes at a time and the agent can only check with one sensor in each sampling step. To maximize the confidence level of the decision and minimize the stopping time concurrently, we propose a deep actor-critic reinforcement learning framework that can dynamically select the sensor based on the posterior probabilities. Separately, we also regard the detection of threshold crossing as an anomaly detection problem, and analyze it via hierarchical generative adversarial networks (GANs).
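The probe-one-sensor-per-step loop above can be sketched with a greedy rule in place of the learned actor-critic selection: probe the sensor whose posterior is most uncertain, update it with Bayes' rule, and stop once every posterior is beyond a confidence level. The observation likelihoods, confidence level, and step cap are illustrative assumptions:

```python
def active_anomaly_detection(n, observe, p1_abn=0.8, p1_norm=0.2,
                             conf=0.99, max_steps=1000):
    """Greedy posterior-based sensor selection. `observe(i)` returns one
    binary sample from sensor i; y=1 has probability p1_abn if process i
    is abnormal and p1_norm otherwise."""
    post = [0.5] * n                # prior P(process i is abnormal)
    for step in range(1, max_steps + 1):
        undecided = [i for i in range(n) if (1 - conf) < post[i] < conf]
        if not undecided:
            # every posterior has crossed the confidence level: stop
            return {i for i in range(n) if post[i] >= conf}, step - 1
        # probe the most uncertain remaining sensor
        i = min(undecided, key=lambda j: abs(post[j] - 0.5))
        y = observe(i)
        like_abn = p1_abn if y == 1 else 1 - p1_abn
        like_norm = p1_norm if y == 1 else 1 - p1_norm
        num = like_abn * post[i]
        post[i] = num / (num + like_norm * (1 - post[i]))   # Bayes update
    return {i for i in range(n) if post[i] >= conf}, max_steps
```

The stopping rule trades off the two objectives in the abstract: a tighter confidence level raises the decision's confidence but lengthens the stopping time.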
In the final part of the thesis, to address state estimation and detection problems in the presence of noisy sensor observations and probing costs, we develop a soft actor-critic deep reinforcement learning framework. Moreover, considering Byzantine attacks, we design a GAN-based framework to identify the Byzantine sensors. To evaluate the proposed framework, we measure the performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection.