5 research outputs found

    Distributed Attacks over Federated Reinforcement Learning-enabled Cell Sleep Control

    Federated learning (FL) is particularly useful in wireless networks due to its distributed implementation and privacy-preserving features. However, as a distributed learning system, FL can be vulnerable to malicious attacks from both internal and external sources. Our work investigates attack models in FL-enabled wireless networks. Specifically, we consider a cell sleep control scenario and apply federated reinforcement learning to improve energy efficiency. We design three attacks, namely free-rider attacks, Byzantine data poisoning attacks, and backdoor attacks. The simulation results show that the designed attacks can degrade the network performance and lead to lower energy efficiency. Moreover, we also explore possible ways to mitigate these attacks. We design a defense model called refined-Krum that defends against attacks by enabling secure aggregation on the global server. The proposed refined-Krum scheme outperforms the existing Krum scheme and can effectively protect wireless networks from malicious attacks, improving the system's energy-efficiency performance.
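    The abstract does not spell out the refined-Krum rule itself, so as a rough sketch the snippet below implements the standard Krum selection rule it builds on, assuming each client update is a flattened NumPy vector and an assumed upper bound on the number of Byzantine clients; refined-Krum presumably modifies this scoring, but its details are not given here.

```python
import numpy as np

def krum_select(updates, num_byzantine):
    """Standard Krum selection (Blanchard et al., 2017): pick the single
    client update whose summed squared distance to its n - f - 2 nearest
    neighbours is smallest, discarding outlying (poisoned) updates."""
    n = len(updates)
    # pairwise squared Euclidean distances between flattened client updates
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        neighbours = np.sort(np.delete(dists[i], i))      # drop self-distance
        scores.append(neighbours[: n - num_byzantine - 2].sum())
    return int(np.argmin(scores))

# Toy usage: four honest updates near zero and one poisoned update far away.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=8) for _ in range(4)]
poisoned = [rng.normal(5.0, 0.1, size=8)]
print("selected client:", krum_select(honest + poisoned, num_byzantine=1))
```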

    Energy optimization in ultra-dense radio access networks via traffic-aware cell switching

    We propose a reinforcement learning-based cell switching algorithm to minimize the energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. In this regard, the proposed method can intelligently learn which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we used the open call detail record (CDR) data set from the city of Milan, Italy, and tested our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed method can provide energy savings, and moreover how this happens without reducing the QoS of users. Most importantly, we show that our solution achieves performance very close to that of exhaustive search, with the advantage of being scalable and less complex.
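    The abstract does not specify the exact state, action, or reward design, so the following is only a minimal tabular Q-learning sketch of traffic-aware switching, with assumed discretised cell loads, on/off action patterns over the small cells, and an illustrative energy-plus-QoS-penalty reward:

```python
import itertools
import numpy as np

N_SC = 3                 # number of small cells (illustrative)
LOAD_LEVELS = 4          # discretisation of per-cell load
ACTIONS = list(itertools.product([0, 1], repeat=N_SC))   # 0 = off, 1 = on
P_SC_ACTIVE, P_SC_SLEEP, MACRO_CAPACITY = 1.0, 0.1, 2.0  # assumed constants

def reward(loads_sc, load_macro, action):
    # energy cost of the chosen on/off pattern, plus a QoS penalty when the
    # traffic of switched-off SCs cannot be absorbed by the macro cell
    energy = sum(P_SC_ACTIVE if a else P_SC_SLEEP for a in action)
    offloaded = sum(l for l, a in zip(loads_sc, action) if a == 0)
    qos_penalty = 10.0 if load_macro + offloaded > MACRO_CAPACITY else 0.0
    return -energy - qos_penalty

def state_index(loads_sc, load_macro):
    # encode the discretised macro + SC loads as a single table index
    levels = [min(int(l * LOAD_LEVELS), LOAD_LEVELS - 1) for l in loads_sc + [load_macro]]
    idx = 0
    for lv in levels:
        idx = idx * LOAD_LEVELS + lv
    return idx

Q = np.zeros((LOAD_LEVELS ** (N_SC + 1), len(ACTIONS)))
alpha, eps = 0.1, 0.1
rng = np.random.default_rng(0)

for step in range(2000):
    loads_sc = list(rng.random(N_SC))        # stand-in for CDR traffic traces
    load_macro = float(rng.random()) * 0.5
    s = state_index(loads_sc, load_macro)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    # one-step (contextual-bandit style) update; a full implementation would
    # bootstrap from the next traffic state instead
    Q[s, a] += alpha * (reward(loads_sc, load_macro, ACTIONS[a]) - Q[s, a])

print("greedy on/off pattern for the lowest-load state:", ACTIONS[int(np.argmax(Q[0]))])
```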

    Graph neural network-based cell switching for energy optimization in ultra-dense heterogeneous networks

    The development of ultra-dense heterogeneous networks (HetNets) will cause a significant rise in energy consumption with large-scale base station (BS) deployments, requiring cellular networks to be more energy efficient to reduce operational expense and promote sustainability. Cell switching is an effective method to achieve these energy efficiency goals, but traditional heuristic cell switching algorithms are computationally demanding and have limited generalization abilities for ultra-dense HetNet applications, motivating the use of machine learning techniques for adaptive cell switching. Graph neural networks (GNNs) are powerful deep learning models with strong generalization abilities but have received little attention for cell switching. This paper proposes a GNN-based cell switching solution (GBCSS) with smaller computational complexity than existing heuristic algorithms. The presented performance evaluation uses the Milan telecommunication dataset based on real-world call detail records, comparing GBCSS with a traditional exhaustive search (ES) algorithm, a state-of-the-art learning-based algorithm, and a baseline without cell switching. Results indicate that GBCSS achieves a 10.41% energy efficiency gain over the baseline and reaches 75.76% of the optimal performance obtained with the ES algorithm. The results also demonstrate GBCSS's significant scalability and generalization to differing load conditions and numbers of BSs, suggesting the approach is well suited to ultra-dense HetNet deployments.
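    The GBCSS architecture itself is not detailed in the abstract; as a rough illustration of graph-based switch-off scoring, the sketch below runs one generic GCN-style message-passing step over an assumed macro/small-cell graph with illustrative load features and randomly initialised (untrained) weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed topology: node 0 is the macro cell, nodes 1-3 are small cells
# connected to it; edges let SC load information reach the macro node.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
A_hat = A + np.eye(4)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # mean-aggregation normalisation

# Node features: [normalised traffic load, is_macro flag] (illustrative)
X = np.array([[0.40, 1.0],
              [0.70, 0.0],
              [0.10, 0.0],
              [0.05, 0.0]])

W1 = rng.normal(size=(2, 8))   # untrained weights; in practice these would be
W2 = rng.normal(size=(8, 1))   # fit against e.g. exhaustive-search labels

H = np.maximum(D_inv @ A_hat @ X @ W1, 0.0)                # GCN layer + ReLU
scores = 1.0 / (1.0 + np.exp(-(D_inv @ A_hat @ H @ W2)))   # switch-off score
print("per-BS switch-off scores:", scores.ravel().round(3))
```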

    Reinforcement Learning for Delay-Constrained Energy-Aware Small Cells with Multi-Sleeping Control

    In 5G networks, specific requirements are defined on the periodicity of Synchronization Signaling (SS) bursts, which imposes a constraint on the maximum period for which a Base Station (BS) can be deactivated. On the other hand, BS densification is expected in the 5G architecture; this will cause a drastic increase in network energy consumption along with more complex interference management. In this paper, we study the Energy-Delay Tradeoff (EDT) problem in a Heterogeneous Network (HetNet) where small cells can switch to different sleep mode levels to save energy while maintaining good Quality of Service (QoS). We propose a distributed Q-learning controller for small cells that adapts cell activity while taking into account the co-channel interference between cells. Our numerical results show that the multi-level sleep scheme outperforms the binary sleep scheme, with energy savings of up to 80% when users are delay tolerant, while respecting the periodicity of the SS bursts in 5G.
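    As a rough sketch of multi-level sleep control under the SS-burst constraint, the snippet below runs a per-small-cell tabular Q-learning loop over four assumed sleep levels; the power values, wake-up delays, penalty weights, and the 20 ms SS period are illustrative assumptions, not the paper's numbers:

```python
import numpy as np

SLEEP_LEVELS = {         # level: (power in W, wake-up delay in ms) -- assumed
    0: (10.0, 0.0),      # active
    1: (5.0, 0.5),       # light sleep
    2: (2.0, 5.0),       # deep sleep
    3: (0.5, 15.0),      # near-off
}
SS_PERIOD_MS = 20.0      # maximum deactivation period allowed by SS bursts

def reward(level, traffic_expected):
    power, wake_delay = SLEEP_LEVELS[level]
    delay_penalty = 2.0 * wake_delay if traffic_expected else 0.0
    ss_penalty = 100.0 if wake_delay > SS_PERIOD_MS else 0.0  # hard 5G constraint
    return -power - delay_penalty - ss_penalty

# State 0: no traffic expected, state 1: traffic expected at this small cell.
Q = np.zeros((2, len(SLEEP_LEVELS)))
alpha, eps = 0.2, 0.1
rng = np.random.default_rng(0)

for step in range(5000):
    state = int(rng.random() < 0.3)          # 30% chance of incoming traffic
    action = rng.integers(len(SLEEP_LEVELS)) if rng.random() < eps else int(np.argmax(Q[state]))
    Q[state, action] += alpha * (reward(action, bool(state)) - Q[state, action])

print("preferred level when traffic is expected:", int(np.argmax(Q[1])))
print("preferred level when idle:", int(np.argmax(Q[0])))
```

    With these illustrative numbers, the learned policy keeps the cell in a light sleep level when traffic is expected and drops to the deepest level when idle, mirroring the intuition behind a multi-level rather than binary sleep scheme.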

    Adaptive vehicular networking with Deep Learning

    Vehicular networks have been identified as a key enabler for future smart traffic applications aiming to improve on-road safety, increase road traffic efficiency, or provide advanced infotainment services for better on-board comfort. However, the requirements of smart traffic applications also place demands on vehicular network quality in terms of high data rates, low latency, and reliability, while simultaneously meeting the challenges of sustainability, green network development goals, and energy efficiency. The advances in vehicular communication technologies, combined with the peculiar characteristics of vehicular networks, have brought challenges to traditional networking solutions designed around fixed parameters and complex mathematical optimisation. These challenges necessitate embedding greater intelligence in vehicular networks to realise adaptive network optimisation. One promising solution is the use of Machine Learning (ML) algorithms to extract hidden patterns from collected data, thus formulating adaptive network optimisation solutions with strong generalisation capabilities.

    In this thesis, an overview of the underlying technologies, applications, and characteristics of vehicular networks is presented, followed by the motivation for using ML and a general introduction to ML background. A literature review of ML applications in vehicular networks is also presented, drawing on the state of the art in ML technology adoption. Three key challenging research topics are identified, centred around network optimisation and ML deployment aspects.

    The first research question and contribution focus on mobile Handover (HO) optimisation as vehicles pass between base stations; a Deep Reinforcement Learning (DRL) handover algorithm is proposed and evaluated against the currently deployed method. Simulation results suggest that the proposed algorithm can guarantee optimal HO decisions in a realistic simulation setup. The second contribution explores distributed radio resource management optimisation. Two versions of a Federated Learning (FL) enhanced DRL algorithm are proposed and evaluated against other state-of-the-art ML solutions. Simulation results suggest that the proposed solution outperforms the other benchmarks in overall resource utilisation efficiency, especially in generalisation scenarios. The third contribution looks at energy efficiency optimisation on the network side against a backdrop of sustainability and green networking. A cell switching algorithm is developed based on a Graph Neural Network (GNN) model; the proposed scheme achieves almost 95% of the normalised energy efficiency of the “ideal” optimal benchmark and can be applied to many more general network configurations than the state-of-the-art ML benchmark.
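    As an illustration of the FL-enhanced DRL contribution, the sketch below shows a plain federated-averaging (FedAvg) step over per-agent policy weights; the dictionary layout, weight shapes, and uniform averaging are assumptions rather than the thesis's exact algorithm:

```python
import numpy as np

def fed_avg(client_weights):
    """Average a list of per-client weight dictionaries layer by layer,
    producing the new global model broadcast back to all agents."""
    keys = client_weights[0].keys()
    return {k: np.mean([w[k] for w in client_weights], axis=0) for k in keys}

# Toy usage: five vehicles each hold a small (assumed) two-layer policy.
rng = np.random.default_rng(0)
clients = [
    {"hidden": rng.normal(size=(4, 16)), "out": rng.normal(size=(16, 2))}
    for _ in range(5)
]
global_model = fed_avg(clients)
print({name: w.shape for name, w in global_model.items()})
```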