
    Multi-Flow Transmission in Wireless Interference Networks: A Convergent Graph Learning Approach

    We consider the problem of multi-flow transmission in wireless networks, where data signals from different flows can interfere with each other due to mutual interference between links along their routes, resulting in reduced link capacities. The objective is to develop a multi-flow transmission strategy that routes flows across the wireless interference network to maximize the network utility. However, obtaining an optimal solution is computationally expensive due to the large state and action spaces involved. To tackle this challenge, we introduce a novel algorithm called Dual-stage Interference-Aware Multi-flow Optimization of Network Data-signals (DIAMOND). The design of DIAMOND allows for a hybrid centralized-distributed implementation, which is characteristic of 5G and beyond technologies with centralized unit deployments. A centralized stage computes the multi-flow transmission strategy using a novel graph neural network (GNN) reinforcement learning (RL) routing agent. Then, a distributed stage improves the performance based on a novel design of distributed learning updates. We provide a theoretical analysis of DIAMOND and prove that it converges to the optimal multi-flow transmission strategy as time increases. We also present extensive simulation results over various network topologies (random deployment, NSFNET, GEANT2), demonstrating the superior performance of DIAMOND compared to existing methods.
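The routing decision DIAMOND learns can be illustrated with a minimal non-learning baseline: treat interference as a pre-computed reduction in effective link capacity and route each flow along the path that maximizes its bottleneck capacity. The widest-path sketch below (the topology and capacities are illustrative assumptions, not the paper's GNN-RL agent) shows the kind of per-flow choice the centralized stage makes:

```python
import heapq

def widest_path(adj, src, dst):
    """Find the path maximizing the bottleneck (minimum) link capacity.

    adj: dict node -> list of (neighbor, capacity) pairs, where capacity is
    the effective link capacity after interference reduction.
    Returns (bottleneck_capacity, path as a list of nodes).
    """
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]          # max-heap on bottleneck via negation
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if u == dst:
            break
        if b < best.get(u, 0):
            continue                       # stale heap entry
        for v, cap in adj.get(u, []):
            nb = min(b, cap)               # bottleneck along the extended path
            if nb > best.get(v, 0):
                best[v] = nb
                prev[v] = u
                heapq.heappush(heap, (-nb, v))
    if dst not in best:
        return 0.0, []
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return best[dst], path[::-1]
```

For example, with links A-B (capacity 10), A-C (4), B-D (3), C-D (6), the route A-C-D wins with bottleneck 4, even though A-B alone is the fastest single link.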

    QoS enhancement with deep learning-based interference prediction in mobile IoT

    © 2019 Elsevier B.V. With the acceleration in mobile broadband, wireless infrastructure plays a significant role in the Internet-of-Things (IoT) by ensuring ubiquitous connectivity in mobile environments, making mobile IoT (mIoT) a center of attraction. Intelligent systems are usually realized through mIoT, which in turn demands increased data traffic. To meet the ever-increasing demands of mobile users, the integration of small cells is a promising solution: for mIoT, small cells provide enhanced Quality-of-Service (QoS) with improved data rates. In this paper, an mIoT small-cell network in a vehicular environment, focusing on a city bus transit system, is presented. However, integrating small cells into vehicles for mIoT makes resource allocation challenging because of the dynamic interference between small cells, which may negatively impact cellular coverage and capacity. This article proposes a Threshold Percentage Dependent Interference Graph (TPDIG) used in a deep-learning-based resource allocation algorithm for city buses mounted with moving small cells (mSCs). Long Short-Term Memory (LSTM) neural networks are used to predict city bus locations for interference determination between mSCs. A comparative analysis of resource allocation using TPDIG, the Time Interval Dependent Interference Graph (TIDIG), and the Global Positioning System Dependent Interference Graph (GPSDIG) is presented in terms of Resource Block (RB) usage and the average achievable data rate of the mIoT-mSC network.
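As a rough illustration of how a threshold-based interference graph like TPDIG might be built from predicted bus locations, the sketch below links two moving small cells whenever they are within interference range for more than a threshold fraction of the predicted time steps. The distance rule, the threshold semantics, and the data layout are assumptions for illustration only; the paper's LSTM prediction step is not shown:

```python
from math import hypot

def tpdig_edges(tracks, interference_range, threshold):
    """Build interference-graph edges between moving small cells (mSCs).

    tracks: dict msc_id -> list of (x, y) predicted positions, one per time
    step (all tracks of equal length).  An edge (a, b) is added when the two
    mSCs are within interference_range for more than `threshold` fraction of
    the predicted time steps.
    """
    ids = sorted(tracks)
    edges = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            close = sum(
                hypot(xa - xb, ya - yb) <= interference_range
                for (xa, ya), (xb, yb) in zip(tracks[a], tracks[b])
            )
            if close / len(tracks[a]) > threshold:
                edges.add((a, b))
    return edges
```

Resource blocks would then be assigned so that adjacent mSCs in this graph never share a block, which is where the deep-learning allocation step comes in.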

    Unsupervised Graph-based Learning Method for Sub-band Allocation in 6G Subnetworks

    In this paper, we present an unsupervised approach for frequency sub-band allocation in wireless networks using graph-based learning. We consider a dense deployment of subnetworks in a factory environment with a limited number of sub-bands, which must be optimally allocated to coordinate inter-subnetwork interference. We model the subnetwork deployment as a conflict graph and propose an unsupervised learning approach, inspired by the graph colouring heuristic and the Potts model, to optimize the sub-band allocation using graph neural networks. The numerical evaluation shows that the proposed method achieves performance close to the centralized greedy colouring sub-band allocation heuristic, with lower computational time complexity. In addition, it incurs reduced signalling overhead compared to iterative optimization heuristics that require all the mutual interfering channel information. We further demonstrate that the method is robust to different network settings.
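A minimal non-learning analogue of the Potts-model objective is coordinate descent on the conflict graph: each subnetwork repeatedly switches to the sub-band least used by its conflicting neighbours, which monotonically reduces the number of same-band conflicts. The sketch below is an illustrative heuristic under these assumptions, not the paper's GNN:

```python
def potts_subband_allocation(conflict, n_subbands, n_iters=10):
    """Greedy Potts-energy descent on a conflict graph.

    conflict: dict node -> set of conflicting nodes (symmetric).
    Returns dict node -> sub-band index in [0, n_subbands).
    """
    alloc = {v: 0 for v in conflict}           # start with everyone on band 0
    for _ in range(n_iters):
        changed = False
        for v in conflict:
            counts = [0] * n_subbands
            for u in conflict[v]:
                counts[alloc[u]] += 1          # neighbours on each band
            best = min(range(n_subbands), key=lambda b: counts[b])
            if counts[best] < counts[alloc[v]]:
                alloc[v] = best                # strictly fewer local conflicts
                changed = True
        if not changed:
            break                              # local optimum reached
    return alloc

def conflicts(alloc, conflict):
    """Number of conflict-graph edges whose endpoints share a sub-band."""
    return sum(alloc[u] == alloc[v]
               for v in conflict for u in conflict[v] if u < v)
```

On a triangle of three mutually conflicting subnetworks with three sub-bands, the descent reaches a conflict-free allocation; the GNN approach in the paper amortizes this optimization into a single forward pass.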

    Interference-Limited Ultra-Reliable and Low-Latency Communications: Graph Neural Networks or Stochastic Geometry?

    In this paper, we aim to improve the Quality-of-Service (QoS) of Ultra-Reliable and Low-Latency Communications (URLLC) in interference-limited wireless networks. To obtain time diversity within the channel coherence time, we first put forward a random repetition scheme that randomizes the interference power. Then, we optimize the number of reserved slots and the number of repetitions for each packet to minimize the QoS violation probability, defined as the percentage of users that cannot achieve URLLC. We build a cascaded Random Edge Graph Neural Network (REGNN) to represent the repetition scheme and develop a model-free unsupervised learning method to train it. We analyze the QoS violation probability using stochastic geometry in a symmetric scenario and apply a model-based Exhaustive Search (ES) method to find the optimal solution. Simulation results show that in the symmetric scenario, the QoS violation probabilities achieved by the model-free learning method and the model-based ES method are nearly the same. In more general scenarios, the cascaded REGNN generalizes very well in wireless networks with different scales, network topologies, cell densities, and frequency reuse factors. It outperforms the model-based ES method in the presence of model mismatch.
    Comment: Submitted to an IEEE journal for possible publication.
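The model-based ES baseline can be sketched as a brute-force search over the two decision variables, reserved slots and repetitions. The closed-form violation model below is a toy assumption (independent per-repetition failures plus a per-slot reservation penalty), standing in for the paper's stochastic-geometry analysis:

```python
from itertools import product

def exhaustive_search(n_slots_max, n_reps_max, violation_prob):
    """Model-based exhaustive search over (reserved slots, repetitions).

    violation_prob(r, k): QoS violation probability for r reserved slots and
    k repetitions (k <= r).  Returns (best_prob, best_r, best_k).
    """
    best = None
    for r, k in product(range(1, n_slots_max + 1), range(1, n_reps_max + 1)):
        if k > r:
            continue                     # cannot repeat more times than slots
        p = violation_prob(r, k)
        if best is None or p < best[0]:
            best = (p, r, k)
    return best

def toy_violation(r, k, eps=0.1, c=0.01):
    """Toy model: each repetition fails independently with probability eps,
    and reserving slots costs capacity, modelled as a penalty c per slot."""
    return eps ** k + c * r
```

Under these toy numbers, two repetitions in two reserved slots minimize the objective; with more repetitions, the reservation cost outweighs the reliability gain. This tractable search is exactly what a model mismatch breaks, which is where the learned REGNN has the advantage.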

    Jamming-Resistant Learning in Wireless Networks

    We consider capacity maximization in wireless networks under adversarial interference conditions. There are n links, each consisting of a sender and a receiver, which repeatedly try to perform a successful transmission. In each time step, the success of attempted transmissions depends on interference conditions, which are captured by an interference model (e.g. the SINR model). Additionally, an adversarial jammer can render a (1-delta)-fraction of time steps unsuccessful. For this scenario, we analyze a framework for distributed learning algorithms to maximize the number of successful transmissions. Our main result is an algorithm based on no-regret learning that converges to an O(1/delta)-approximation. It even provides a constant-factor approximation when the jammer blocks exactly a (1-delta)-fraction of time steps. In addition, we consider a stochastic jammer, for which we obtain a constant-factor approximation after a polynomial number of time steps. We also consider more general settings, in which links arrive and depart dynamically, and where each sender tries to reach multiple receivers. Our algorithms perform favorably in simulations.
    Comment: 22 pages, 2 figures, typos removed.
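The no-regret building block can be sketched with a full-information Hedge learner: weights multiply up for actions that yielded successful transmissions, so play concentrates on the best fixed action regardless of how the jammer schedules blocked steps. The action names and learning rate here are illustrative assumptions, not the paper's specific algorithm:

```python
def hedge(rewards, eta=0.3):
    """No-regret Hedge learner over a fixed action set (full information).

    rewards: list of dicts action -> reward in [0, 1], one per time step
             (in the jamming setting, 1 marks a successful transmission).
    Returns the learner's expected cumulative reward.
    """
    actions = list(rewards[0])
    w = dict.fromkeys(actions, 1.0)
    total = 0.0
    for r in rewards:
        z = sum(w.values())
        total += sum(w[a] / z * r[a] for a in actions)  # expected reward now
        for a in actions:
            w[a] *= (1.0 + eta) ** r[a]    # boost actions that paid off
    return total
```

If one action ('tx') succeeds in every step and the other ('idle') never does, the learner starts at a 50/50 mix but quickly shifts nearly all probability onto 'tx', collecting close to the best fixed action's reward; the gap is the regret that the O(1/delta)-approximation analysis bounds.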