    Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes

    Due to mutual interference between users, power allocation problems in wireless networks are often non-convex and computationally challenging. Graph neural networks (GNNs) have recently emerged as a promising approach to these problems, one that exploits the underlying topology of wireless networks. In this paper, we propose a novel graph representation method for wireless networks that include full-duplex (FD) nodes. We then design a corresponding FD Graph Neural Network (F-GNN) that allocates transmit powers to maximise the network throughput. Our results show that the F-GNN achieves state-of-the-art performance with significantly less computation time, and offers an excellent trade-off between performance and complexity compared to classical approaches. We further refine this trade-off by introducing a distance-based threshold for the inclusion or exclusion of edges in the network, and show that an appropriately chosen threshold reduces the required training time by roughly 20% with a relatively minor loss in performance.
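    As a rough illustration of the distance-based edge threshold described above, here is a minimal sketch (not the paper's implementation) that builds a directed interference graph and drops edges between nodes farther apart than a chosen cutoff. The node positions, the 60 m threshold and the function name are illustrative assumptions.

```python
import numpy as np

def build_edges(positions, threshold):
    """Keep a directed edge (i, j) only if nodes i and j are closer
    than `threshold` metres; distant, weakly interfering links are
    excluded from the graph."""
    n = len(positions)
    edges = []
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < threshold:
                edges.append((i, j))
    return edges

# Toy example: 5 transceivers dropped uniformly in a 100 m x 100 m area.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 100.0, size=(5, 2))
print(build_edges(positions, threshold=60.0))
```

    Pruning distant edges shrinks each node's neighbourhood, which is what reduces the message-passing and training cost of the GNN.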

    Parallel growing and training of neural networks using output parallelism

    To find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural approach is to divide the original problem into a set of sub-problems. In this paper, we propose a simple neural-network task decomposition method based on output parallelism. With this method, a problem can be divided flexibly into several sub-problems, each composed of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing its fraction of the original problem's output vector, so the hidden structures serving different output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Combined with a constructive learning algorithm, the method requires neither excessive computation nor any prior knowledge about how to decompose the problem. The feasibility of output parallelism is analyzed and proved. Benchmark results show that this method can reduce computation time, increase learning speed and improve generalization accuracy for both classification and regression problems.
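    The decomposition itself is straightforward to sketch. The snippet below (an illustrative toy, not the paper's code) splits the output matrix column-wise into sub-problems and trains one small network per sub-problem; scikit-learn's MLPRegressor stands in for the paper's constructive algorithm, which instead grows hidden units during training.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def split_outputs(Y, n_modules):
    """Split the output matrix column-wise into one sub-problem per
    module: each module sees the whole input but only a fraction of
    the output vector."""
    return np.array_split(Y, n_modules, axis=1)

# Toy data; shapes and module count are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
Y = rng.normal(size=(200, 6))

modules = []
for Y_part in split_outputs(Y, n_modules=3):
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    m.fit(X, Y_part)  # each fit is independent, hence parallelisable
    modules.append(m)

# The full prediction is just the concatenation of the module outputs.
Y_hat = np.hstack([m.predict(X).reshape(len(X), -1) for m in modules])
print(Y_hat.shape)  # (200, 6)
```

    Because each module's loss depends only on its own output fraction, the fits share no state and can be dispatched to separate processes or machines.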

    Random neural network based cognitive-eNodeB deployment in LTE uplink

    Wireless Interference Identification with Convolutional Neural Networks

    The steadily growing use of license-free frequency bands requires reliable coexistence management for deterministic medium utilization. For interference mitigation, proper wireless interference identification (WII) is essential. In this work we propose the first WII approach based on deep convolutional neural networks (CNNs). The CNN learns its features automatically through self-optimization during an extensive data-driven, GPU-based training process. We propose an example CNN that operates on sensing snapshots with a limited duration of 12.8 µs and an acquisition bandwidth of 10 MHz. The CNN distinguishes between 15 classes, representing packet transmissions of IEEE 802.11 b/g, IEEE 802.15.4 and IEEE 802.15.1 with overlapping frequency channels within the 2.4 GHz ISM band. We show that the CNN outperforms state-of-the-art WII approaches and achieves a classification accuracy greater than 95% for signal-to-noise ratios of at least -5 dB.
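    For concreteness, a 12.8 µs snapshot at a 10 MHz acquisition bandwidth corresponds to 128 complex (IQ) samples, which can be fed to a 1-D CNN as two real channels. The sketch below shows such a 15-class snapshot classifier in PyTorch; the layer sizes and kernel widths are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SnapshotCNN(nn.Module):
    """Classify a 128-sample IQ snapshot (2 real channels: I and Q)
    into one of 15 transmission classes. Hypothetical layer sizes."""
    def __init__(self, n_classes: int = 15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),                                  # 128 -> 64
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                                  # 64 -> 32
        )
        self.classifier = nn.Linear(64 * 32, n_classes)

    def forward(self, x):                  # x: (batch, 2, 128)
        z = self.features(x)
        return self.classifier(z.flatten(1))  # logits over the classes

model = SnapshotCNN()
logits = model(torch.randn(4, 2, 128))     # 4 dummy snapshots
print(logits.shape)                        # torch.Size([4, 15])
```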

    Digital Communication Channel Equaliser using Single Generalised Neuron

    Equalisation is necessary in a digital communication system to mitigate the effects of inter-symbol interference and other nonlinear distortions. A new reduced-complexity approach to digital communication channel equalisation is proposed, based on a single generalised neuron (GN). Since it uses only a single GN, there is no problem of selecting an initial neural-network architecture that gives optimum performance, and the computational requirements are lower, reducing training and computation time. Simulation results show that the proposed equaliser's bit error rate (BER) performance approaches the optimal Bayesian solution.
    Defence Science Journal, 2009, 59(5), pp. 524-529. DOI: http://dx.doi.org/10.14429/dsj.59.155
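    A common formulation of the generalised neuron combines a summation ("sigma") part with a sigmoid activation and a product ("pi") part with a Gaussian activation, blended by a single weight. The sketch below follows that general formulation; the parameter values, window length and blend weight are illustrative assumptions, and the paper's exact GN variant may differ.

```python
import numpy as np

def generalised_neuron(x, w_sum, w_prod, b_sum, b_prod, w=0.6):
    """One GN forward pass under the assumed sigma-pi formulation."""
    s = 1.0 / (1.0 + np.exp(-(np.dot(w_sum, x) + b_sum)))  # sigma part: sigmoid of weighted sum
    p = np.exp(-(np.prod(w_prod * x) + b_prod) ** 2)       # pi part: Gaussian of weighted product
    return w * s + (1.0 - w) * p                           # blended GN output

# Equaliser use (toy): feed a sliding window of received samples and
# threshold the GN output to recover the transmitted bit.
rng = np.random.default_rng(0)
window = rng.normal(size=4)  # 4 received samples; window length is assumed
y = generalised_neuron(window,
                       w_sum=rng.normal(size=4), w_prod=rng.normal(size=4),
                       b_sum=0.1, b_prod=0.1)
bit = 1 if y > 0.5 else 0
print(y, bit)
```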