
    Propagate & Distill: Towards Effective Graph Learners Using Propagation-Embracing MLPs

    Recent studies have attempted to use multilayer perceptrons (MLPs) for semi-supervised node classification on graphs, training a student MLP by knowledge distillation from a teacher graph neural network (GNN). While previous work has focused mostly on matching the output probability distributions of the teacher and student models during distillation, how to inject structural information in an explicit and interpretable manner has not been systematically studied. Inspired by GNNs that separate feature transformation $T$ and propagation $\Pi$, we reframe the distillation process as making the student MLP learn both $T$ and $\Pi$. Although this can be achieved by applying the inverse propagation $\Pi^{-1}$ before distillation from the teacher, it still incurs a high computational cost from large matrix multiplications during training. To solve this problem, we propose Propagate & Distill (P&D), which propagates the output of the teacher before distillation and can be interpreted as an approximate process of the inverse propagation. We demonstrate that P&D readily improves the performance of the student MLP.
    Comment: 17 pages, 2 figures, 8 tables; 2nd Learning on Graphs Conference (LoG 2023) (Please cite our conference version.). arXiv admin note: substantial text overlap with arXiv:2311.1175
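    As a rough illustration of the propagate-then-distill idea, the sketch below propagates the teacher's soft labels over a row-normalized adjacency matrix before computing a standard distillation loss. This is a minimal reading of the abstract, not the authors' released code; the propagation rule and the hyperparameters num_hops and alpha are illustrative assumptions.

        import torch.nn.functional as F

        def propagate_teacher_outputs(teacher_probs, adj_norm, num_hops=2, alpha=0.5):
            # Mix each node's teacher distribution with its neighbors',
            # hop by hop; the abstract interprets this forward propagation
            # as an approximation of applying the inverse propagation.
            # adj_norm is assumed row-normalized, so each row of the
            # result remains a probability distribution.
            z = teacher_probs
            for _ in range(num_hops):
                z = alpha * teacher_probs + (1 - alpha) * (adj_norm @ z)
            return z

        def distillation_loss(student_logits, propagated_teacher_probs):
            # Standard KD objective: match the student MLP's output
            # distribution to the propagated teacher distribution.
            log_p = F.log_softmax(student_logits, dim=-1)
            return F.kl_div(log_p, propagated_teacher_probs, reduction="batchmean")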

    Unveiling the Unseen Potential of Graph Learning through MLPs: Effective Graph Learners Using Propagation-Embracing MLPs

    Recent studies have attempted to use multilayer perceptrons (MLPs) for semi-supervised node classification on graphs, training a student MLP by knowledge distillation (KD) from a teacher graph neural network (GNN). While previous work has focused mostly on matching the output probability distributions of the teacher and student models during KD, how to inject structural information in an explicit and interpretable manner has not been systematically studied. Inspired by GNNs that separate feature transformation $T$ and propagation $\Pi$, we reframe the KD process as enabling the student MLP to explicitly learn both $T$ and $\Pi$. Although this can be achieved by applying the inverse propagation $\Pi^{-1}$ before distillation from the teacher GNN, it still incurs a high computational cost from large matrix multiplications during training. To solve this problem, we propose Propagate & Distill (P&D), which propagates the output of the teacher GNN before KD and can be interpreted as an approximate process of the inverse propagation $\Pi^{-1}$. Through comprehensive evaluations on real-world benchmark datasets, we demonstrate the effectiveness of P&D by showing a further performance boost for the student MLP.
    Comment: 35 pages, 5 figures, 8 tables
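    To see where the computational cost of the exact inverse comes from, suppose, as one plausible instantiation not stated in the abstract, that $\Pi$ is $K$ rounds of symmetric neighborhood averaging:

        \Pi = \tilde{A}^{K}, \qquad \tilde{A} = D^{-1/2} (A + I) D^{-1/2}

    Recovering pre-propagation targets from the teacher output $\Pi\,T(X)$ would then require $\Pi^{-1} = \tilde{A}^{-K}$ (when $\tilde{A}$ is invertible), i.e., inverting a generally dense $n \times n$ matrix at roughly $O(n^3)$ cost. Applying $\Pi$ forward to the teacher's output instead, as P&D does, costs only $K$ sparse matrix products, about $O(K \cdot |E| \cdot C)$ for $C$ classes.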

    Parallel Opportunistic Routing in Wireless Networks

    We study the benefits of opportunistic routing in a large wireless ad hoc network by examining how the power, delay, and total throughput scale as the number of source-destination pairs increases up to the operating maximum. Our opportunistic routing is novel in the sense that it is massively parallel, i.e., it is performed by many nodes simultaneously to maximize the opportunistic gain while controlling the inter-user interference. The scaling behavior of conventional multi-hop transmission that does not employ opportunistic routing is also examined for comparison. Our results indicate that our opportunistic routing can exhibit a net improvement in the overall power-delay trade-off over conventional routing by providing up to a logarithmic boost in the scaling law. Such a gain is possible because the receivers can tolerate more interference due to the increased received signal power provided by the multi-user diversity gain, which means that more simultaneous transmissions are possible.
    Comment: 18 pages, 7 figures, under review for possible publication in IEEE Transactions on Information Theory
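    The core mechanism can be sketched per forwarding step: exploit whichever neighbor currently has a good channel rather than a fixed next hop. The toy selection rule below is a hedged illustration only; the paper's protocol additionally coordinates many such transmissions in parallel under interference constraints, and the threshold and progress metric here are invented for the example.

        def opportunistic_next_hop(channel_gains, progress_to_dest, gain_threshold=1.0):
            # Toy opportunistic relay selection: among neighbors whose
            # instantaneous channel gain clears a threshold (the
            # "opportunistic" set), forward to the one making the most
            # progress toward the destination. All parameters are
            # illustrative, not the paper's exact rule.
            candidates = [n for n, g in channel_gains.items() if g >= gain_threshold]
            if not candidates:
                return None  # no usable neighbor right now; defer
            return max(candidates, key=lambda n: progress_to_dest[n])

        # "a" is closest to the destination but its channel is too weak;
        # among the usable neighbors, "c" makes the most progress.
        gains = {"a": 0.4, "b": 2.1, "c": 1.3}
        progress = {"a": 10.0, "b": 3.0, "c": 7.5}
        print(opportunistic_next_hop(gains, progress))  # -> c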

    Opportunistic Interference Mitigation Achieves Optimal Degrees-of-Freedom in Wireless Multi-cell Uplink Networks

    We introduce an opportunistic interference mitigation (OIM) protocol, where a user scheduling strategy is utilized in $K$-cell uplink networks with time-invariant channel coefficients and base stations (BSs) having $M$ antennas. Each BS opportunistically selects a set of users who generate the minimum interference to the other BSs. Two OIM protocols are proposed according to the number $S$ of simultaneously transmitting users per cell: opportunistic interference nulling (OIN) and opportunistic interference alignment (OIA). Their performance is then analyzed in terms of degrees-of-freedom (DoFs). As our main result, it is shown that $KM$ DoFs are achievable under the OIN protocol with $M$ selected users per cell, if the total number $N$ of users in a cell scales at least as $\text{SNR}^{(K-1)M}$. Similarly, it turns out that the OIA scheme with $S$ ($< M$) selected users achieves $KS$ DoFs, if $N$ scales faster than $\text{SNR}^{(K-1)S}$. These results indicate that there exists a trade-off between the achievable DoFs and the minimum required $N$. By deriving the corresponding upper bound on the DoFs, it is shown that the OIN scheme is DoF-optimal. Finally, numerical evaluation, a two-step scheduling method, and the extension to multi-carrier scenarios are presented.
    Comment: 18 pages, 3 figures, submitted to IEEE Transactions on Communications
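    A minimal sketch of the user-selection step described above: each user's scheduling metric is the total interference power its uplink signal would leak to the other cells' BSs, and each BS keeps the users with the smallest leakage. The channel model, array shapes, and metric below are assumptions for illustration; the nulling/alignment of the selected users' signals at the serving BS is omitted.

        import numpy as np

        def select_min_interference_users(H, serving_bs, num_selected):
            # H[u, b, :] is an (illustrative) channel vector from user u
            # to BS b. A user's leakage is the total power it would
            # deposit at every BS other than its own.
            num_users, num_bs, _ = H.shape
            leakage = np.zeros(num_users)
            for u in range(num_users):
                for b in range(num_bs):
                    if b != serving_bs:
                        leakage[u] += np.sum(np.abs(H[u, b, :]) ** 2)
            # Keep the num_selected users generating the least interference.
            return np.argsort(leakage)[:num_selected]

        # Example: 20 users, K = 3 cells, M = 4 BS antennas; pick S = 2
        # users for cell 0 under i.i.d. complex Gaussian channels.
        rng = np.random.default_rng(0)
        H = rng.standard_normal((20, 3, 4)) + 1j * rng.standard_normal((20, 3, 4))
        print(select_min_interference_users(H, serving_bs=0, num_selected=2))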