
    On the Performance of Packet Aggregation in IEEE 802.11ac MU-MIMO WLANs

    Multi-user spatial multiplexing combined with packet aggregation can significantly increase the performance of Wireless Local Area Networks (WLANs). In this letter, we present and evaluate a simple technique to perform packet aggregation in IEEE 802.11ac MU-MIMO (Multi-User Multiple-Input Multiple-Output) WLANs. Results show that in non-saturation conditions both the number of active stations (STAs) and the queue size have a significant impact on system performance. If the number of stations is excessively high, the heterogeneity of destinations among the packets in the queue makes it difficult to take full advantage of packet aggregation. This effect can be alleviated by increasing the queue size, which raises the chance of scheduling a large number of packets at each transmission, hence improving the system throughput at the cost of a higher delay.
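
    As a rough illustration of the destination-heterogeneity effect described above, the following sketch (not the authors' simulator; the stream count, A-MPDU limit, and other parameters are illustrative assumptions) counts how many packets a single MU-MIMO transmission could aggregate from a queue whose packets are addressed to randomly chosen STAs:

    # Hypothetical sketch: more STAs spread the queue over more destinations,
    # so fewer packets fit into one aggregated MU-MIMO transmission; a larger
    # queue partially recovers the aggregation gain.
    import random

    def packets_per_txop(n_stas, queue_size, n_streams=4, max_ampdu=16, trials=2000):
        """Average number of packets scheduled per transmission opportunity."""
        total = 0
        for _ in range(trials):
            # Buffered packets, each tagged with a random destination STA.
            queue = [random.randrange(n_stas) for _ in range(queue_size)]
            counts = {}
            for dst in queue:
                counts[dst] = counts.get(dst, 0) + 1
            # Serve the n_streams destinations with the most queued packets,
            # aggregating at most max_ampdu packets per destination.
            served = sorted(counts.values(), reverse=True)[:n_streams]
            total += sum(min(c, max_ampdu) for c in served)
        return total / trials

    for n in (4, 16, 64):
        for q in (32, 128):
            print(f"STAs={n:3d} queue={q:4d} -> {packets_per_txop(n, q):5.1f} pkts/TXOP")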

    Radio Resource Management for New Application Scenarios in 5G: Optimization and Deep Learning

    The fifth-generation (5G) New Radio (NR) systems are expected to support a wide range of emerging applications with diverse Quality-of-Service (QoS) requirements. New application scenarios in 5G NR include enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communications (URLLC). New wireless architectures, such as full-dimension (FD) massive multiple-input multiple-output (MIMO) and mobile edge computing (MEC) systems, and new coding schemes, such as short block-length channel coding, are envisioned as enablers of the QoS requirements of 5G NR applications. Resource management in these new wireless architectures is crucial to guaranteeing the QoS requirements of 5G NR systems. Traditional optimization problems, such as subcarrier allocation and user association, are usually non-convex or Non-deterministic Polynomial-time (NP)-hard. Finding the optimal solution is time-consuming and computationally expensive, especially in a large-scale network. To solve these problems, one approach is to design a low-complexity algorithm with near-optimal performance. In cases where low-complexity algorithms are hard to obtain, deep learning can be used as an accurate approximator that maps environment parameters, such as the channel state information and traffic state, to the optimal solutions. In this thesis, we design low-complexity optimization algorithms and deep learning frameworks for different 5G NR architectures to solve optimization problems subject to QoS requirements. First, we propose a low-complexity algorithm for a joint cooperative beamforming and user association problem for eMBB in 5G NR to maximize the network capacity. Next, we propose a deep learning (DL) framework to optimize user association, resource allocation, and offloading probabilities for delay-tolerant services and URLLC in 5G NR. Finally, we address the impact of time-varying traffic and network conditions on resource management in 5G NR.
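
    A hedged sketch of what such a DL approximator might look like (not the thesis' actual framework; the network shape, feature dimensions, and output heads are assumptions): a feed-forward network maps per-user channel and traffic features to user-association and offloading probabilities.

    # Hypothetical illustration of a learned mapping from network state to
    # resource-management decisions; all dimensions are made up for the example.
    import torch
    import torch.nn as nn

    class ResourcePolicyNet(nn.Module):
        def __init__(self, n_users=8, n_bss=4, feat_dim=16):
            super().__init__()
            self.n_users, self.n_bss = n_users, n_bss
            self.backbone = nn.Sequential(
                nn.Linear(n_users * feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
            )
            # One head picks a serving BS per user, another an offloading probability.
            self.assoc_head = nn.Linear(256, n_users * n_bss)
            self.offload_head = nn.Linear(256, n_users)

        def forward(self, state):                      # state: (batch, n_users, feat_dim)
            h = self.backbone(state.flatten(1))
            assoc = self.assoc_head(h).view(-1, self.n_users, self.n_bss).softmax(-1)
            offload = torch.sigmoid(self.offload_head(h))
            return assoc, offload

    # Example: a batch of 2 network snapshots (CSI + traffic features per user).
    net = ResourcePolicyNet()
    assoc, offload = net(torch.randn(2, 8, 16))
    print(assoc.shape, offload.shape)                  # (2, 8, 4), (2, 8)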

    Traffic-Aware Hierarchical Beam Selection for Cell-Free Massive MIMO

    Beam selection for joint transmission in cell-free massive multiple-input multiple-output systems suffers from extremely high training overhead and computational complexity. Traffic-aware quality-of-service requirements further complicate the beam selection problem. To address this issue, we propose a traffic-aware hierarchical beam selection scheme that operates on two timescales. On the long timescale, the central processing unit collects wide-beam responses from the base stations (BSs) and predicts the power profile in the narrow-beam space with a convolutional neural network, based on which the cascaded multiple-BS beam space is carefully pruned. On the short timescale, we introduce a centralized reinforcement learning (RL) algorithm that maximizes the delay-satisfaction rate of beam selection over multiple consecutive time slots. Moreover, we put forward three scalable distributed algorithms, including hierarchical distributed Lyapunov optimization, fully distributed RL, and centralized-training-with-decentralized-execution RL, to achieve better scalability and a better tradeoff between performance and execution signaling overhead. Numerical results demonstrate that, compared to existing methods, the proposed schemes significantly reduce both model training cost and beam training overhead and more easily meet the user-specific delay requirement.
    Comment: 13 pages, 11 figures, part of this work has been accepted by the IEEE International Conference on Wireless Communications and Signal Processing (WCSP) 202
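
    A minimal sketch of the long-timescale step only, under assumed beam counts and network shape (not the authors' model): a small CNN maps wide-beam responses to a predicted narrow-beam power profile, which is then pruned to the top-K candidate beams for the short-timescale RL to act on.

    # Hypothetical illustration of CNN-based narrow-beam power prediction and
    # top-K pruning; beam counts and layer sizes are assumptions for the example.
    import torch
    import torch.nn as nn

    N_WIDE, N_NARROW, TOP_K = 16, 64, 8

    class BeamPowerCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * N_WIDE, N_NARROW),   # predicted power per narrow beam
            )

        def forward(self, wide_resp):                # wide_resp: (batch, N_WIDE)
            return self.net(wide_resp.unsqueeze(1))

    cnn = BeamPowerCNN()
    wide_resp = torch.rand(4, N_WIDE)                # wide-beam measurements from one BS
    power_profile = cnn(wide_resp)
    # Keep only the K strongest narrow beams; the short-timescale RL selects among them.
    pruned = power_profile.topk(TOP_K, dim=-1).indices
    print(pruned.shape)                              # (4, TOP_K)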