
    Energy-Efficient NOMA Enabled Heterogeneous Cloud Radio Access Networks

    Heterogeneous cloud radio access networks (H-CRANs) are envisioned as a promising architecture for fifth generation (5G) wireless networks. H-CRANs enable users to enjoy diverse services with high energy efficiency, high spectral efficiency, and low-cost operation, achieved by using cloud computing and virtualization techniques. However, H-CRANs face many technical challenges due to massive user connectivity, increasingly severe spectrum scarcity, and energy-constrained devices. These challenges may significantly degrade users' quality of service if not properly tackled. Non-orthogonal multiple access (NOMA) schemes exploit non-orthogonal resources to serve multiple users and are receiving increasing attention for their potential to improve spectral and energy efficiency in 5G networks. In this article, a framework for energy-efficient NOMA H-CRANs is presented. The enabling technologies for NOMA H-CRANs are surveyed, and the challenges and open issues in implementing them are discussed. The article also presents a performance evaluation of the energy efficiency of H-CRANs with NOMA.
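
    As a concrete illustration of the power-domain principle behind NOMA mentioned above, the following sketch computes the downlink rates of two users superposed on the same resource, with successive interference cancellation (SIC) at the stronger user. The channel gains, power split, and noise level are hypothetical values chosen only for illustration and are not taken from the article.

        import numpy as np

        # Hypothetical two-user downlink power-domain NOMA example.
        # User 1: weak channel (cell edge); User 2: strong channel (cell centre).
        g1, g2 = 0.2, 1.0        # normalised channel power gains (assumed)
        P, N0 = 1.0, 0.05        # total transmit power and noise power (assumed)
        a1, a2 = 0.8, 0.2        # power split: more power to the weak user

        # Weak user decodes its own signal, treating the strong user's signal as noise.
        rate_weak = np.log2(1 + a1 * P * g1 / (a2 * P * g1 + N0))

        # Strong user first cancels the weak user's signal (SIC),
        # then decodes its own signal interference-free.
        rate_strong = np.log2(1 + a2 * P * g2 / N0)

        # Orthogonal (OMA) baseline: each user gets half the resource at full power.
        oma_weak = 0.5 * np.log2(1 + P * g1 / N0)
        oma_strong = 0.5 * np.log2(1 + P * g2 / N0)

        print(f"NOMA rates [bit/s/Hz]: weak={rate_weak:.2f}, strong={rate_strong:.2f}")
        print(f"OMA  rates [bit/s/Hz]: weak={oma_weak:.2f}, strong={oma_strong:.2f}")

    With these assumed numbers both users attain a higher rate than under the orthogonal split, which is the kind of gain the article attributes to reusing non-orthogonal resources.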

    Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

    Multiple-Input Multiple-Output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectral and energy efficiency. To realize the benefits of massive MIMO, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtaining CSIT in FDD massive MIMO systems require downlink training and CSI feedback, but such training incurs a large overhead because of the high dimensionality of the channel matrix. In this dissertation, we improve the performance of FDD massive MIMO networks in terms of downlink training overhead reduction by designing an efficient downlink beamforming method and developing a new algorithm that estimates the channel state information using compressive sensing techniques. First, we design an efficient downlink beamforming method based on partial CSI: by exploiting the relationship between uplink directions of arrival (DoAs) and downlink directions of departure (DoDs), we derive an expression for the estimated downlink DoDs, which is then used for downlink beamforming. Second, by exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the most informative columns of the measurement matrix for efficient CSIT acquisition, reducing the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the proposed beamforming method with traditional methods in terms of downlink achievable rate, and simulation results show that the proposed method outperforms the traditional beamforming methods.
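
    The dissertation's exact algorithm is not reproduced here, but the general idea of exploiting channel sparsity to cut downlink training can be sketched as follows: the channel is sparse in an angular (DFT) dictionary, far fewer pilot measurements than antennas are taken, and a greedy recovery (orthogonal matching pursuit) reconstructs the channel. The dictionary, pilot design, sparsity level, and choice of OMP are illustrative assumptions, not the author's method.

        import numpy as np

        rng = np.random.default_rng(0)
        M, K, T = 64, 4, 20   # BS antennas, channel sparsity, pilot length (assumed)

        # Angular-domain dictionary (unitary DFT): the channel is sparse in this basis.
        F = np.fft.fft(np.eye(M)) / np.sqrt(M)

        # Sparse channel: K dominant paths with random angles and complex gains.
        support = rng.choice(M, K, replace=False)
        x = np.zeros(M, dtype=complex)
        x[support] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        h = F @ x

        # Downlink training: T << M random pilot beams, observed with noise.
        A = (rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))) / np.sqrt(2 * T)
        y = A @ h + 0.01 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

        # Orthogonal matching pursuit over the combined sensing matrix A @ F:
        # greedily pick the dictionary column most correlated with the residual,
        # then re-fit all selected coefficients by least squares.
        Phi = A @ F
        residual, idx = y.copy(), []
        for _ in range(K):
            idx.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
            residual = y - Phi[:, idx] @ coef

        x_hat = np.zeros(M, dtype=complex)
        x_hat[idx] = coef
        h_hat = F @ x_hat
        print("Channel NMSE:", np.linalg.norm(h - h_hat)**2 / np.linalg.norm(h)**2)

    The point of the sketch is only that T = 20 measurements can recover a 64-antenna channel when it is sparse, which is the mechanism by which compressive sensing reduces training overhead relative to LS/MMSE estimation of the full channel vector.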

    Performance Analysis of Channel Extrapolation in FDD Massive MIMO Systems

    Channel estimation for the downlink of frequency division duplex (FDD) massive MIMO systems is well known to generate a large overhead, as the amount of training generally scales with the number of transmit antennas in a MIMO system. In this paper, we consider the solution of extrapolating the channel frequency response from uplink pilot estimates to the downlink frequency band, which completely removes the training overhead. We first show that conventional estimators fail to achieve reasonable accuracy, and propose instead to use high-resolution channel estimation. We derive theoretical lower bounds (LBs) for the mean squared error (MSE) of the extrapolated channel. Assuming that the paths are well separated, the LB simplifies to an expression that gives considerable physical insight: the MSE is inversely proportional to the number of receive antennas, while the extrapolation performance penalty scales with the square of the ratio of the frequency offset to the training bandwidth. The channel extrapolation performance is validated through numerical simulations and experimental measurements taken in an anechoic chamber. Our main conclusion is that channel extrapolation is a viable solution for FDD massive MIMO systems if accurate system calibration is performed and favorable propagation conditions are present.
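
    As a toy illustration of the extrapolation idea, rather than the paper's high-resolution estimator, the sketch below fits per-path delays and complex gains on an uplink band by a grid search plus least squares, then evaluates the fitted multipath model on a disjoint downlink band. The two-path channel, band edges, grid resolution, and noise level are assumptions made only for this example.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed two-path channel: delays in seconds and complex gains.
        delays = np.array([50e-9, 180e-9])
        gains = np.array([1.0, 0.5 * np.exp(1j * 0.7)])

        def cfr(freqs):
            """Channel frequency response of the multipath model."""
            return np.sum(gains[None, :] *
                          np.exp(-2j * np.pi * freqs[:, None] * delays[None, :]), axis=1)

        # Uplink pilots over 1.90-1.92 GHz; downlink band 2.10-2.12 GHz (assumed FDD split).
        f_ul = np.linspace(1.90e9, 1.92e9, 64)
        f_dl = np.linspace(2.10e9, 2.12e9, 64)
        h_ul = cfr(f_ul) + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

        # Crude "high-resolution" step: grid search over two candidate delays,
        # least-squares fit of the gains (a stand-in for MUSIC/ESPRIT-type estimators).
        grid = np.arange(0, 250e-9, 2e-9)
        best = None
        for d1 in grid:
            for d2 in grid[grid > d1]:
                A = np.exp(-2j * np.pi * f_ul[:, None] * np.array([d1, d2])[None, :])
                g, *_ = np.linalg.lstsq(A, h_ul, rcond=None)
                err = np.linalg.norm(h_ul - A @ g)
                if best is None or err < best[0]:
                    best = (err, np.array([d1, d2]), g)

        _, d_hat, g_hat = best
        # Extrapolate: evaluate the fitted path model on the downlink band.
        h_dl_hat = np.sum(g_hat[None, :] *
                          np.exp(-2j * np.pi * f_dl[:, None] * d_hat[None, :]), axis=1)
        h_dl_true = cfr(f_dl)
        print("Extrapolation NMSE:",
              np.linalg.norm(h_dl_true - h_dl_hat)**2 / np.linalg.norm(h_dl_true)**2)

    The example also hints at why the penalty grows with the frequency offset: any residual delay error is multiplied by an ever larger phase term as the evaluation frequency moves away from the training band.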

    Massive MIMO for Internet of Things (IoT) Connectivity

    Massive MIMO is considered to be one of the key technologies in the emerging 5G systems, but it is also a concept applicable to other wireless systems. Exploiting the large number of degrees of freedom (DoFs) of massive MIMO is essential for achieving high spectral efficiency, high data rates, and extreme spatial multiplexing of densely distributed users. On the one hand, the benefits of applying massive MIMO to broadband communication are well known, and there is a large body of research on designing communication schemes to support high rates. On the other hand, using massive MIMO for the Internet of Things (IoT) is still a developing topic, as IoT connectivity has requirements and constraints that are significantly different from those of broadband connections. In this paper, we investigate the applicability of massive MIMO to IoT connectivity. Specifically, we treat the two generic types of IoT connections envisioned in 5G, massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC), and fill this gap by identifying the opportunities and challenges in exploiting massive MIMO for IoT connectivity. We provide insights into the trade-offs that emerge when massive MIMO is applied to mMTC or URLLC and present a number of suitable communication schemes. The discussion continues to the questions of network slicing of the wireless resources and the use of massive MIMO to simultaneously support IoT connections with very heterogeneous requirements. The main conclusion is that massive MIMO can bring benefits to scenarios with IoT connectivity, but it requires tight integration of the physical-layer techniques with the protocol design.
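
    One physical-layer property that makes massive MIMO attractive for reliability-critical IoT links such as URLLC is channel hardening: with many antennas and maximum-ratio combining, the effective channel gain fluctuates less and less around its mean. The Monte Carlo sketch below is an illustrative experiment under assumed i.i.d. Rayleigh fading, not a result from the paper; it shows how the gain variability and a simple 3 dB outage metric shrink as the antenna count grows.

        import numpy as np

        rng = np.random.default_rng(2)
        trials = 20_000

        # Channel hardening under i.i.d. Rayleigh fading with maximum-ratio combining:
        # the normalised effective gain ||h||^2 / M concentrates around 1 as M grows.
        for M in (1, 16, 128):
            h = (rng.standard_normal((trials, M)) +
                 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
            gain = np.sum(np.abs(h) ** 2, axis=1) / M
            # Outage-style metric: how often the gain drops 3 dB below its mean of 1.
            outage = np.mean(gain < 0.5)
            print(f"M={M:4d}: std of gain = {gain.std():.3f}, P(gain < -3 dB) = {outage:.4f}")

    Fewer deep fades means fewer retransmissions, which is one reason the large number of DoFs discussed above translates into latency and reliability benefits rather than only throughput.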

    Millimeter Wave Cellular Networks: A MAC Layer Perspective

    The millimeter wave (mmWave) frequency band is seen as a key enabler of multi-gigabit wireless access in future cellular networks. To overcome the propagation challenges, mmWave systems use large numbers of antenna elements at both the base station and the user equipment, which leads to high directivity gains, fully-directional communications, and possibly noise-limited operation. The fundamental differences between mmWave networks and traditional ones challenge the classical design constraints, objectives, and available degrees of freedom. This paper addresses the implications that highly directional communication has on the design of an efficient medium access control (MAC) layer. The paper discusses key MAC layer issues, such as synchronization, random access, handover, channelization, interference management, scheduling, and association. It provides an integrated view of MAC layer issues for cellular networks, identifies new challenges and tradeoffs, and provides novel insights and solution approaches.
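
    A recurring MAC-layer consequence of fully-directional communication is the cost of cell discovery: narrower beams increase array gain but multiply the number of beam pairs that synchronization and random access must sweep. The back-of-the-envelope sketch below makes that trade-off concrete; the beamwidths, sector sizes, and per-beam dwell time are assumed values for illustration and are not taken from the paper.

        import math

        # Exhaustive beam sweep for initial access: BS and UE scan their angular
        # sectors with beams of a given width; every BS/UE beam pair must be probed.
        DWELL_MS = 0.1          # assumed dwell time per beam pair (ms)
        BS_SECTOR = 120         # BS covers a 120-degree sector (assumed)
        UE_SECTOR = 360         # UE searches omnidirectionally (assumed)

        for bs_bw, ue_bw in [(30, 90), (10, 45), (5, 15)]:
            n_pairs = math.ceil(BS_SECTOR / bs_bw) * math.ceil(UE_SECTOR / ue_bw)
            sweep_ms = n_pairs * DWELL_MS
            # Rough directivity of the beam pair relative to omnidirectional
            # (simple 2-D approximation), in dB.
            gain_db = 10 * math.log10((360 / bs_bw) * (360 / ue_bw))
            print(f"BS {bs_bw:2d} deg / UE {ue_bw:2d} deg: "
                  f"{n_pairs:3d} beam pairs, sweep ~{sweep_ms:5.1f} ms, "
                  f"combined gain ~{gain_db:4.1f} dB")

    The same tension between link budget and discovery delay reappears in the paper's other MAC issues, such as random access collisions and handover between narrow beams.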