14 research outputs found

    Rate splitting in MIMO RIS-assisted systems with hardware impairments and improper signaling

    In this paper, we propose an optimization framework for rate splitting (RS) techniques in multiple-input multiple-output (MIMO) reconfigurable intelligent surface (RIS)-assisted systems, possibly with I/Q imbalance (IQI). This framework can be applied to any optimization problem in which the objective and/or constraints are linear functions of the rates and/or transmit covariance matrices. Such problems include minimum-weighted and weighted-sum rate maximization, total power minimization for a target rate, minimum-weighted energy efficiency (EE) and global EE maximization. The framework may be applied to any interference-limited system with hardware impairments. For the sake of illustration, we consider a multicell MIMO RIS-assisted broadcast channel (BC) in which the base stations (BSs) and/or the users may suffer from IQI. Since IQI generates improper noise, we consider improper Gaussian signaling (IGS) as an interference-management technique that can additionally compensate for IQI. We show that RS, when combined with IGS, can substantially improve the spectral and energy efficiency of overloaded networks (i.e., when the number of users per cell is larger than the number of transmit/receive antennas).
    The work of Ignacio Santamaria has been partly supported by the project ADELE PID2019-104958RB-C43, funded by MCIN/AEI/10.13039/501100011033. The work of Eduard Jorswieck was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the program of "Souverän. Digital. Vernetzt." joint project 6G-RIC, project identification numbers 16KISK020K and 16KISK031.
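The improperness that IQI introduces can be made concrete with a small numerical sketch. The following is not from the paper: it is a minimal illustration of the standard widely linear IQI model y = g1*x + g2*conj(x), with hypothetical gains, showing how I/Q imbalance turns a proper Gaussian signal into an improper one as measured by the circularity coefficient kappa = |E[x^2]| / E[|x|^2]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Proper (circularly symmetric) Gaussian signal: E[x^2] = 0
proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Widely linear I/Q-imbalance model with hypothetical gains g1, g2
g1, g2 = 1.0, 0.3
impaired = g1 * proper + g2 * np.conj(proper)

def circularity(x):
    """Circularity coefficient kappa = |E[x^2]| / E[|x|^2]; 0 for proper signals."""
    return abs(np.mean(x ** 2)) / np.mean(np.abs(x) ** 2)

print(circularity(proper))    # close to 0: the signal is proper
print(circularity(impaired))  # clearly nonzero: IQI renders the signal improper
```

A proper signal has kappa near 0; IGS deliberately transmits signals with nonzero kappa, which is why it can partly compensate for this kind of impairment.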

    Optimization of Rate-Splitting Multiple Access in Beyond Diagonal RIS-assisted URLLC Systems

    This paper proposes a general optimization framework for rate splitting multiple access (RSMA) in beyond diagonal (BD) reconfigurable intelligent surface (RIS) assisted ultra-reliable low-latency communications (URLLC) systems. This framework can solve a large family of optimization problems in which the objective and/or constraints are linear functions of the rates and/or energy efficiency (EE) of users. Using this framework, we show that RSMA and RIS can be mutually beneficial tools when the system is overloaded, i.e., when the number of users per cell is higher than the number of base station (BS) antennas. Additionally, we show that the benefits of RSMA increase when the packets are shorter and/or the reliability constraint is more stringent. Furthermore, we show that the RSMA benefits increase with the number of users per cell and decrease with the number of BS antennas. Finally, we show that RIS (either diagonal or BD) can substantially improve the system performance, and that BD-RIS outperforms regular RIS. Comment: submitted to an IEEE journal
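The short-packet effect the abstract refers to is commonly captured by the finite-blocklength normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε). A minimal sketch of that standard formula (not the paper's optimization framework; the SNR value is arbitrary) shows why shorter packets and stricter reliability shrink the achievable rate:

```python
import math
from statistics import NormalDist

def fbl_rate(snr, n, eps):
    """Normal-approximation achievable rate (bits/channel use) at blocklength n
    and block error probability eps, for a complex AWGN channel at linear SNR."""
    c = math.log2(1 + snr)                                   # Shannon capacity
    v = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2    # channel dispersion
    qinv = NormalDist().inv_cdf(1 - eps)                     # Q^{-1}(eps)
    return max(c - math.sqrt(v / n) * qinv, 0.0)

snr = 10.0  # 10 dB, chosen arbitrarily for illustration
# Shorter packets and stricter reliability both reduce the achievable rate:
print(fbl_rate(snr, n=2000, eps=1e-5))
print(fbl_rate(snr, n=200,  eps=1e-5))
print(fbl_rate(snr, n=200,  eps=1e-9))
```

As n grows, the rate approaches the Shannon capacity log2(1 + SNR), which is why these penalties matter only in the short-packet URLLC regime.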

    A Tutorial on Nonorthogonal Multiple Access for 5G and Beyond

    Today's wireless networks allocate radio resources to users based on the orthogonal multiple access (OMA) principle. However, as the number of users increases, OMA-based approaches may not meet the stringent emerging requirements, including very high spectral efficiency, very low latency, and massive device connectivity. The nonorthogonal multiple access (NOMA) principle emerges as a solution that improves spectral efficiency while allowing some degree of multiple access interference at receivers. In this tutorial-style paper, we aim to provide a unified model for NOMA, including uplink and downlink transmissions, along with extensions to multiple-input multiple-output (MIMO) and cooperative communication scenarios. Through numerical examples, we compare the performance of OMA and NOMA networks. Implementation aspects and open issues are also detailed. Comment: 25 pages, 10 figures
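The core power-domain NOMA idea surveyed above (superposition coding at the transmitter, SIC at the stronger receiver) can be sketched for two downlink users. The channel gains and power split below are hypothetical, and SIC is assumed perfect:

```python
import math

def noma_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Two-user downlink power-domain NOMA rates (bits/s/Hz).
    The far (weak) user decodes its own signal treating the near user's as noise;
    the near (strong) user first removes the far user's signal via SIC."""
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = math.log2(1 + p_near * g_near / noise)  # after perfect SIC
    return r_near, r_far

# Hypothetical gains; total power 10, split 2:8 in favor of the weak user
g_near, g_far = 10.0, 1.0
r_near, r_far = noma_rates(p_near=2.0, p_far=8.0, g_near=g_near, g_far=g_far)

# OMA baseline: each user gets half the bandwidth with the full power budget
r_near_oma = 0.5 * math.log2(1 + 10.0 * g_near)
r_far_oma = 0.5 * math.log2(1 + 10.0 * g_far)
print(r_near + r_far, r_near_oma + r_far_oma)  # NOMA sum rate exceeds OMA here
```

The gain comes precisely from the channel-gain disparity between the paired users, which is why user pairing is a central design question in NOMA.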

    Power Allocation in Uplink NOMA-Aided Massive MIMO Systems

    In the development of the fifth-generation (5G) as well as the vision for the future generations of wireless communications networks, massive multiple-input multiple-output (MIMO) technology has played an increasingly important role as a key enabler to meet the growing demand for very high data throughput. By equipping base stations (BSs) with hundreds to thousands of antennas, massive MIMO technology is capable of simultaneously serving multiple users in the same time-frequency resources with simple linear signal processing in both the downlink (DL) and uplink (UL) transmissions. Thanks to the asymptotically orthogonal property of users' wireless channels, simple linear signal processing can effectively mitigate inter-user interference and noise while boosting the desired signal's gain, and hence achieves high data throughput. In order to realize this orthogonal property in a practical system, one critical requirement of massive MIMO is instantaneous channel state information (CSI), which is acquired via channel estimation with pilot signaling. Unfortunately, the connection capability of a conventional massive MIMO system is strictly limited by the time resource spent on channel estimation. Attempting to serve more users beyond this limit may result in a phenomenon known as pilot contamination, which causes correlated interference, lowers signal gain, and hence severely degrades the system's performance. A natural question is: "Is it at all possible to serve more users beyond the limit of a conventional massive MIMO system?" The main contribution of this thesis is to provide a promising solution by integrating the concept of nonorthogonal multiple access (NOMA) into a massive MIMO system.
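The pilot contamination described above can be reproduced in a few lines: when two users share a pilot, the least-squares channel estimate is the sum of both channels, so a combiner built from it amplifies the interfering user as much as the desired one. A sketch under i.i.d. Rayleigh fading (antenna count, noise level, and the two-user setup are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200  # number of BS antennas (illustrative)

# Two users share the same pilot sequence; i.i.d. Rayleigh channels
h1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h2 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Least-squares estimate from the shared pilot: contaminated by user 2
noise = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_hat = h1 + h2 + noise

# Maximum-ratio combining with the contaminated estimate: the interference
# projection h_hat^H h2 grows with M just like the desired projection
# h_hat^H h1, so adding antennas does not improve the ratio.
desired = abs(np.vdot(h_hat, h1)) ** 2
interference = abs(np.vdot(h_hat, h2)) ** 2
ratio = desired / interference
print(ratio)  # stays near 1 regardless of M: pilot contamination
```

With orthogonal pilots, the interference projection would instead stay O(M) while the desired term grows as O(M^2), which is the array gain NOMA-style pilot sharing has to recover by other means (e.g., SIC).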
The key concept of NOMA is based on assigning each unit of orthogonal radio resources, such as frequency carriers, time slots, or spreading codes, to more than one user and utilizing a non-linear signal processing technique such as successive interference cancellation (SIC) or dirty paper coding (DPC) to mitigate inter-user interference. In a massive MIMO system, pilot sequences are also orthogonal resources, which can be allocated with the NOMA approach. By sharing a pilot sequence among more than one user and utilizing the SIC technique, a massive MIMO system can serve more users with a fixed amount of time spent on channel estimation. However, as a consequence of pilot reuse, correlated interference becomes the main challenge that limits the spectral efficiency (SE) of a massive MIMO-NOMA system. To address this issue, this thesis focuses on how to mitigate correlated interference when integrating NOMA into a massive MIMO system in order to accommodate a higher number of wireless users. In the first part, we consider the problem of SIC in a single-cell massive MIMO system in order to serve twice the number of users with the aid of time-offset pilots. With the proposed time-offset pilots, users are divided into two groups and the uplink pilots from one group are transmitted simultaneously with the uplink data of the other group, which allows the system to accommodate more users for a given number of pilots. Successive interference cancellation is developed to alleviate the effect of pilot contamination and enhance data detection. In the second part, the work is extended to a cell-free network, where there is no cell boundary and a user can be served by multiple base stations. The chapter focuses on the NOMA approach for sharing pilot sequences among users.
Unlike conventional cell-free massive MIMO-NOMA systems, in which the UL signals from different access points are equally combined over the backhaul network, we first develop an optimal backhaul combining (OBC) method to maximize the UL signal-to-interference-plus-noise ratio (SINR). It is shown that, with OBC, the correlated interference can be effectively mitigated if the number of users assigned to each pilot sequence is less than or equal to the number of base stations. As a result, the cell-free massive MIMO-NOMA system with OBC can enjoy unbounded performance as the number of antennas at each BS tends to infinity. Finally, we investigate the impact of imperfect SIC on a NOMA cell-free massive MIMO system. Unlike the majority of existing research on the performance evaluation of NOMA, which assumes perfect channel state information and perfect data detection for SIC, we take into account the effect of practical (hence imperfect) SIC. We show that the received signal at the backhaul network of a cell-free massive MIMO-NOMA system can be effectively treated as a signal received over an additive white Gaussian noise (AWGN) channel. As a result, a discrete joint distribution between the interfering signal and its detected version can be found analytically, from which an adaptive SIC scheme is proposed to improve the performance of interference cancellation.

    Congestion Control for Massive Machine-Type Communications: Distributed and Learning-Based Approaches

    The Internet of things (IoT) is going to shape the future of wireless communications by allowing seamless connections among a wide range of everyday objects. Machine-to-machine (M2M) communication is known to be the enabling technology for the development of the IoT. With M2M, devices are allowed to interact and exchange data with little or no human intervention. Recently, M2M communication, also referred to as machine-type communication (MTC), has received increased attention due to its potential to support diverse applications including eHealth, industrial automation, intelligent transportation systems, and smart grids. M2M communication is known to have specific features and requirements that differ from those of traditional human-to-human (H2H) communication. As specified by the Third Generation Partnership Project (3GPP), MTC devices are inexpensive, low-power, and mostly low-mobility devices. Furthermore, MTC traffic is usually characterized by infrequent transmissions, small data payloads, and mainly uplink flows. Most importantly, the number of MTC devices is expected to far surpass that of H2H devices. Smart cities are an example of such mass-scale deployment. These features impose various challenges related to efficient energy management, enhanced coverage, and diverse quality of service (QoS) provisioning, among others. The diverse applications of M2M are going to lead to exponential growth in M2M traffic. With massive M2M deployment, an enormous number of devices is expected to access the wireless network concurrently; hence, network congestion is likely to occur. Cellular networks have been recognized as excellent candidates for M2M support. Indeed, cellular networks are mature, well-established networks with ubiquitous coverage and reliability, which allows cost-effective deployment of M2M communications. However, cellular networks were originally designed for human-centric services with high-cost devices and ever-increasing rate requirements.
    Additionally, the conventional random access (RA) mechanism used in Long Term Evolution-Advanced (LTE-A) networks lacks the capability of handling the enormous number of access attempts expected from massive MTC. In particular, this RA technique acts as a performance bottleneck due to frequent collisions, which lead to excessive delay and resource wastage. Also, the lengthy handshaking process of the conventional RA technique results in high signaling overhead, specifically for M2M devices with small payloads. Therefore, designing efficient medium access schemes is critical for the survival of M2M networks. In this thesis, we study the uplink access of M2M devices with a focus on overload control and congestion handling. In this regard, we mainly provide two different access techniques, keeping in mind the distinct features and requirements of MTC, including massive connectivity, latency reduction, and energy management. Full information gathering is impractical for such massive networks with a tremendous number of devices; hence, we preserve low complexity and limited information exchange among different network entities by introducing distributed techniques. Furthermore, machine learning is employed to enhance performance with no or limited information exchange at the decision maker. The proposed techniques are assessed via extensive simulations as well as rigorous analytical frameworks. First, we propose an efficient distributed overload control algorithm for M2M with massive access, referred to as M2M-OSA. The proposed algorithm can efficiently allocate the available network resources to a massive number of devices within a relatively small and bounded contention time and with reduced overhead. By resolving collisions, the proposed algorithm is capable of achieving full resource utilization along with reduced average access delay and energy saving.
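The collision bottleneck of conventional contention-based RA is easy to quantify: if each active device independently picks one of M preambles in a slot, its attempt survives only when no other device picks the same preamble. A sketch of this classic contention model, assuming the 54 preambles LTE typically reserves for contention-based access:

```python
def success_prob(n_devices, n_preambles):
    """Probability that a given device's random-access attempt collides with
    no other device, when all devices pick preambles uniformly at random."""
    return (1 - 1 / n_preambles) ** (n_devices - 1)

# With ~54 contention preambles, massive simultaneous M2M arrivals collapse access:
for n in (10, 100, 1000):
    print(n, success_prob(n, 54))
```

The success probability decays roughly as exp(-(n-1)/M), so overload-control schemes such as the thesis's M2M-OSA aim to spread or resolve these attempts rather than let them collide repeatedly.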
    For Beta-distributed traffic, we provide an analytical evaluation of the performance of the proposed algorithm in terms of access delay, total service time, energy consumption, and blocking probability. This performance assessment accounts for various scenarios, including slightly and seriously congested cases, in addition to finite and infinite retransmission limits for the devices. Moreover, we provide a discussion of the non-ideal situations that could be encountered in real-life deployment of the proposed algorithm, supported by possible solutions. For further energy saving, we introduce a modified version of M2M-OSA with a traffic regulation mechanism. In the second part of the thesis, we adopt a promising alternative to the conventional random access mechanism, namely the fast uplink grant. The fast uplink grant was first proposed by the 3GPP for latency reduction, as it allows the base station (BS) to directly schedule machine-type devices (MTDs) without receiving any scheduling requests. In our work, to handle the major challenges associated with the fast uplink grant, namely active-set prediction and optimal scheduling, both non-orthogonal multiple access (NOMA) and learning techniques are utilized. In particular, we propose a two-stage NOMA-based fast uplink grant scheme that first employs multi-armed bandit (MAB) learning to schedule the fast-grant devices with no prior information about their QoS requirements or channel conditions at the BS. Afterwards, NOMA facilitates grant sharing, where pairing is done in a distributed manner to reduce signaling overhead. In the proposed scheme, NOMA plays a major role in decoupling the two major challenges of fast-grant schemes by permitting pairing with only active MTDs. Consequently, the wastage of resources due to traffic prediction errors can be significantly reduced. We devise an abstraction model for the source traffic predictor needed for the fast grant such that the prediction error can be evaluated.
    Accordingly, the performance of the proposed scheme is analyzed in terms of average resource wastage and outage probability. The simulation results show the effectiveness of the proposed method in saving scarce resources while verifying the accuracy of the analysis. In addition, the ability of the proposed scheme to select high-quality MTDs under strict latency constraints is demonstrated.
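The MAB component can be illustrated with a plain UCB1 learner in which each arm is an MTD and the reward is 1 when the scheduled device actually had data to send (i.e., the grant was not wasted). This is a generic UCB1 sketch, not the thesis's scheme, and the device activation probabilities are hypothetical:

```python
import math
import random

def ucb1_schedule(mean_rewards, horizon=5000, seed=0):
    """UCB1 bandit: each arm is an MTD; Bernoulli reward 1 if the scheduled
    device was active (grant used), 0 otherwise. Returns per-arm play counts."""
    rng = random.Random(seed)
    k = len(mean_rewards)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            # Pick the arm with the highest upper confidence bound
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < mean_rewards[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1_schedule([0.1, 0.4, 0.8])  # hypothetical activation probabilities
print(counts)  # the most frequently active MTD attracts most of the grants
```

The learner needs no prior knowledge of the activation statistics, which mirrors the abstract's premise that the BS has no prior information about the devices' traffic or channels.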

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    © 1998-2012 IEEE. Future 5th-generation networks are expected to enable three key services: enhanced mobile broadband, massive machine-type communications, and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, it is expected that the reliability of one transmission of a 32-byte packet will be at least 99.999% and the latency will be at most 1 ms. This unprecedented level of reliability and latency will enable various new applications, such as smart grids, industrial automation, and intelligent transport systems. In this survey, we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands. This paper evaluates the relevant PHY and MAC techniques for their ability to improve reliability and reduce latency. We identify that enabling long-term evolution to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, this paper discusses potential future research directions and challenges in achieving the URLLC requirements.
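The coupling between per-attempt reliability and the latency budget can be made concrete: with independent transmission attempts, k repetitions achieve overall reliability 1 − (1 − p)^k, so the five-nines target fixes a minimum attempt count and hence a latency floor. A small sketch with illustrative per-attempt success probabilities (not values from the survey):

```python
import math

def repetitions_needed(p_success, target_reliability):
    """Minimum number of i.i.d. transmission attempts so that at least one
    succeeds with the target probability: 1 - (1 - p)^k >= target."""
    return math.ceil(math.log(1 - target_reliability) / math.log(1 - p_success))

# Meeting five-nines reliability: a better per-attempt success probability
# needs fewer repetitions, leaving more of the 1 ms budget for each attempt.
print(repetitions_needed(0.99, 0.99999))
print(repetitions_needed(0.50, 0.99999))
```

This is why many of the PHY/MAC techniques surveyed (diversity, grant-free repetitions, robust coding) trade raw spectral efficiency for a higher per-attempt success probability.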

    Energy-efficient resource allocation in limited fronthaul capacity cloud-radio access networks

    In recent years, cloud radio access networks (C-RANs) have demonstrated their role as a formidable technology candidate to address the challenging issues arising from the advent of fifth-generation (5G) mobile networks. In C-RANs, the modules that process data and handle radio signals are physically separated into two main functional groups: the baseband unit (BBU) pool, consisting of multiple BBUs in the cloud, and the radio access networks (RANs), consisting of several low-power remote radio heads (RRHs) whose functionality is reduced to radio transmission/reception. Thanks to the centralized computation capability of cloud computing, C-RANs enable coordination between RRHs to significantly improve the achievable spectral efficiency and satisfy the explosive traffic demand from users. More importantly, this enhanced performance can be attained in a power-saving mode, which gives rise to the energy-efficient C-RAN perspective. Note that such improvement can be achieved under an ideal fronthaul condition of very high and stable capacity. In practice, however, the dedicated fronthaul capacity must be divided to connect a large number of RRHs to the cloud, leading to a scenario of non-ideal, limited fronthaul capacity for each RRH. This imposes an upper bound on each user's spectral efficiency, which limits the promise of C-RANs. To fully harness energy-efficient C-RANs while respecting their stringent fronthaul capacity limits, a more appropriate and efficient network design is essential. The main scope of this thesis is to optimize the green performance of C-RANs in terms of energy efficiency under non-ideal fronthaul capacity conditions, namely energy-efficient design in limited-fronthaul-capacity C-RANs.
    Our study, via jointly determining the transmit beamforming, RRH selection, and RRH–user association, targets the following three vital design issues: the optimal trade-off between maximizing the achievable sum rate and minimizing the total power consumption; the maximum energy efficiency under an adaptive rate-dependent power model; and the optimal joint energy-efficient design of virtual computing along with the radio resource allocation in virtualized C-RANs. The significant contributions and novelties of this work are as follows. First, the joint design of transmit beamforming, RRH selection, and RRH–user association to optimize the trade-off between user sum-rate maximization and total power consumption minimization in the downlink transmissions of C-RANs is presented in Chapter 3. We develop one powerful high-complexity algorithm and two novel efficient low-complexity algorithms to obtain a globally optimal solution and high-quality sub-optimal solutions, respectively. The findings in this chapter show that the proposed algorithms, besides overcoming the burden of solving difficult non-convex problems within polynomial time, also outperform techniques in the literature in terms of convergence and achieved network performance. Second, Chapter 4 proposes a novel model reflecting the dependence of consumed power on the user data rate and highlights its impact through various energy-efficiency metrics in C-RANs. The superior performance of the results from Chapter 4, compared to conventional work without the adaptive rate-dependent power model, corroborates the importance of the newly proposed model in appropriately conserving system power to achieve the most energy-efficient C-RAN performance. Finally, in Chapter 5, we propose a novel model of the cloud center that enables the virtualization and adaptive allocation of computing resources according to the data traffic demand, in order to conserve more power.
    The problem of jointly designing the virtual computing resources together with the beamforming, RRH selection, and RRH–user association to maximize the virtualized C-RAN energy efficiency is considered. To cope with the large size of the formulated optimization problem, a novel efficient algorithm with much lower complexity than previous work is developed to obtain the solution. The results from different evaluations demonstrate the superiority of the proposed designs over conventional work.
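The central tension of the thesis, namely that a limited fronthaul caps each user's rate and therefore the network's energy efficiency, can be sketched in a toy model. All power and capacity figures below are hypothetical, and the model is far simpler than the thesis's joint beamforming/association design:

```python
import math

def network_ee(snrs, p_tx, p_static=10.0, c_fronthaul=4.0, bw=1.0):
    """Global energy efficiency (bits/s/Hz per Watt) of a toy C-RAN downlink.
    Each RRH's user rate is capped by its fronthaul capacity c_fronthaul
    (bits/s/Hz); p_static lumps circuit and fronthaul power (hypothetical)."""
    rates = [min(bw * math.log2(1 + s), c_fronthaul) for s in snrs]
    return sum(rates) / (sum(p_tx) + p_static)

# Once the fronthaul cap binds, extra transmit power buys no rate and only
# hurts energy efficiency:
print(network_ee(snrs=[15, 15], p_tx=[1, 1]))
print(network_ee(snrs=[150, 150], p_tx=[10, 10]))
```

This is the qualitative reason rate-aware power models and RRH selection matter: the energy-optimal operating point sits well below the power that maximizes raw SNR.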