
    A Novel Rate Improvement Technique of Power Domain NOMA in Wireless 5G

    Error performance (EP) and capacity improvement (CI) are two important aspects that must be addressed by NOMA in future cellular networks. In a dense user environment, a transmit antenna selection NOMA (TAS-NOMA) algorithm is often employed to save power. Beyond this antenna selection scheme, it is also important to manage the sum rate calculation (i.e., spatial diversity) and the delay (latency). In this paper, we propose an ensemble average sum rate (EASR) algorithm that is capable of controlling the sum rate while minimizing latency. It should be noted, however, that the system suffers from non-linearity as the number of transmit antennas increases. The proposed technique produces better results in terms of power allocation than the conventional selection algorithm.
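The power-domain superposition underlying TAS-NOMA can be sketched numerically. Below is a minimal two-user sum rate calculation, assuming perfect successive interference cancellation (SIC) at the strong user; the SNR values and power split are illustrative, not taken from the paper:

```python
import math

def noma_sum_rate(snr_near, snr_far, alpha_far=0.8):
    """Two-user downlink power-domain NOMA sum rate (bits/s/Hz).

    snr_near / snr_far: received SNR of the strong and weak user (linear).
    alpha_far: fraction of transmit power allocated to the far (weak) user.
    """
    alpha_near = 1.0 - alpha_far
    # Far user decodes its own signal, treating the near user's as noise.
    rate_far = math.log2(1 + alpha_far * snr_far / (alpha_near * snr_far + 1))
    # Near user cancels the far user's signal via SIC, then decodes its own.
    rate_near = math.log2(1 + alpha_near * snr_near)
    return rate_far + rate_near
```

For example, with a strong user at 20 dB and a weak user at 10 dB, the NOMA sum rate exceeds what time-shared orthogonal access achieves over the same channels.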

    Analysis of MAC-level throughput in LTE systems with link rate adaptation and HARQ protocols

    LTE is rapidly gaining momentum as the basis of future 4G cellular systems, and real operational networks are being deployed worldwide. To achieve high throughput, in addition to an advanced physical-layer design, LTE exploits a combination of sophisticated mechanisms at the radio resource management layer. This makes it difficult to develop analytical tools that accurately assess and optimise user-perceived throughput under realistic channel assumptions, so most existing studies focus only on link-layer throughput or consider individual mechanisms in isolation. The main contribution of this paper is a unified modelling framework for the MAC-level downlink throughput of a single LTE cell, which caters for wideband CQI feedback schemes, AMC and HARQ protocols as defined in the LTE standard. We have validated the accuracy of the proposed model through detailed LTE simulations carried out with the ns-3 simulator extended with the LENA module for LTE.
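The interaction of AMC and HARQ that such a framework captures can be illustrated with a simple goodput formula. The sketch below assumes i.i.d. block errors across HARQ attempts and an illustrative CQI distribution; it is a drastic simplification of the paper's model (it ignores combining gains), not a reproduction of it:

```python
def mac_goodput(cqi_dist, rate, bler, max_tx=4):
    """Average MAC-level goodput over a stationary wideband CQI distribution.

    cqi_dist: {cqi: probability}; rate/bler: per-CQI spectral efficiency
    (bits/s/Hz) and per-attempt block error rate. HARQ allows up to max_tx
    attempts; failures are assumed i.i.d. across attempts.
    """
    total = 0.0
    for cqi, p in cqi_dist.items():
        p_fail = bler[cqi]
        p_success = 1 - p_fail ** max_tx                  # delivered within max_tx tries
        mean_tx = (1 - p_fail ** max_tx) / (1 - p_fail)   # expected attempts consumed
        # Goodput = useful bits delivered per channel use spent on (re)transmissions.
        total += p * rate[cqi] * p_success / mean_tx
    return total
```

Under these assumptions the per-CQI goodput collapses to rate × (1 − BLER), which is why a 10% target BLER is a common operating point for AMC.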

    Studies on efficient spectrum sharing in coexisting wireless networks.

    Wireless communication faces serious challenges worldwide: a severe spectrum shortage together with an explosive increase in wireless communication demand. Moreover, different communication networks may coexist in the same geographical area. Allowing multiple networks to share the same frequency, cooperatively or opportunistically, can potentially enhance spectrum efficiency. This dissertation investigates important spectrum sharing schemes for coexisting networks. For coexisting networks operating in the interweave cognitive radio mode, most existing works focus on the secondary network's spectrum sensing and access schemes. However, the primary network can be selfish and tend to use up all the frequency resources. In this dissertation, a novel optimization scheme is proposed that lets the primary network maximally release unneeded frequency resources to secondary networks. The optimization problems are formulated for both uplink and downlink orthogonal frequency-division multiple access (OFDMA)-based primary networks, and near-optimal algorithms are proposed as well. For coexisting networks in the underlay cognitive radio mode, this work focuses on resource allocation in distributed secondary networks, subject to the primary network's rate constraint being met. A globally optimal multicarrier discrete distributed (MCDD) algorithm and a suboptimal Gibbs sampler based Lagrangian algorithm (GSLA) are proposed to solve the problem distributively. Regarding the dirty paper coding (DPC)-based system, where multiple networks share a common transmitter, this dissertation focuses on fundamental performance analysis from an information-theoretic point of view. Time division multiple access (TDMA), an orthogonal frequency sharing scheme, is also investigated for comparison purposes.
    Specifically, delay-sensitive quality of service (QoS) requirements are incorporated by considering the effective capacity in fast fading and the outage capacity in slow fading. The performance metrics in the low and high signal-to-noise ratio (SNR) regimes are obtained in closed form, followed by a detailed performance analysis.
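The effective-capacity metric used in the fast-fading analysis can be estimated by Monte Carlo. A minimal sketch, assuming Rayleigh fading and an illustrative QoS exponent θ (the dissertation's closed forms replace this simulation): EC(θ) = −(1/θ) ln E[exp(−θ·log2(1 + SNR·|h|²))].

```python
import math
import random

def effective_capacity(theta, snr=10.0, n=200_000, seed=1):
    """Monte Carlo effective capacity (bits/s/Hz) over Rayleigh fast fading.

    theta is the QoS delay exponent: theta -> 0 recovers the ergodic
    capacity, while larger theta (stricter delay QoS) lowers the supportable
    arrival rate.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        g = rng.expovariate(1.0)  # |h|^2 ~ Exp(1) for Rayleigh fading
        acc += math.exp(-theta * math.log2(1 + snr * g))
    return -math.log(acc / n) / theta
```

Running it for increasing θ shows the expected monotone decrease from the ergodic capacity toward the delay-limited regime.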

    Machine Learning Empowered Resource Allocation for NOMA Enabled IoT Networks

    The Internet of things (IoT) is one of the main use cases of ultra massive machine type communications (umMTC), which aims to connect large-scale short-packet sensors and devices in sixth-generation (6G) systems. This rapid increase in connected devices requires efficient utilization of limited spectrum resources. To this end, non-orthogonal multiple access (NOMA) is considered a promising solution due to its potential for massive connectivity over the same time/frequency resource block (RB). IoT users have distinct characteristics, such as sporadic transmission, long battery life cycles, minimum data rate requirements, and diverse QoS requirements. In view of these characteristics, IoT networks with NOMA must allocate resources appropriately and efficiently. Moreover, because they lack 1) learning capabilities, 2) scalability, 3) low complexity, and 4) long-term resource optimization, conventional optimization approaches are not suitable for IoT networks with time-varying communication channels and dynamic network access. This thesis provides machine learning (ML) based resource allocation methods to optimize long-term resources for IoT users according to their characteristics and dynamic environment. First, we design a tractable framework based on model-free reinforcement learning (RL) for downlink NOMA IoT networks to allocate resources dynamically. More specifically, we use actor-critic deep reinforcement learning (ACDRL) to improve the sum rate of IoT users. This model can optimize the resource allocation for different users in a dynamic, multi-cell scenario. The state space in the proposed framework is based on the three-dimensional association among multiple IoT users, multiple base stations (BSs), and multiple sub-channels.
    To find the optimal resource allocation that maximizes the network sum rate and to better explore the dynamic environment, this work uses the instantaneous data rate as the reward. The proposed ACDRL algorithm is scalable and handles different network loads. The proposed ACDRL-D and ACDRL-C algorithms outperform DRL and RL in terms of convergence speed and data rate by 23.5% and 30.3%, respectively. Additionally, the proposed scheme provides a better sum rate than orthogonal multiple access (OMA). Second, alongside sum rate maximization, energy efficiency (EE) is a key problem, especially for applications where battery replacement is costly or difficult, for example, sensors with different QoS requirements deployed in radioactive areas, hidden in walls, or embedded in pressurized pipes. For such scenarios, energy cooperation schemes are required. To maximize the EE of different IoT users, i.e., grant-free (GF) and grant-based (GB) users, in a network with uplink NOMA, we propose an RL-based semi-centralized optimization framework. In particular, this work applies the proximal policy optimization (PPO) algorithm for GB users, and to optimize the EE of GF users, a multi-agent deep Q-network is used with the aid of a relay node. Numerical results demonstrate that the suggested algorithm increases the EE of GB users compared to random and fixed power allocation methods. Moreover, the results show superior EE for GF users over the benchmark scheme (convex optimization). Furthermore, we show that an increase in the number of GB users is strongly correlated with the EE of both types of users.
    Third, we develop an efficient model-free backscatter communication (BAC) approach with a simultaneous downlink and uplink NOMA system to jointly optimize the transmit power of downlink IoT users and the reflection coefficients of uplink backscatter devices using a reinforcement learning algorithm, namely soft actor-critic (SAC). With the advantage of entropy regularization, the SAC agent learns to explore and exploit the dynamic BAC-NOMA network efficiently. Numerical results unveil the superiority of the proposed algorithm over the conventional optimization approach in terms of the average sum rate of uplink backscatter devices. We show that the network with multiple downlink users obtains a higher reward over a large number of iterations. Moreover, the proposed algorithm outperforms the benchmark scheme and BAC with OMA in terms of sum rate under different self-interference coefficients, noise levels, QoS requirements, and cell radii.
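The RL loop common to these schemes (observe, act, use the instantaneous rate as reward, update) can be illustrated with a tabular stand-in for the deep agents described above. The per-sub-channel mean rates below are synthetic; the thesis's ACDRL/PPO/SAC agents replace this toy stateless update:

```python
import random

def q_learning_channel_choice(n_channels=3, episodes=2000, seed=0):
    """Toy epsilon-greedy Q-learning for one user picking a sub-channel.

    A stateless (bandit-style) stand-in for the deep RL agents: the agent
    must discover which sub-channel offers the best mean rate from noisy
    instantaneous-rate rewards.
    """
    rng = random.Random(seed)
    mean_rate = [1.0, 2.5, 1.5]  # hypothetical mean rate per sub-channel
    q = [0.0] * n_channels
    alpha, eps = 0.1, 0.1
    for _ in range(episodes):
        # Epsilon-greedy: explore with prob. eps, otherwise exploit best estimate.
        a = rng.randrange(n_channels) if rng.random() < eps else q.index(max(q))
        reward = mean_rate[a] + rng.gauss(0, 0.3)  # noisy instantaneous rate
        q[a] += alpha * (reward - q[a])            # incremental TD-style update
    return q
```

After a few hundred episodes the value estimates rank the sub-channels correctly, and the greedy policy settles on the best one.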

    High capacity multiuser multiantenna communication techniques

    One of the main issues in the development of future wireless communication systems is the multiple access technique used to efficiently share the available spectrum among users. In a rich multipath environment, the spatial dimension can be exploited to serve the increasing number of users and their demands without consuming extra bandwidth and power. It is therefore utilized in multiple-input multiple-output (MIMO) technology to increase spectral efficiency significantly. However, multiuser MIMO (MU-MIMO) systems remain challenging to adopt widely in next generation standards. In this thesis, new techniques are proposed to increase the channel and user capacity and to improve the error performance of MU-MIMO over Rayleigh fading channels. For realistic system design and performance evaluation, channel correlation is considered as one of the main channel impairments due to its severe influence on capacity and reliability. Two simple methods, the generalized successive coloring technique (GSCT) and the generalized iterative coloring technique (GICT), are proposed for accurate generation of correlated Rayleigh fading channels (CRFC). They are designed to overcome the shortcomings of existing methods by avoiding factorization of the desired covariance matrix of the Gaussian samples. The superiority of these techniques is demonstrated by extensive simulations of different practical system scenarios. To mitigate the effects of channel correlations, a novel constellation constrained MU-MIMO (CC-MU-MIMO) scheme is proposed using transmit signal design and maximum likelihood joint detection (MLJD) at the receiver. It is designed to maximize the channel capacity and error performance based on the principle of maximizing the minimum Euclidean distance (dmin) of the composite received signals. Two signal design methods, unequal power allocation (UPA) and rotation constellation (RC), are utilized to resolve the detection ambiguity caused by correlation.
    Extensive analysis and simulations demonstrate the effectiveness of the proposed scheme compared with conventional MU-MIMO. Furthermore, a significant SNR gain is achieved, particularly at moderate to high correlations, which has a direct impact on maintaining high user capacity. A new efficient receive antenna selection (RAS) technique, referred to as phase difference based selection (PDBS), is proposed for single-user and multiuser MIMO systems to maximize the capacity over CRFC. It utilizes the received signal constellation to select the subset of antennas with the highest-dmin constellations, due to their direct impact on capacity and BER performance. A low complexity algorithm is designed by employing the Euclidean norms of the channel matrix rows together with their corresponding phase differences. Capacity analysis and simulation results show that PDBS outperforms norm based selection (NBS) and comes close to optimal selection (OS) for all correlation and SNR values. This technique provides fast RAS to capture most of the gains promised by multiantenna systems over different channel conditions. Finally, a novel group layered MU-MIMO (GL-MU-MIMO) scheme is introduced to exploit the available spectrum for higher user capacity with affordable complexity. It takes advantage of the spatial differences among users and power control at the base station to increase the number of users beyond the available number of RF chains. This is achieved by dividing the users into two groups according to their received power: a high power group (HPG) and a low power group (LPG). Different configurations of low complexity group layered multiuser detection (GL-MUD) and the group power allocation ratio (η) provide a valuable tradeoff between complexity and overall system performance. Furthermore, RAS diversity is incorporated by using NBS and a new selection algorithm called HPG-PDBS to increase the channel capacity and enhance the error performance.
    Extensive analysis and simulations demonstrate the superiority of the proposed scheme compared with conventional MU-MIMO. With an appropriate value of η, it achieves a higher sum rate capacity and a substantial increase in user capacity, up to two-fold, at target BER and SNR values.
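Norm-based selection (NBS), the baseline the thesis compares against, is simple to sketch: rank the receive antennas by channel row norm, keep the strongest subset, and evaluate the resulting MIMO capacity. The channel matrix below is an illustrative i.i.d. Rayleigh draw; the proposed PDBS would rank subsets by constellation dmin instead:

```python
import numpy as np

def nbs_capacity(H, n_sel, snr=10.0):
    """Norm-based receive antenna selection (NBS).

    Keeps the n_sel rows of the channel matrix H (n_rx x n_tx, complex) with
    the largest Euclidean norms, then evaluates the MIMO capacity
    log2 det(I + snr/n_tx * Hs Hs^H) in bits/s/Hz.
    """
    norms = np.linalg.norm(H, axis=1)
    idx = np.argsort(norms)[-n_sel:]  # indices of the strongest antennas
    Hs = H[idx]
    n_tx = H.shape[1]
    gram = Hs @ Hs.conj().T
    cap = np.linalg.det(np.eye(n_sel) + (snr / n_tx) * gram)
    return float(np.log2(cap.real))
```

Selecting more antennas can only raise the capacity, so NBS trades a small capacity loss for far fewer RF chains.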

    Opportunistic Traffic Offloading Mechanisms for Mobile/4G Networks

    In the last few years, a drastic surge in data traffic demand from mobile personal devices (smartphones and tablets) over cellular networks has been observed [1]. Even though a significant improvement in cellular bandwidth provisioning is expected with LTE-Advanced systems, the overall situation is not expected to change significantly. In fact, the diffusion of M2M and IoT devices is expected to grow at an exponential pace (the share of M2M devices is predicted to increase 5x by 2018 [1]), while the capacity of the cellular network is expected to increase only linearly [1]. In order to meet such high demand and increase channel capacity, multiple offloading techniques are currently under investigation, from modifications inside the cellular network architecture, to integration of multiple wireless broadband infrastructures, to exploiting direct communications between mobile devices. All these approaches can be divided into two main classes: developing more sophisticated physical layer technologies (e.g. massive MIMO, higher-order modulation schemes, cooperative multi-point transmission/reception), and offloading part of the traffic from the cellular network to another, complementary network. This thesis contributes to both areas. On the one hand, we investigate the performance of the LTE channel capacity through the development of a unified modelling framework for the MAC-level downlink throughput of a single LTE cell, which caters for wideband CQI feedback schemes, AMC and HARQ protocols as defined in the LTE standard. Furthermore, we also propose a solution, based on reinforcement learning, to improve the LTE Adaptive Modulation and Coding Scheme (MCS). On the other hand, we have proposed and validated offloading mechanisms that are minimally invasive for users' mobile devices, as they make only minimal use of their resources.
    Furthermore, as opposed to most of the literature, we consider the case where requests for content are non-synchronised, i.e. users request content at random points in time.

    Power Domain NOMA for 5G Networks and Beyond

    Thesis in English, 268 p.; thesis in Basque, 274 p. During the last decade, the amount of data carried over wireless networks has grown exponentially. Several reasons have led to this situation, but the most influential are the massive deployment of devices connected to the network and the constant evolution of the services offered. In this context, 5G targets the correct implementation of every application integrated into its use cases. Nevertheless, the biggest challenge in making the ITU-R-defined use cases (eMBB, URLLC and mMTC) a reality is the improvement of spectral efficiency. Therefore, in this thesis, a combination of two mechanisms is proposed to improve spectral efficiency: Non-Orthogonal Multiple Access (NOMA) techniques and Radio Resource Management (RRM) schemes. Specifically, NOMA transmits several layered data flows simultaneously, so that the whole bandwidth is used throughout the entire time to deliver more than one service at once. RRM schemes then provide efficient management and distribution of radio resources among network users. Although NOMA techniques and RRM schemes can be very advantageous in all use cases, this thesis focuses on making contributions in eMBB and URLLC environments and on proposing solutions for communications expected to be relevant in 6G.