
    Recovery Failure Probability of Power-based NOMA on the Uplink of a 5G Cell for an Arbitrary Number of Superimposed Signals

    This work puts forth an analytical approach to evaluate the recovery failure probability of power-based NOMA on the uplink of a 5G cell, recovery failure being defined as the event in which the receiver is unable to decode even one of the n simultaneously received signals. In the examined scenario, Successive Interference Cancellation (SIC) is considered and an arbitrary number of superimposed signals is present. For the Rayleigh fading case, the recovery failure probability is provided in closed form, clearly outlining its dependence on the signal-to-noise ratio of the simultaneously transmitting users, as well as on their distance from the receiver.
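    As a rough companion to the analysis, the following Python sketch estimates the recovery failure probability by Monte Carlo for n SIC-decoded uplink signals under Rayleigh fading; the equal average SNR, unit-mean fading, and rate threshold are illustrative assumptions, not the paper's model (which also accounts for per-user distances).

        import numpy as np

        rng = np.random.default_rng(0)

        def recovery_failure_prob(n=3, snr_db=10.0, rate=1.0, trials=200_000):
            """Monte Carlo estimate of P(SIC fails to decode even one signal).

            Assumptions (illustrative, not from the paper): unit-mean Rayleigh
            fading, equal average transmit SNR for all users.
            """
            snr = 10 ** (snr_db / 10)
            thr = 2 ** rate - 1                          # SINR threshold for the target rate
            h = rng.exponential(1.0, size=(trials, n))   # |h|^2 per user
            p = np.sort(snr * h, axis=1)[:, ::-1]        # received powers, strongest first
            # SIC decodes the strongest signal first, treating the rest as interference.
            interference = p.sum(axis=1) - p[:, 0]
            sinr_first = p[:, 0] / (1.0 + interference)
            # Failure to decode even one signal <=> the first SIC stage fails.
            return np.mean(sinr_first < thr)

        for n in (2, 3, 4):
            print(n, recovery_failure_prob(n=n))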

    Compressive Sensing-Based Grant-Free Massive Access for 6G Massive Communication

    The advent of the sixth generation (6G) of wireless communications has created the need to connect vast quantities of heterogeneous wireless devices, which requires advanced system capabilities far beyond existing network architectures. In particular, such massive communication has been recognized as a prime driver that can empower the 6G vision of future ubiquitous connectivity, supporting the Internet of Human-Machine-Things, for which massive access is critical. This paper surveys the most recent advances toward massive access in both the academic and industry communities, focusing primarily on the promising compressive sensing-based grant-free massive access paradigm. We first specify the limitations of existing random access schemes and reveal that the practical implementation of massive communication relies on a dramatically different random access paradigm from the current ones, which are mainly designed for human-centric communications. Then, a compressive sensing-based grant-free massive access roadmap is presented, where the evolutions from single-antenna to large-scale antenna array-based base stations, from single-station to cooperative massive multiple-input multiple-output systems, and from unsourced to sourced random access scenarios are detailed. Finally, we discuss the key challenges and open issues to shed light on potential future research directions of grant-free massive access. Comment: Accepted by IEEE IoT Journal
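    As a toy illustration of the sparse-recovery idea behind grant-free massive access (not the survey's specific algorithms), the sketch below detects the small active subset of devices from their superimposed non-orthogonal pilots using orthogonal matching pursuit; the pilot length, device count, and unit channel gains are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def omp(A, y, k):
            """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x + w."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.conj().T @ residual)))  # most correlated pilot
                support.append(j)
                sub = A[:, support]
                coef, *_ = np.linalg.lstsq(sub, y, rcond=None)     # refit on chosen support
                residual = y - sub @ coef
            return sorted(support)

        N, L, K = 200, 40, 5          # devices, pilot length, active devices
        A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
        active = rng.choice(N, K, replace=False)
        x = np.zeros(N, dtype=complex)
        x[active] = 1.0               # unit effective channel gains for simplicity
        y = A @ x + 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
        print("true  :", sorted(active))
        print("found :", omp(A, y, K))

    The key point the example makes is that with sparse activity, far fewer pilot dimensions than devices (L = 40 versus N = 200 here) can suffice to identify who is transmitting.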

    Precoded Chebyshev-NLMS based pre-distorter for nonlinear LED compensation in NOMA-VLC

    Visible light communication (VLC) is one of the main technologies driving future 5G communication systems due to its ability to support high data rates with low power consumption, thereby facilitating high-speed green communications. To further increase the capacity of VLC systems, a technique called non-orthogonal multiple access (NOMA) has been suggested to cater to the increasing demand for bandwidth, whereby users' signals are superimposed prior to transmission and detected at each user equipment using successive interference cancellation (SIC). Some recent results on NOMA exist which greatly enhance the achievable capacity as compared to orthogonal multiple access techniques. However, one of the performance-limiting factors affecting VLC systems is the nonlinear characteristic of a light emitting diode (LED). This paper considers the nonlinear LED characteristics in the design of a pre-distorter for cognitive radio-inspired NOMA in VLC, and proposes singular value decomposition-based Chebyshev precoding to improve the performance of nonlinear multiple-input multiple-output NOMA-VLC. A novel and generalized power allocation strategy is also derived in this work, which is valid even in scenarios where users experience similar channels. Additionally, analytical upper bounds for the bit error rate of the proposed detector are derived for square M-ary quadrature amplitude modulation (M-QAM). Comment: R. Mitra and V. Bhatia are with Indian Institute of Technology Indore, Indore-453552, India, Email: [email protected], [email protected]. This work was submitted to IEEE Transactions on Communications on October 26, 2016, decisioned on March 3, 2017, and revised on April 25, 2017, and is currently under review in IEEE Transactions on Communications.
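    The following sketch conveys the general flavor of a Chebyshev-basis NLMS pre-distorter for a memoryless LED nonlinearity; the tanh LED model, the indirect-learning setup, and the step size are assumptions for illustration only, not the authors' precoded MIMO design.

        import numpy as np

        def cheb_features(x, order=5):
            """Chebyshev polynomial basis T_0..T_order evaluated at x in [-1, 1]."""
            feats = [np.ones_like(x), x]
            for _ in range(2, order + 1):
                feats.append(2 * x * feats[-1] - feats[-2])  # T_n = 2x T_{n-1} - T_{n-2}
            return np.stack(feats, axis=-1)

        def led(v):
            """Toy memoryless LED nonlinearity (illustrative, not a measured model)."""
            return np.tanh(1.5 * v)

        rng = np.random.default_rng(2)
        order, mu, eps = 5, 0.5, 1e-6
        w = np.zeros(order + 1)

        # Indirect learning: fit a Chebyshev-NLMS post-inverse of the LED, then
        # copy it in front of the LED as the pre-distorter.
        for _ in range(20_000):
            d = rng.uniform(-0.8, 0.8)             # desired drive level
            phi = cheb_features(led(d), order)     # features of the distorted output
            e = d - phi @ w
            w += mu * e * phi / (eps + phi @ phi)  # NLMS update

        test = np.linspace(-0.8, 0.8, 5)
        # Pre-distorted chain led(pre(test)) should be approximately linear in test.
        print(led(cheb_features(test, order) @ w))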

    TRANSMISSION PERFORMANCE OPTIMIZATION IN FIBER-WIRELESS ACCESS NETWORKS USING MACHINE LEARNING TECHNIQUES

    The objective of this dissertation is to enhance transmission performance in fiber-wireless access networks by mitigating the key system limitations of both analog radio over fiber (A-RoF) and digital radio over fiber (D-RoF), with machine learning techniques systematically implemented. The first thrust is improving the spectral efficiency of the optical transmission in D-RoF to support the delivery of the massive number of bits from digitized radio signals. Advanced digital modulation schemes such as PAM8, discrete multi-tone (DMT), and probabilistic shaping are investigated and implemented, although they may introduce severe nonlinear impairments on the low-cost optical intensity-modulation-direct-detection (IMDD) based D-RoF link with its limited dynamic range. An efficient deep neural network (DNN) equalizer/decoder that mitigates this nonlinear degradation is therefore designed and experimentally verified. In addition, we design a neural network based digital predistortion (DPD) to mitigate the nonlinear impairments of the whole link, which can be integrated into a transmitter with more processing resources and power than a receiver in an access network. Another thrust is to proactively mitigate complex interference in radio access networks (RANs). The composition of signals from different licensed systems and unlicensed transmitters creates an unprecedentedly complex interference environment that cannot be handled by conventional pre-defined network planning. In response to these challenges, a proactive interference avoidance scheme using reinforcement learning is proposed and experimentally verified on a mmWave-over-fiber platform. Beyond external sources, interference may arise internally from a local transmitter as self-interference (SI) that occupies the same time and frequency block as the signal of interest (SOI). Different from the conventional subtraction-based SI cancellation scheme, we design an efficient dual-input DNN (DI-DNN) based canceller which simultaneously cancels the SI and recovers the SOI.
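    As a minimal, hypothetical stand-in for the thesis's DNN equalizer, the sketch below trains a small scikit-learn MLP to undo a toy IMDD-like link with memory and square-law distortion; the channel model, PAM4 alphabet (the thesis targets PAM8/DMT), and network size are all illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        def imdd_link(x):
            """Toy IMDD-like nonlinear channel with short memory (illustrative only)."""
            y = np.convolve(x, [0.9, 0.3, 0.1], mode="same")  # bandwidth-limited memory
            return y + 0.15 * y**2 + 0.02 * rng.standard_normal(len(y))  # square-law + noise

        def windows(y, taps=7):
            """Sliding windows of the received samples used as equalizer input."""
            pad = np.pad(y, taps // 2)
            return np.stack([pad[i:i + taps] for i in range(len(y))])

        levels = np.array([-3, -1, 1, 3]) / 3.0           # PAM4 symbols for brevity
        x = levels[rng.integers(0, 4, 50_000)]
        y = imdd_link(x)

        eq = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=50, random_state=0)
        eq.fit(windows(y[:40_000]), x[:40_000])           # train on a pilot block
        x_hat = eq.predict(windows(y[40_000:]))
        ser = np.mean(levels[np.abs(x_hat[:, None] - levels).argmin(1)] != x[40_000:])
        print("symbol error rate:", ser)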

    Achievable Diversity Order of HARQ-Aided Downlink NOMA Systems

    The combination of non-orthogonal multiple access (NOMA) and hybrid automatic repeat request (HARQ) is capable of realizing ultra-reliability, high throughput, and many concurrent connections, particularly for emerging communication systems. This paper focuses on characterizing the asymptotic scaling law of the outage probability of HARQ-aided NOMA systems with respect to the transmit power, i.e., the diversity order. The analysis of diversity order is carried out for three basic types of HARQ-aided downlink NOMA systems: Type I HARQ, HARQ with chase combining (HARQ-CC), and HARQ with incremental redundancy (HARQ-IR). The diversity orders of the three HARQ-aided downlink NOMA systems are derived in closed form, where an integration domain partition trick is developed to obtain bounds on the outage probability, especially for HARQ-CC and HARQ-IR-aided NOMA systems. The analytical results show that the diversity order is a decreasing step function of the transmission rate, and that full time diversity can only be achieved under a sufficiently low transmission rate. It is also revealed that HARQ-IR-aided NOMA systems have the largest diversity order, followed by HARQ-CC-aided and then Type I HARQ-aided NOMA systems. Additionally, the users' diversity orders follow a descending order according to their respective average channel gains. Furthermore, we expand the discussion to the cases of power-efficient transmissions and imperfect channel state information (CSI). Monte Carlo simulations finally confirm our analysis.
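    A quick way to see the diversity-order notion empirically: the Monte Carlo sketch below estimates the far user's outage probability in a two-user HARQ-CC downlink NOMA toy model, whose slope on a log-outage versus SNR(dB) plot approximates the diversity order; the power split, target rate, and unit-mean Rayleigh fading are illustrative assumptions rather than the paper's setup.

        import numpy as np

        rng = np.random.default_rng(4)

        def outage_cc_noma(snr_db, K=2, R=1.0, a_far=0.8, trials=400_000):
            """Outage of the far user in 2-user HARQ-CC downlink NOMA (toy model).

            Chase combining: per-round SINRs add, and outage occurs if the
            combined rate after K rounds is still below the target R. The power
            split a_far and unit-mean Rayleigh fading are assumptions.
            """
            snr = 10 ** (snr_db / 10)
            g = rng.exponential(1.0, size=(trials, K))   # per-round channel gains
            sinr = a_far * snr * g / ((1 - a_far) * snr * g + 1.0)
            return np.mean(np.log2(1.0 + sinr.sum(axis=1)) < R)

        # Outage should drop roughly K orders of magnitude per 10x SNR increase,
        # reflecting a diversity order of about K at this (low) rate.
        for snr_db in (10, 20, 30):
            print(snr_db, outage_cc_noma(snr_db))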

    Congestion Control for Massive Machine-Type Communications: Distributed and Learning-Based Approaches

    The Internet of things (IoT) is going to shape the future of wireless communications by allowing seamless connections among a wide range of everyday objects. Machine-to-machine (M2M) communication is known to be the enabling technology for the development of the IoT. With M2M, devices are allowed to interact and exchange data with little or no human intervention. Recently, M2M communication, also referred to as machine-type communication (MTC), has received increased attention due to its potential to support diverse applications including eHealth, industrial automation, intelligent transportation systems, and smart grids. M2M communication is known to have specific features and requirements that differ from those of traditional human-to-human (H2H) communication. As specified by the Third Generation Partnership Project (3GPP), MTC devices are inexpensive, low-power, and mostly low-mobility devices. Furthermore, MTC devices are usually characterized by infrequent transmissions, small amounts of data, and mainly uplink traffic. Most importantly, the number of MTC devices is expected to far surpass that of H2H devices; smart cities are an example of such mass-scale deployment. These features impose various challenges related to efficient energy management, enhanced coverage, and diverse quality of service (QoS) provisioning, among others. The diverse applications of M2M are going to lead to exponential growth in M2M traffic, and with massive M2M deployment, an enormous number of devices is expected to access the wireless network concurrently. Hence, network congestion is likely to occur. Cellular networks have been recognized as excellent candidates for M2M support. Indeed, cellular networks are mature, well-established networks with ubiquitous coverage and reliability, which allows cost-effective deployment of M2M communications. However, cellular networks were originally designed for human-centric services with high-cost devices and ever-increasing rate requirements. Additionally, the conventional random access (RA) mechanism used in Long Term Evolution-Advanced (LTE-A) networks lacks the capability to handle the enormous number of access attempts expected from massive MTC. In particular, this RA technique acts as a performance bottleneck due to frequent collisions that lead to excessive delay and resource wastage. Also, the lengthy handshaking process of the conventional RA technique results in very expensive signaling, specifically for M2M devices with small payloads. Therefore, designing efficient medium access schemes is critical for the survival of M2M networks. In this thesis, we study the uplink access of M2M devices with a focus on overload control and congestion handling. In this regard, we provide two different access techniques, keeping in mind the distinct features and requirements of MTC, including massive connectivity, latency reduction, and energy management. In fact, full information gathering is known to be impractical for such massive networks with a tremendous number of devices. Hence, we preserve low complexity and limited information exchange among the different network entities by introducing distributed techniques. Furthermore, machine learning is employed to enhance performance with no or limited information exchange at the decision maker. The proposed techniques are assessed via extensive simulations as well as rigorous analytical frameworks.
First, we propose an efficient distributed overload control algorithm for M2M with massive access, referred to as M2M-OSA. The proposed algorithm can efficiently allocate the available network resources to a massive number of devices within a relatively small and bounded contention time and with reduced overhead. By resolving collisions, the proposed algorithm is capable of achieving full resource utilization along with reduced average access delay and energy saving. For Beta-distributed traffic, we provide an analytical evaluation of the performance of the proposed algorithm in terms of access delay, total service time, energy consumption, and blocking probability. This performance assessment accounts for various scenarios, including slightly and seriously congested cases, in addition to finite and infinite retransmission limits for the devices. Moreover, we discuss the non-ideal situations that could be encountered in real-life deployment of the proposed algorithm, supported by possible solutions. For further energy saving, we introduce a modified version of M2M-OSA with a traffic regulation mechanism. In the second part of the thesis, we adopt a promising alternative to the conventional random access mechanism, namely the fast uplink grant. The fast uplink grant was first proposed by the 3GPP for latency reduction; it allows the base station (BS) to directly schedule MTC devices (MTDs) without receiving any scheduling requests. In our work, to handle the major challenges associated with the fast uplink grant, namely active set prediction and optimal scheduling, both non-orthogonal multiple access (NOMA) and learning techniques are utilized. In particular, we propose a two-stage NOMA-based fast uplink grant scheme that first employs multi-armed bandit (MAB) learning to schedule the fast-grant devices with no prior information about their QoS requirements or channel conditions at the BS. Afterwards, NOMA facilitates grant sharing, where pairing is done in a distributed manner to reduce signaling overhead. In the proposed scheme, NOMA plays a major role in decoupling the two major challenges of fast-grant schemes by permitting pairing with only active MTDs. Consequently, the wastage of resources due to traffic prediction errors can be significantly reduced. We devise an abstraction model for the source traffic predictor needed for the fast grant such that the prediction error can be evaluated. Accordingly, the performance of the proposed scheme is analyzed in terms of average resource wastage and outage probability. The simulation results show the effectiveness of the proposed method in saving scarce resources while verifying the accuracy of the analysis. In addition, the ability of the proposed scheme to select high-quality MTDs under strict latency constraints is demonstrated.
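    To illustrate the MAB component of such a fast uplink grant scheme in isolation (a heavy simplification of the proposed two-stage design), the sketch below runs a standard UCB1 bandit in which each arm is an MTD and a grant earns a reward only if the scheduled device was actually active; the activity probabilities are hypothetical and unknown to the scheduler.

        import numpy as np

        rng = np.random.default_rng(5)

        n_mtds, rounds = 8, 5000
        p_active = rng.uniform(0.1, 0.9, n_mtds)   # hidden per-device activity rates

        counts = np.zeros(n_mtds)
        rewards = np.zeros(n_mtds)
        wasted = 0
        for t in range(1, rounds + 1):
            if t <= n_mtds:
                arm = t - 1                        # initialization: play each arm once
            else:
                ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
                arm = int(np.argmax(ucb))          # optimism in the face of uncertainty
            r = float(rng.random() < p_active[arm])  # grant used iff device is active
            counts[arm] += 1
            rewards[arm] += r
            wasted += 1 - r

        print("grant wastage rate:", wasted / rounds)
        print("most-scheduled MTD:", int(np.argmax(counts)), "true best:", int(np.argmax(p_active)))

    In the thesis's setting the reward would also encode QoS and channel quality, but the same explore-exploit logic governs which MTDs receive grants without scheduling requests.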

    Internet of Things and Sensors Networks in 5G Wireless Communications

    This book is a printed edition of the Special Issue "Internet of Things and Sensors Networks in 5G Wireless Communications" that was published in Sensors.

    Internet of Things and Sensors Networks in 5G Wireless Communications

    The Internet of Things (IoT) has attracted much attention from society, industry, and academia as a promising technology that can enhance day-to-day activities, enable the creation of new business models, products, and services, and serve as a broad source of research topics and ideas. A future digital society is envisioned, composed of numerous wirelessly connected sensors and devices. Driven by huge demand, massive IoT (mIoT), or massive machine-type communication (mMTC), has been identified as one of the three main communication scenarios for 5G. In addition to connectivity, computing, storage, and data management are also long-standing issues for low-cost devices and sensors. The book is a collection of outstanding technical research and industrial papers covering new research results, with a wide range of features within the 5G-and-beyond framework. It provides a range of discussions of the major research challenges and achievements within this topic.
