
    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices, such as smartphones and tablets, demands additional network capacity. At the same time, the Internet of Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Moreover, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (the 3.5 GHz, 5 GHz, and mmWave bands) is a potential solution; to address latency, drastic changes in the network architecture are required. Fifth-generation (5G) cellular networks will embrace both spectrum sharing and network architecture modifications to deliver throughput enhancement, massive connectivity, and low latency.

To utilize the unlicensed spectrum, we first propose a fixed-duty-cycle coexistence scheme for LTE and WiFi, in which the duty cycle of LTE transmission is adjusted based on the amount of data. In the second approach, a multi-armed-bandit-learning-based coexistence scheme for LTE and WiFi is developed, in which the transmission duty cycle and downlink power are adapted through exploration and exploitation; this approach improves the aggregated capacity by 33% while also enhancing cell-edge throughput and energy efficiency. We also investigate the performance of LTE and ZigBee coexistence in a smart grid scenario.

For low latency, we summarize the existing work in the context of 5G networks into three domains: core, radio, and caching networks. We then identify the fundamental constraints on achieving low latency, followed by a general overview of exemplary 5G networks. In addition, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid; this approach ensures low-latency, reliable data communication for stationary devices.

To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication using the k-nearest-neighbors algorithm. In a second approach, node authentication using a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing packet overhead compared to existing approaches.
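
A minimal sketch of the multi-armed-bandit adaptation described above, in Python: an epsilon-greedy bandit picks a (duty cycle, downlink power) arm each round and updates its running reward estimate. The arm grid, epsilon, and the reward_fn placeholder are illustrative assumptions standing in for a measured aggregated-capacity reward, not the thesis's actual formulation.

```python
# Epsilon-greedy multi-armed bandit sketch for adapting the LTE duty cycle
# and downlink power during LTE/WiFi coexistence (illustrative assumptions).
import random

duty_cycles = [0.2, 0.4, 0.6, 0.8]           # fraction of airtime LTE transmits
power_levels = [20, 30, 40]                  # downlink power in dBm (example values)
arms = [(d, p) for d in duty_cycles for p in power_levels]

counts = {a: 0 for a in arms}                # times each arm was played
values = {a: 0.0 for a in arms}              # running mean reward per arm

def reward_fn(duty, power):
    """Placeholder for the measured aggregated LTE + WiFi capacity."""
    wifi_share = 1.0 - duty                  # WiFi gets the remaining airtime
    return duty * power / 40 + wifi_share    # toy proxy, not a real channel model

epsilon = 0.1
for t in range(10_000):
    if random.random() < epsilon:            # explore a random arm
        arm = random.choice(arms)
    else:                                    # exploit the best mean so far
        arm = max(arms, key=lambda a: values[a])
    r = reward_fn(*arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]   # incremental mean update

best = max(arms, key=lambda a: values[a])
print(f"selected duty cycle {best[0]:.1f}, power {best[1]} dBm")
```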

    A Tutorial on Nonorthogonal Multiple Access for 5G and Beyond

    Today's wireless networks allocate radio resources to users based on the orthogonal multiple access (OMA) principle. However, as the number of users increases, OMA-based approaches may not meet stringent emerging requirements, including very high spectral efficiency, very low latency, and massive device connectivity. The nonorthogonal multiple access (NOMA) principle emerges as a solution that improves spectral efficiency while allowing some degree of multiple access interference at the receivers. In this tutorial-style paper, we aim to provide a unified model for NOMA, covering both uplink and downlink transmissions, along with extensions to multiple-input multiple-output (MIMO) and cooperative communication scenarios. Through numerical examples, we compare the performance of OMA and NOMA networks. Implementation aspects and open issues are also detailed. (25 pages, 10 figures)
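
As a concrete numerical illustration of the downlink comparison discussed in the paper, the sketch below computes the achievable rates of a two-user power-domain NOMA system with successive interference cancellation (SIC) against a TDMA-style OMA baseline; the channel gains, power split, and noise power are illustrative assumptions, not values from the paper.

```python
# Two-user downlink power-domain NOMA vs OMA rate comparison (toy values).
import numpy as np

P = 1.0                        # total downlink transmit power (normalized)
N0 = 1.0                       # noise power (normalized)
g_weak, g_strong = 1.0, 10.0   # channel power gains |h|^2 (weak/strong user)
a_weak, a_strong = 0.8, 0.2    # NOMA power split: more power to the weak user

# NOMA: the weak user decodes its signal treating the strong user's as noise;
# the strong user removes the weak user's signal via SIC, then decodes its own.
R_weak_noma = np.log2(1 + a_weak * P * g_weak / (a_strong * P * g_weak + N0))
R_strong_noma = np.log2(1 + a_strong * P * g_strong / N0)

# OMA baseline (TDMA): each user gets half the time with full power.
R_weak_oma = 0.5 * np.log2(1 + P * g_weak / N0)
R_strong_oma = 0.5 * np.log2(1 + P * g_strong / N0)

print(f"NOMA: weak {R_weak_noma:.2f}, strong {R_strong_noma:.2f} bit/s/Hz")
print(f"OMA : weak {R_weak_oma:.2f}, strong {R_strong_oma:.2f} bit/s/Hz")
```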

    Multiple Access for Massive Machine Type Communications

    The internet we have known thus far has been an internet of people, as it has connected people with one another. However, these connections are forecasted to occupy only a minuscule fraction of future communications. The internet of tomorrow is, indeed, the internet of things. The Internet of Things (IoT) promises to improve all aspects of life by connecting everything to everything, and an enormous amount of effort is being exerted to turn this vision into a reality. Sensors and actuators will communicate and operate in an automated fashion with no or minimal human intervention. In the current literature, these sensors and actuators are referred to as machines, and the communication amongst them is referred to as Machine-to-Machine (M2M) communication or Machine-Type Communication (MTC). As IoT requires a seamless mode of communication that is available anywhere and anytime, wireless communications will be one of the key enabling technologies for IoT.

In existing wireless cellular networks, users with data to transmit first need to request channel access. All access requests are processed by a central unit that, in return, either grants or denies the request. Once granted access, users' data transmissions are non-overlapping and interference-free. However, as the number of IoT devices is forecasted to be on the order of hundreds of millions, if not billions, in the near future, the access channels of existing cellular networks are predicted to suffer from severe congestion and, thus, to incur unpredictable latencies. In random access, by contrast, users with data to transmit access the channel in an uncoordinated and probabilistic fashion, requiring little or no signalling overhead. This reduction in overhead, however, comes at the expense of reliability and efficiency due to the interference caused by contending users. In most existing random access schemes, packets are lost when they experience interference from other packets transmitted over the same resources, and most such schemes are best-effort designs with almost no Quality of Service (QoS) guarantees. In this thesis, we investigate the performance of different random access schemes in different settings to resolve the problem of massive access by IoT devices with diverse QoS guarantees.

First, we take a step towards re-designing existing random access protocols to be more practical and more efficient. For many years, researchers have adopted the collision channel model in random access schemes: a collision is the event of two or more users transmitting over the same time-frequency resources, in which case all the involved data is lost and the users need to retransmit. In practice, however, data can be recovered even in the presence of interference, provided that the signal power is sufficiently larger than the combined power of the noise and the interference. Based on this, we re-define a collision as the event of the interference power exceeding a pre-determined threshold. We propose a new analytical framework to compute the probability of packet recovery failure, inspired by error-control codes on graphs, and we optimize the random access parameters using evolution strategies. Our results show a significant improvement in performance in terms of reliability and efficiency.
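
The redefined collision event can be illustrated with a short Monte-Carlo sketch: a transmitted packet is recovered whenever the aggregate interference power stays below a threshold, rather than failing whenever any other user transmits. The user count, transmit probability, Rayleigh fading model, and threshold below are illustrative assumptions, not the thesis's parameters.

```python
# Monte-Carlo sketch of the threshold-based collision model in slotted
# random access (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
N, p_tx = 50, 0.05           # contending users, per-slot transmit probability
gamma = 0.5                  # interference-power threshold (noise-normalized)
slots = 100_000

recovered = attempts = 0
for _ in range(slots):
    tx = rng.random(N) < p_tx                     # who transmits this slot
    powers = rng.exponential(1.0, N) * tx         # Rayleigh-faded rx powers
    for i in np.flatnonzero(tx):
        attempts += 1
        interference = powers.sum() - powers[i]   # everyone else's power
        if interference < gamma:                  # redefined collision event
            recovered += 1

print(f"packet recovery rate: {recovered / attempts:.3f}")
```
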
Next, we focus on supporting heterogeneous IoT applications and accommodating their diverse latency and reliability requirements in a unified access scheme. We propose a multi-stage approach in which each group of applications transmits in different stages with different probabilities. We propose a new analytical framework to compute the probability of packet recovery failure for each group in each stage, and we again optimize the random access parameters using evolution strategies. Our results show that the proposed scheme can outperform the coordinated access schemes of existing cellular networks when the number of users is very large.

Finally, we investigate random non-orthogonal multiple access schemes, which are known to achieve higher spectral efficiency and to support higher loads. In our proposed scheme, user detection and channel estimation are carried out via pilot sequences that are transmitted simultaneously with the user's data. Here, a collision is defined as the event of two or more users selecting the same pilot sequence, and all collisions are regarded as interference to the remaining users. We first derive the distribution of the interference power, then use it to derive simple yet accurate analytical bounds on the throughput and outage probability of the proposed scheme. We consider both joint decoding and successive interference cancellation, and we show that the proposed scheme is especially useful for short-packet transmission.
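
A small simulation can make the pilot-collision event concrete: each active user draws one of M pilot sequences uniformly at random and is collision-free only if no other user draws the same pilot. The values of M and K are illustrative, and the closed-form check is the elementary per-user expression rather than the thesis's bounds.

```python
# Pilot-collision probability in random NOMA access (illustrative values).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
M, K, trials = 64, 30, 20_000     # pilots, active users, Monte-Carlo runs

singletons = 0
for _ in range(trials):
    picks = rng.integers(0, M, K)           # each user draws a pilot uniformly
    counts = Counter(picks.tolist())
    singletons += sum(1 for c in counts.values() if c == 1)

sim = singletons / (trials * K)
analytic = (1 - 1 / M) ** (K - 1)           # P(no other user picks my pilot)
print(f"collision-free probability: sim {sim:.3f}, analytic {analytic:.3f}")
```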

    Power-Domain NOMA for 5G Networks and Beyond

    Thesis in English, 268 pp.; thesis in Basque, 274 pp. During the last decade, the amount of data carried over wireless networks has grown exponentially. Several reasons have led to this situation, but the most influential ones are the massive deployment of devices connected to the network and the constant evolution of the services offered. In this context, 5G targets the proper support of every application integrated into its use cases. Nevertheless, the biggest challenge in making the ITU-R-defined use cases (eMBB, URLLC, and mMTC) a reality is improving spectral efficiency. Therefore, this thesis proposes a combination of two mechanisms to improve spectral efficiency: Non-Orthogonal Multiple Access (NOMA) techniques and Radio Resource Management (RRM) schemes. Specifically, NOMA simultaneously transmits several layered data flows, so that the whole bandwidth is used at all times to deliver more than one service at once, while RRM schemes provide efficient management and distribution of radio resources among network users. Although NOMA techniques and RRM schemes can be advantageous in all use cases, this thesis focuses on contributions in eMBB and URLLC environments and on proposing solutions for communications expected to be relevant in 6G.
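
As one classic example of the kind of RRM scheme referred to above, the sketch below implements a proportional-fair scheduler, which grants each transmission interval to the user maximizing the ratio of instantaneous to average rate; the user rates and parameters are toy assumptions, not the thesis's proposals.

```python
# Proportional-fair scheduler sketch, a classic RRM policy (toy assumptions).
import numpy as np

rng = np.random.default_rng(2)
n_users, ttis, beta = 4, 10_000, 0.01        # beta: averaging forgetting factor
avg = np.ones(n_users)                       # running average throughput

served = np.zeros(n_users)
for _ in range(ttis):
    inst = rng.exponential([1.0, 2.0, 4.0, 8.0])   # per-user instantaneous rates
    u = int(np.argmax(inst / avg))                  # PF metric picks the winner
    served[u] += inst[u]
    scheduled = np.zeros(n_users)
    scheduled[u] = inst[u]
    avg = (1 - beta) * avg + beta * scheduled       # update running averages

print("share of throughput per user:", np.round(served / served.sum(), 2))
```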

    Learning-based communication system design – autoencoder for (differential) block coded modulation designs and path loss predictions

    Shannon's channel coding theorem establishes the existence of long random codes that can make the error probability arbitrarily small. Recently, advanced error-correcting codes such as turbo and low-density parity-check codes have come close to the theoretical Shannon limit for binary-input additive white Gaussian noise channels. However, designing optimal high-rate short-block codes with automatic bit-labeling for various wireless networks remains an unsolved problem. Deep-learning-based autoencoders (AEs) have emerged as a potential near-optimal solution for designing wireless communication systems. We take a holistic approach that jointly optimizes all components of the communication network by performing data-driven end-to-end learning of a neural-network-based transmitter and receiver. Specifically, to tackle fading channels, we show that AE frameworks can produce near-optimal block-coded-modulation (BCM) and differential BCM (d-BCM) designs both with and without channel state information. Moreover, we focus on AE-based design of high-rate short block codes with automatic bit-labeling that outperform conventional designs by larger margins as the rate R increases, and we investigate BCM and d-BCM from an information-theoretic perspective.

With the advent of internet-of-things (IoT) networks and the widespread use of small devices, available bandwidth is limited, so novel techniques are needed, such as full-duplex (FD) transmission and reception at the base station for full utilization of the spectrum, and non-orthogonal multiple access (NOMA) at the user end for serving multiple IoT devices while fulfilling their quality-of-service requirements. Furthermore, the deployment of relay nodes will play a pivotal role in improving network coverage, reliability, and spectral efficiency in future 5G networks. We therefore design and develop novel end-to-end-learning-based AE frameworks for BCM and d-BCM in various scenarios, including amplify-and-forward and decode-and-forward relaying networks, FD relaying networks, and multi-user downlink networks. We focus on interpretability, analyzing AE-based BCM and d-BCM from an information-theoretic perspective in terms of the AE's estimated mutual information, convergence, loss optimization, and training principles. We also determine the distinct properties of AE-based (differential) coded-modulation designs in higher-dimensional space and study the reproducibility of the trained AE frameworks.

Meanwhile, the large bandwidth and worldwide spectrum availability at mm-wave bands show great potential for 5G and beyond, but high path loss (PL) and significant scattering/absorption losses make signal propagation challenging. Highly accurate PL prediction is fundamental for mm-wave network planning and optimization, whereas existing methods such as slope-intercept models and ray tracing fall short in capturing the large street-by-street variation seen in urban cities. We exploit the compression capabilities of AE frameworks for mm-wave PL prediction: we employ extensive 28 GHz measurements from Manhattan street canyons, model the street clutter via a LiDAR point-cloud dataset and the 3D buildings via a mesh-grid building dataset, aggressively compress the 3D-building shape information using convolutional-AE frameworks to reduce overfitting, and propose a machine-learning (ML)-based PL prediction model for mm-wave propagation.
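
The end-to-end AE principle described in this abstract can be sketched in a few lines: a neural encoder maps each of 2^k messages to n channel uses under an average power constraint, an AWGN channel perturbs the codeword, and a neural decoder is trained jointly with the encoder via a cross-entropy loss. The architecture sizes, SNR convention, and training settings are illustrative assumptions, not the thesis's models.

```python
# Minimal end-to-end autoencoder for an (n, k) block-coded-modulation design
# over AWGN (illustrative assumptions throughout).
import torch
import torch.nn as nn

k, n = 4, 8                       # k info bits -> 2^k messages, n channel uses
M = 2 ** k
snr_db = 6.0
sigma = (10 ** (-snr_db / 10) / 2) ** 0.5   # per-dimension noise std (toy convention)

encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    x = encoder(nn.functional.one_hot(msgs, M).float())
    x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()   # average power constraint
    y = x + sigma * torch.randn_like(x)                  # AWGN channel
    loss = loss_fn(decoder(y), msgs)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    msgs = torch.randint(0, M, (10_000,))
    x = encoder(nn.functional.one_hot(msgs, M).float())
    x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
    est = decoder(x + sigma * torch.randn_like(x)).argmax(dim=1)
    print(f"block error rate: {(est != msgs).float().mean():.4f}")
```

For the path-loss part, the slope-intercept baseline that the abstract says falls short can be written and fitted in a few lines; the synthetic measurements below, with a free-space-like 28 GHz intercept and log-normal shadowing, are illustrative assumptions.

```python
# Slope-intercept (alpha-beta) path-loss baseline fitted by least squares:
# PL(d) = beta + 10 * alpha * log10(d) + shadowing (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
d = rng.uniform(10, 500, 200)                    # link distances in meters
alpha_true, beta_true, shadow_db = 3.0, 61.4, 8.0
pl = beta_true + 10 * alpha_true * np.log10(d) + rng.normal(0, shadow_db, 200)

# Least-squares fit of slope and intercept from the measurements.
A = np.column_stack([10 * np.log10(d), np.ones_like(d)])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)
rmse = np.sqrt(np.mean((A @ np.array([alpha_hat, beta_hat]) - pl) ** 2))
print(f"alpha {alpha_hat:.2f}, beta {beta_hat:.1f} dB, RMSE {rmse:.1f} dB")
```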