9 research outputs found

    Performance of NOMA systems with HARQ-CC in finite blocklength

    Abstract. With the advent of new use cases requiring high reliability and low latency, transmission with finite blocklength becomes inevitable to reduce latency. In contrast to classical information-theoretic principles, the use of finite blocklength results in a non-negligible decoder error probability. Hybrid automatic repeat request (HARQ) procedures improve decoding accuracy by exploiting time diversity at the expense of increased latency. Thus, high reliability and low latency are conflicting objectives, which calls for a trade-off between the two. Concurrently, non-orthogonal multiple access (NOMA) has gained widespread attention in research due to its ability to outperform its counterpart, orthogonal multiple access (OMA), in terms of spectral efficiency and user fairness. This thesis investigates the performance of a two-user downlink NOMA system using HARQ with chase combining (HARQ-CC) in finite blocklength, unifying the three enablers. First, an analytical framework is developed by deriving closed-form approximations for the individual average block error rate (BLER) of the near and the far user. Based upon that, the performance of NOMA is compared with OMA, leading to the conclusion that NOMA outperforms OMA in terms of user fairness. Further, asymptotic expressions for the average BLER are derived and used to devise an algorithm that determines the minimum blocklength and power allocation coefficients for NOMA satisfying the users' reliability targets. NOMA requires a shorter blocklength at high transmit signal-to-noise ratio (SNR), leading to lower latency than OMA when the users' BLER reliability requirements are on the order of 10^(-5).
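    For context, BLER analysis in the finite-blocklength regime typically builds on the normal approximation of Polyanskiy et al.; the sketch below states that approximation and the effective SNR under chase combining, as general background rather than the thesis' specific closed-form derivations.

```latex
% Finite-blocklength normal approximation: per-round block error probability
% at SNR \gamma, blocklength n and coding rate R (bits per channel use).
\[
  \epsilon(\gamma, n, R) \approx
  Q\!\left(\frac{C(\gamma) - R}{\sqrt{V(\gamma)/n}}\right),
  \qquad
  C(\gamma) = \log_2(1+\gamma),
  \qquad
  V(\gamma) = \bigl(1 - (1+\gamma)^{-2}\bigr)(\log_2 e)^2 .
\]
% Under HARQ with chase combining, the K (re)transmissions are maximum-ratio
% combined, so the effective SNR entering the same expression after round K is
\[
  \gamma_{\mathrm{CC}}^{(K)} = \sum_{k=1}^{K} \gamma_k ,
\]
% and the average BLER follows by averaging over the fading of the \gamma_k.
```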

    LiDAR aided human blockage prediction for 6G

    No full text
    Abstract. Leveraging higher frequencies up to the THz band paves the way towards faster networks in the next generation of wireless communications. However, such shorter wavelengths suffer from higher scattering and path loss, forcing the link to depend predominantly on the line-of-sight (LOS) path. Dynamic movement of humans has been identified as a major source of blockages to such LOS links. In this work, we aim to overcome this challenge by predicting human blockages to the LOS link, enabling the transmitter to anticipate the blockage and act intelligently. We propose an end-to-end system of infrastructure-mounted LiDAR sensors that visually captures the dynamics of the communication environment and processes the data with deep learning and ray casting techniques to predict future blockages. Experiments indicate that the system achieves an accuracy of 87% in predicting upcoming blockages, while maintaining a precision of 78% and a recall of 79% for a window of 300 ms.
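    As a rough illustration of the ray-casting step only (not the paper's deep-learning pipeline), the snippet below checks whether any LiDAR point lies close enough to the transmitter-receiver segment to block the LOS path; the point cloud, antenna positions and clearance radius are placeholder values.

```python
import numpy as np

def los_blocked(points, tx, rx, clearance=0.3):
    """Return True if any point lies within `clearance` metres of the TX-RX segment.

    points : (N, 3) array of LiDAR points in the same frame as tx/rx.
    tx, rx : (3,) arrays with transmitter / receiver positions.
    """
    d = rx - tx
    seg_len2 = float(np.dot(d, d))
    # Project every point onto the segment and clamp to its endpoints.
    t = np.clip((points - tx) @ d / seg_len2, 0.0, 1.0)
    closest = tx + t[:, None] * d
    dist = np.linalg.norm(points - closest, axis=1)
    return bool(np.any(dist < clearance))

# Placeholder example: one point sits near the middle of the link and blocks it.
cloud = np.array([[5.0, 0.1, 1.5], [2.0, 3.0, 0.5]])
print(los_blocked(cloud, tx=np.array([0.0, 0.0, 1.5]), rx=np.array([10.0, 0.0, 1.5])))
```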

    Block error performance of NOMA with HARQ-CC in finite blocklength

    No full text
    Abstract. This paper investigates the performance of a two-user downlink non-orthogonal multiple access (NOMA) system using hybrid automatic repeat request with chase combining (HARQ-CC) in finite blocklength. First, an analytical framework is developed by deriving closed-form approximations for the individual average block error rate (BLER) of the near and the far user. Based upon that, the performance of NOMA is compared with orthogonal multiple access (OMA), leading to the conclusion that NOMA outperforms OMA in terms of user fairness. Further, an algorithm is devised to determine the required blocklength and power allocation coefficients for NOMA that satisfy the users' reliability targets. The required blocklength for NOMA is compared with that of OMA, which shows that NOMA has a lower blocklength requirement at high transmit signal-to-noise ratio (SNR), leading to lower latency than OMA when the users' BLER reliability requirements are on the order of 10^(-5).
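    A minimal sketch of how such a blocklength search could look, assuming the normal approximation for the per-user BLER, ideal successive interference cancellation at the near user, and a fixed power split; the SNRs, rates, targets and the simple linear scan are illustrative placeholders, not the algorithm derived in the paper.

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def bler(gamma, n, rate):
    """Normal-approximation block error rate at SNR gamma, blocklength n, rate (bits/use)."""
    cap = np.log2(1.0 + gamma)
    disp = (1.0 - (1.0 + gamma) ** -2) * np.log2(np.e) ** 2
    return q_func((cap - rate) / np.sqrt(disp / n))

def min_blocklength(rho, g_near, g_far, alpha_far, rate, targets, n_max=2000):
    """Smallest common blocklength meeting both users' BLER targets (placeholder scan)."""
    alpha_near = 1.0 - alpha_far
    # Far user decodes its signal treating the near user's signal as noise;
    # the near user cancels the far user's signal first (ideal SIC assumed here).
    gamma_far = alpha_far * rho * g_far / (alpha_near * rho * g_far + 1.0)
    gamma_near = alpha_near * rho * g_near
    for n in range(10, n_max + 1):
        if bler(gamma_near, n, rate) <= targets[0] and bler(gamma_far, n, rate) <= targets[1]:
            return n
    return None

# Placeholder numbers: 30 dB transmit SNR, channel gains, 80% of power to the far user.
print(min_blocklength(rho=10 ** 3.0, g_near=1.0, g_far=0.1,
                      alpha_far=0.8, rate=0.5, targets=(1e-5, 1e-5)))
```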

    Hierarchical user clustering for mmWave-NOMA systems

    No full text
    Abstract. Non-orthogonal multiple access (NOMA) and mmWave are two complementary technologies that can support the capacity demand arising in 5G and beyond networks: an increasing number of users can be served simultaneously while alleviating the scarcity of bandwidth. In this paper, we present a method for clustering the users in a mmWave-NOMA system with the objective of maximizing the sum-rate. An unsupervised machine learning technique, namely hierarchical clustering, is utilized, which automatically identifies the optimal number of clusters. Simulations show that the proposed method can maximize the sum-rate of the system while satisfying the minimum QoS for all users, without requiring the number of clusters as a prerequisite, in contrast to other clustering methods such as k-means clustering.
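    For illustration, agglomerative (hierarchical) clustering with a distance threshold, as provided by SciPy, determines the number of clusters automatically; the user features below (2-D positions as a stand-in for channel or angular similarity) and the threshold are placeholders, not the paper's actual clustering criterion.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Placeholder user features: 12 users around three spatial hotspots.
users = np.vstack([rng.normal(loc=c, scale=0.5, size=(4, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

# Build the dendrogram with average linkage, then cut it at a distance
# threshold instead of fixing the number of clusters in advance.
Z = linkage(users, method="average")
labels = fcluster(Z, t=2.0, criterion="distance")

print(labels)                      # cluster index per user
print(len(set(labels)), "clusters found")
```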

    Factory automation: resource allocation of an elevated LiDAR system with URLLC requirements

    No full text
    Abstract. Ultra-reliable and low-latency communications (URLLC) play a vital role in factory automation. To share the situational awareness data collected from the infrastructure as raw or processed data, the system should guarantee URLLC capability, since this is a safety-critical application. In this work, the resource allocation problem is considered for an infrastructure-based communication architecture (the Elevated LiDAR system, ELiD) that can support autonomous driving on a factory floor. The decoder error probability and the number of channel uses parameterize the reliability and the latency in the considered optimization problems. A maximum decoder error probability minimization problem and a total energy minimization problem are considered to analytically evaluate the performance of the ELiD system under different vehicle densities.
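    As a hedged sketch of the first problem type mentioned above, a min-max decoder error probability formulation over channel-use allocations could look as follows; the notation (m_i channel uses for vehicle i, budget M, payload D_i, SNR gamma_i) is generic and not taken from the paper.

```latex
% Generic max-error minimization over channel-use allocations m_i:
\[
  \min_{\{m_i\}} \; \max_{i} \; \epsilon_i(m_i)
  \quad \text{s.t.} \quad
  \sum_{i} m_i \le M, \qquad m_i \in \mathbb{Z}_{+},
\]
% where, under the finite-blocklength normal approximation,
\[
  \epsilon_i(m_i) \approx
  Q\!\left(\frac{\log_2(1+\gamma_i) - D_i/m_i}{\sqrt{V(\gamma_i)/m_i}}\right),
  \qquad
  V(\gamma_i) = \bigl(1-(1+\gamma_i)^{-2}\bigr)(\log_2 e)^2 ,
\]
% with D_i the payload of vehicle i in bits and \gamma_i its received SNR.
```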

    Elevated LiDAR based sensing for 6G: 3D maps with cm level accuracy

    No full text
    Abstract. Automating processes with the increased use of robots is one of the key vertical applications enabled by 6G. Sensing the surrounding environment, localization and communication become crucial factors for these robots to operate. Light detection and ranging (LiDAR) has emerged as an appropriate sensing method due to its capability of generating detail-rich positional information with high accuracy. However, LiDARs are power-hungry devices that generate bulk amounts of data, limiting their use as on-board sensors in robots. In this paper, we present a novel approach to generating an enhanced 3D map with an improved field-of-view using multiple LiDAR sensors. This offloads the sensing burden from robots to the infrastructure, where a centralized communication network establishes localization. We utilize an inherent property of LiDAR point clouds, namely point rings, together with Inertial Measurement Unit (IMU) data embedded in the sensor, for point cloud registration. The generated 3D point cloud map has an accuracy of 10 cm compared to real-world measurements. We also carry out a proof-of-concept design of the proposed method using two LiDAR sensors fixed in the infrastructure at elevated positions. This extends to an application where a robot is navigated through the mapped environment over a wireless link with minimal support from its on-board sensors. Our results further validate the idea of using multiple elevated LiDARs as part of the infrastructure for various localization applications.
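    As a hedged illustration of fusing two elevated LiDAR views into one map (not the authors' ring/IMU-based registration), the snippet below aligns two point clouds with Open3D's point-to-point ICP starting from a rough initial transform, such as one derived from mounting or IMU data; file names and parameters are placeholders.

```python
import numpy as np
import open3d as o3d

# Placeholder point clouds captured by two infrastructure-mounted LiDARs.
source = o3d.io.read_point_cloud("lidar_a.pcd")
target = o3d.io.read_point_cloud("lidar_b.pcd")

# Rough initial guess for the relative pose (e.g. from mounting geometry / IMU).
init = np.eye(4)

# Refine the alignment with point-to-point ICP, then merge into a single map.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.2, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

merged = target + source.transform(result.transformation)
merged = merged.voxel_down_sample(voxel_size=0.05)   # thin out duplicated points
o3d.io.write_point_cloud("merged_map.pcd", merged)
```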

    Improving the attenuation of moving interfering objects in videos using shifted-velocity filtering

    No full text
    Abstract. Three-dimensional space-time velocity filters may be used to enhance dynamic passband objects of interest in videos while attenuating moving interfering objects based on their velocities. In this paper, we show that the attenuation of interfering stopband objects may be significantly improved using recently proposed shifted-velocity filters. It is shown that an improvement of approximately 20 dB in signal-to-interference ratio may be achieved for stopband-to-passband velocity differences of only 1 pixel/frame. More importantly, this improvement is achieved without increasing the computational complexity.
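    For background on why velocity separates objects in the 3-D frequency domain (a standard result, not specific to the shifted-velocity filters above): the spectrum of an object translating at constant velocity is confined to a plane through the origin, so a filter whose passband hugs that plane passes the object while attenuating objects moving at other velocities.

```latex
% A video object translating at constant velocity (v_x, v_y) pixels/frame has its
% 3-D spectrum concentrated on the plane
\[
  v_x\,\omega_x + v_y\,\omega_y + \omega_t = 0 ,
\]
% where (\omega_x, \omega_y) are the spatial frequencies and \omega_t the temporal
% frequency. Objects with different velocities occupy differently oriented planes,
% which is the property a velocity filter exploits to separate them.
```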

    LiDAR aided wireless networks: LoS detection and prediction based on static maps

    No full text
    Abstract. mmWave communication up to 71 GHz is already specified in 3rd generation partnership project (3GPP) 5G New Radio (NR), and communication in sub-THz bands is being widely studied for 6G in academia and industry. Operation with very narrow beamwidths and much higher bandwidths, in contrast to Frequency Range 1 (sub-6 GHz), can cater to the high data rate requirements at the expense of an extra signal processing burden to overcome unfavourable conditions such as high attenuation and scattering in the presence of obstacles. Such severe signal power attenuation caused by an obstacle may degrade network performance due to link failures occurring as a result of line-of-sight (LoS) to non-LoS (NLoS) transitions. These limitations raise the necessity of a sensing system that collects situational awareness data to assist the wireless communication network. This work proposes a method to improve LoS detection and user localization accuracy using multiple light detection and ranging (LiDAR) sensors co-located with access points (APs). We also propose an approach to predict LoS transitions based on static LiDAR maps; the proposed method detected the LoS transition 400 ms before its occurrence.
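    A minimal sketch of checking LoS against a static map, assuming the map is stored as a set of occupied voxels and the AP-user ray is sampled at sub-voxel steps; the voxel size, positions and map contents are placeholder values, not the paper's detection or prediction pipeline.

```python
import numpy as np

VOXEL = 0.2  # metres per voxel (placeholder resolution)

def to_voxel(p):
    return tuple(np.floor(np.asarray(p) / VOXEL).astype(int))

def has_los(occupied, ap, user, step=0.1):
    """True if no occupied voxel of the static map lies on the AP-user ray."""
    ap, user = np.asarray(ap, float), np.asarray(user, float)
    n_steps = max(int(np.linalg.norm(user - ap) / step), 1)
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        if to_voxel(ap + t * (user - ap)) in occupied:
            return False
    return True

# Placeholder static map: a small wall of occupied voxels between AP and user.
occupied = {to_voxel((5.05, y, z)) for y in np.arange(-0.95, 1.0, 0.2)
                                   for z in np.arange(0.05, 3.0, 0.2)}
print(has_los(occupied, ap=(0, 0, 3.0), user=(10, 0, 1.5)))   # blocked -> False
print(has_los(occupied, ap=(0, 5, 3.0), user=(10, 5, 1.5)))   # clear   -> True
```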

    LiDAR aided wireless networks: beam prediction for 5G

    No full text
    Abstract. 5G New Radio (NR) mmWave operates with narrow beams. Beam-based connections require careful management of the beams to ensure a reliable connection, especially when the user is mobile. 5G NR defines beam management procedures to achieve this, at the expense of periodic reporting with increased overhead and resource usage. Concurrently, recent interest in sensing for assisting wireless systems provides an opportunity to extract situational awareness information that can aid proactive decisions by the network. In this work, we utilize an infrastructure-mounted light detection and ranging (LiDAR) sensor system operating simultaneously with the wireless system to predict future beam decisions. A recurrent neural network (RNN) based learning model is proposed for the beam prediction, employing user tracking information provided by the LiDARs and beam sequence information from the wireless system. Furthermore, a method for predictive beam management with an increased reporting period and aperiodic reporting is analyzed. The results for the considered scenario reveal that 86.8% of the resources can be saved compared to the conventional beam reporting procedure, while achieving an 88.7% accuracy for optimal beam decisions.
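    A minimal sketch of an RNN-based beam predictor, assuming PyTorch and a toy feature layout (past beam index plus a 2-D tracked position per time step, predicting the next beam out of 32 candidates); the dimensions, data and architecture are illustrative placeholders rather than the model used in the paper.

```python
import torch
import torch.nn as nn

NUM_BEAMS, POS_DIM, HIDDEN = 32, 2, 64   # placeholder sizes

class BeamPredictor(nn.Module):
    """LSTM over (beam embedding + tracked position) sequences -> next-beam logits."""
    def __init__(self):
        super().__init__()
        self.beam_emb = nn.Embedding(NUM_BEAMS, 16)
        self.lstm = nn.LSTM(16 + POS_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, NUM_BEAMS)

    def forward(self, beam_seq, pos_seq):
        # beam_seq: (batch, T) past beam indices, pos_seq: (batch, T, 2) LiDAR tracks.
        x = torch.cat([self.beam_emb(beam_seq), pos_seq], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # logits for the next beam

# Toy forward pass with random data.
model = BeamPredictor()
beams = torch.randint(0, NUM_BEAMS, (8, 10))
positions = torch.randn(8, 10, POS_DIM)
logits = model(beams, positions)
print(logits.shape, logits.argmax(dim=-1))    # (8, 32) and predicted beam per sample
```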