LR based pre-coding aided spatial modulation with sub-optimal detection for V2X communications
In this manuscript, a novel transmit pre-coding matrix generation method and a linear decoding scheme are proposed for the generalised pre-coding aided spatial modulation (GPSM) system. The recently proposed GPSM scheme is a multiple-input multiple-output (MIMO) transmission technique that conveys information by activating a subset of receive antennas with the aid of transmit pre-coding. This scheme seems well suited to V2X, since the base station only transmits data to certain users/vehicles. The proposed pre-coding matrix is based on the lattice reduction (LR) principle and provides significant performance improvement over the original pre-coding design in GPSM. Furthermore, since GPSM in large-scale antenna systems, a trend in future communication systems, might be too complex to implement with maximum likelihood (ML) detection, a linear decoding method is proposed to reduce the implementation complexity. Our studies show that the performance degradation caused by the linear detector can be compensated by the proposed LR based pre-coding. As a result, LR based GPSM with linear detection achieves performance comparable to that of the original GPSM scheme employing ML detection, with significantly decreased detection complexity.
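To make the receive-antenna-activation idea concrete, here is a minimal, hypothetical sketch using plain zero-forcing pre-coding (not the LR-based design proposed in the paper); all dimensions and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 8, 4            # transmit / receive antenna counts (hypothetical)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Zero-forcing pre-coder: right pseudo-inverse of H, so that H @ P = I
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# GPSM conveys extra bits by choosing WHICH receive antennas to activate.
active = [0, 2]                       # antenna subset selected by the spatial bits
s = np.zeros(Nr, dtype=complex)
s[active] = 1.0                       # unit symbols on the activated antennas

y = H @ (P @ s)                       # noiseless receive vector equals s exactly
detected = sorted(int(i) for i in np.argsort(np.abs(y))[-len(active):])
assert detected == active             # linear detection recovers the active subset
```

With ideal ZF pre-coding the noiseless receive vector equals the intended activation pattern, so a simple magnitude sort recovers the activated subset.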
Millimeter wave link configuration in practical scenarios
Acquiring channel state information (CSI) for link configuration in wideband millimeter wave (mmWave) massive multiple-input-multiple-output (MIMO) systems with hybrid architectures is challenging, due to the high dimensions of the channel matrices, the low signal-to-noise ratio (SNR) before beamforming, the various hardware constraints and the high mobility in the vehicular context. Previous work in this area exploits channel sparsity, statistical priors or side information to reduce the overhead associated with initial channel estimation or channel tracking. These works consider, however, a system model that neglects hardware imperfections. In addition, many of the proposed solutions are unable to operate in some realistic scenarios, such as vehicle-to-everything (V2X) communications.
In this dissertation, we develop new signal processing solutions that can enable low-overhead mmWave link configuration under various disturbances and practical limitations, e.g., hardware impairments, calibration errors, beam squint effect, channel blockage, high mobility, to name a few.
In the first part of this dissertation, we focus on the problem of wideband channel estimation for mmWave MIMO systems with different hardware imperfections.
We first design a dictionary learning aided channel estimation strategy for wideband mmWave MIMO systems by explicitly considering the hardware uncertainties and calibration errors, and then derive algorithms that learn the optimal sparsifying dictionaries for channel representation and estimation. In a second contribution of this part, we further develop a dictionary learning aided compressive channel estimation scheme for mmWave MIMO systems by incorporating beam squint into the model of array responses. Numerical results show the proposed solutions can adapt to the practical scenarios and help reduce the overhead associated with channel estimation significantly.
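The compressive channel estimation described above relies on sparse recovery over a (learned) dictionary. As a minimal sketch of one classical ingredient, orthogonal matching pursuit (OMP) is shown below with a toy unitary dictionary, so recovery is exact; the sizes, indices and coefficients are illustrative, and the thesis's dictionaries are learned rather than random:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ≈ Phi @ x."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        # Pick the dictionary atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        # Least-squares fit on the chosen atoms, then deflate the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
# Toy dictionary: unitary, so the sparse coefficients are exactly recoverable.
Phi, _ = np.linalg.qr(rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32)))
x = np.zeros(32, dtype=complex)
x[[5, 20, 30]] = [1.0, -0.7, 0.4j]       # sparse "channel" in the dictionary domain
x_hat = omp(Phi, Phi @ x, k=3)
assert np.allclose(x_hat, x, atol=1e-8)
```

In the compressive setting the dictionary is overcomplete and the measurements are fewer than the atoms, but the greedy select-fit-deflate loop is the same.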
In the second part of this dissertation, we deal with the problem of wideband channel tracking for mmWave MIMO systems with or without the impact of blockage.
We first introduce statistical channel models that include the evolution models for channel gains and angles of arrival/departure, as well as the statistics of blockage events. Then, we design novel blockage detection schemes and efficient Bayesian channel tracking algorithms to facilitate low-overhead tracking with or without blockage. Numerical results corroborate that the proposed solutions achieve better channel tracking performance even in mobile scenarios that suffer from highly dynamic blockage events.
Deep Learning Designs for Physical Layer Communications
Wireless communication systems and their underlying technologies have undergone unprecedented advances over the last two decades to assuage the ever-increasing demands of various applications and emerging technologies. However, the traditional signal processing schemes and algorithms for wireless communications cannot handle the surging complexity associated with fifth-generation (5G) and beyond communication systems due to network expansion, new emerging technologies, high data rates, and the ever-increasing demand for low latency. This thesis extends the traditional downlink transmission schemes to deep learning-based precoding and detection techniques that are hardware-efficient and of lower complexity than the current state-of-the-art. The thesis focuses on: precoding/beamforming in massive multiple-input multiple-output (MIMO) systems, signal detection, and lightweight neural network (NN) architectures for precoder and decoder designs. We introduce a learning-based precoder design via constructive interference (CI) that performs the precoding on a symbol-by-symbol basis. Instead of conventionally training a NN without considering the specifics of the optimisation objective, we unfold a power minimisation symbol level precoding (SLP) formulation based on the interior-point-method (IPM) proximal log-barrier function. Furthermore, we propose a concept of NN compression, where the weights are quantised to lower numerical precision formats based on binary and ternary quantisation. We further introduce a stochastic quantisation technique, where parts of the NN weight matrix are quantised while the rest are kept at full precision. Finally, we propose a systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values through weights that follow monotonically non-increasing functions. Furthermore, we investigate performance-complexity tradeoffs via regularisation constraints on the layer weights such that, at inference, parts of the network layers can be removed with minimal impact on the detection accuracy. Simulation results show that our proposed learning-based techniques offer better complexity-vs-BER (bit-error-rate) and complexity-vs-transmit-power performance compared to state-of-the-art MIMO detection and precoding techniques.
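The binary/ternary and stochastic quantisation ideas can be sketched in a few lines; the threshold, scale rule and quantised fraction below are illustrative choices, not the thesis's trained values:

```python
import numpy as np

def ternary_quantize(W, delta=0.05):
    """Map each weight to {-a, 0, +a}: zero out small weights, keep the sign of
    the rest, with a common scale a = mean magnitude of the surviving weights."""
    mask = np.abs(W) > delta
    a = np.abs(W[mask]).mean() if mask.any() else 0.0
    return a * np.sign(W) * mask

def stochastic_quantize(W, frac=0.5, seed=0):
    """Quantise a random fraction of the entries; leave the rest full-precision."""
    pick = np.random.default_rng(seed).random(W.shape) < frac
    return np.where(pick, ternary_quantize(W), W)

W = np.array([[0.8, -0.02, 0.5], [-0.6, 0.01, -0.9]])
Wq = ternary_quantize(W)
# Surviving magnitudes 0.8, 0.5, 0.6, 0.9 give a common scale a = 0.7
assert np.allclose(Wq, [[0.7, 0.0, 0.7], [-0.7, 0.0, -0.7]])
```

Ternary weights need only two bits each plus one shared scale per matrix, which is where the hardware saving comes from; binary quantisation is the same idea without the zero level.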
LiFi Transceiver Designs for 6G Wireless Networks
Due to the dramatic increase in high data rate services, and in order to meet the demands of sixth-generation (6G) wireless networks, researchers from both academia and industry have been exploring advanced transmission techniques, new network architectures and new frequency bands, such as the millimeter wave (mmWave), infrared, and visible light bands. Light-Fidelity (LiFi) in particular is an emerging, novel, bidirectional, high-speed and fully networked optical wireless communication (OWC) technology that has been introduced as a promising solution for 6G networks, especially for indoor connectivity, owing to the large unexploited spectrum that translates to significantly high data rates.
Although there has been a big leap in the maturity of LiFi technology, there is still a considerable gap between the available LiFi technology and the demands of 6G networks. Motivated by this, this dissertation aims to bridge the gap between the current LiFi research literature and the expected demands of 6G networks. Specifically, the key goal of this dissertation is to fill some shortcomings in LiFi technology, such as channel modeling, transceiver designs, channel state information (CSI) acquisition, localization, quality-of-service (QoS), and performance optimization. Our work is devoted to addressing and solving some of these limitations. Towards achieving this goal, this dissertation makes significant contributions to several areas of LiFi. First, it develops novel, measurement-based channel models for LiFi systems that are required for performance analysis and handover management. Second, it proposes a novel design for LiFi devices that is capable of accommodating the real behaviour of users and the imperfections of indoor propagation environments. Third, it proposes intelligent, accurate and fast joint position and orientation estimation techniques for LiFi devices, which improve the CSI estimation process and boost indoor location-based and navigation-based services. Then, it proposes a novel proactive optimization technique that can provide near-optimal and real-time service for indoor mobile LiFi users running services with high data rates, such as extended reality, video conferencing, and real-time video monitoring. Finally, it proposes advanced multiple access techniques that are capable of cancelling the effects of interference in indoor multi-user settings. The studied problems are tackled using various tools from probability and statistics, system design and integration, optimization theory, and deep learning. The results demonstrate the effectiveness of the proposed designs, solutions, and techniques. Moreover, the findings in this dissertation highlight key guidelines for the effective design of LiFi systems while considering their unique propagation features.
Self-Evolving Integrated Vertical Heterogeneous Networks
6G and beyond networks tend towards fully intelligent and adaptive design in
order to provide better operational agility in maintaining universal wireless
access and supporting a wide range of services and use cases while dealing with
network complexity efficiently. Such enhanced network agility will require
developing a self-evolving capability in designing both the network
architecture and resource management to intelligently utilize resources, reduce
operational costs, and achieve the coveted quality of service (QoS). To enable
this capability, the necessity of considering an integrated vertical
heterogeneous network (VHetNet) architecture appears to be inevitable due to
its high inherent agility. Moreover, employing an intelligent framework is
another crucial requirement for self-evolving networks to deal with real-time
network optimization problems. Hence, in this work, to provide a better insight
on network architecture design in support of self-evolving networks, we
highlight the merits of integrated VHetNet architecture while proposing an
intelligent framework for self-evolving integrated vertical heterogeneous
networks (SEI-VHetNets). The impact of the challenges associated with
SEI-VHetNet architecture on network management is also studied considering a
generalized network model. Furthermore, the current literature on network
management of integrated VHetNets along with the recent advancements in
artificial intelligence (AI)/machine learning (ML) solutions are discussed.
Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are
identified. Finally, the potential future research directions for advancing the
autonomous and self-evolving capabilities of SEI-VHetNets are discussed.
Recent Advances in Cellular D2D Communications
Device-to-device (D2D) communications have attracted a great deal of attention from researchers in recent years. It is a promising technique for offloading local traffic from cellular base stations by allowing local devices, in physical proximity, to communicate directly with each other. Furthermore, through relaying, D2D is also a promising approach to enhancing service coverage at cell edges or in black spots. However, there are many challenges to realizing the full benefits of D2D. For one, minimizing the interference between legacy cellular and D2D users operating in underlay mode is still an active research issue. With the 5th generation (5G) communication systems expected to be the main data carrier for the Internet-of-Things (IoT) paradigm, the potential role of D2D and its scalability to support massive IoT devices and their machine-centric (as opposed to human-centric) communications need to be investigated. New challenges have also arisen from new enabling technologies for D2D communications, such as non-orthogonal multiple access (NOMA) and blockchain technologies, which call for new solutions to be proposed. This edited book presents a collection of ten chapters, including one review and nine original research works on addressing many of the aforementioned challenges and beyond
Neural-Kalman Schemes for Non-Stationary Channel Tracking and Learning
This Thesis focuses on channel tracking in Orthogonal Frequency-Division Multiplexing (OFDM), a
widely-used method of data transmission in wireless communications, when abrupt changes occur
in the channel. In highly mobile applications, new dynamics appear that might make channel
tracking non-stationary, e.g. channels might vary with location, and location rapidly varies with
time. Simple examples might be the di erent channel dynamics a train receiver faces when it is
close to a station vs. crossing a bridge vs. entering a tunnel, or a car receiver in a route that
grows more tra c-dense. Some of these dynamics can be modelled as channel taps dying or being
reborn, and so tap birth-death detection is of the essence.
In order to improve the quality of communications, we delved into mathematical methods for
detecting such abrupt changes in the channel, drawing on the areas of Sequential Analysis/
Abrupt Change Detection and Random Set Theory (RST), as well as engineering advances
in Neural Network schemes. This knowledge helped us find a solution to the problem of abrupt
change detection by informing and inspiring the creation of low-complexity implementations for
real-world channel tracking. In particular, two such novel trackers were created: the Simplified
Maximum A Posteriori (SMAP) and the Neural-Network-switched Kalman Filtering (NNKF)
schemes.
The SMAP is a computationally inexpensive, threshold-based abrupt-change detector. It applies
the following three heuristics for tap birth-death detection: a) detect death if the tap gain
jumps to approximately zero (memoryless detection); b) detect death if the tap gain has slowly
converged to approximately zero (memory detection); c) detect birth if the tap gain is far from
zero.
The precise parameters for these three simple rules can be approximated with simple theoretical
derivations and then fine-tuned through extensive simulations. Using only these three
computationally inexpensive threshold comparisons, the per-tap status detector achieves an error
reduction matching that of a close-to-perfect path death/birth detection, as shown in simulations.
This estimator was shown to greatly reduce channel tracking error in the target Signal-to-Noise
Ratio (SNR) range at a very small computational cost, thus outperforming previously known systems.
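The three heuristics can be condensed into a toy per-tap status function; the thresholds below are placeholders, not the thesis's tuned values:

```python
def smap_status(gain_now, gain_prev, alive,
                th_dead=0.05, th_slow=0.08, th_birth=0.15):
    """Toy per-tap birth/death decision in the spirit of the three SMAP rules.
    Thresholds and names are placeholders, not the thesis's tuned values."""
    mag, prev = abs(gain_now), abs(gain_prev)
    if alive:
        if mag < th_dead:                      # (a) memoryless: jumped to ~zero
            return False
        if mag < th_slow and prev < th_slow:   # (b) memory: slowly converged to ~zero
            return False
        return True
    return mag > th_birth                      # (c) birth: gain far from zero

assert smap_status(0.01, 0.9, alive=True) is False    # abrupt death, rule (a)
assert smap_status(0.06, 0.07, alive=True) is False   # slow death, rule (b)
assert smap_status(0.5, 0.0, alive=False) is True     # rebirth, rule (c)
assert smap_status(0.5, 0.6, alive=True) is True      # healthy tap stays alive
```

Three comparisons per tap per step is what makes the detector cheap enough to run alongside the tracker.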
The underlying RST framework for the SMAP was then extended to combined death/birth
and SNR detection when the SNR is dynamical and may drift. We analyzed how different quasi-ideal
SNR detectors affect the SMAP-enhanced Kalman tracker's performance. Simulations showed that
the SMAP is robust to SNR drift, although it was also shown to benefit from accurate
SNR detection.
The core idea behind the second novel tracker, the NNKF, is similar to the SMAP's, but now the tap
birth/death detection is performed by an artificial neural network (NN). Simulations show
that the proposed NNKF estimator provides extremely good performance, practically identical to that of a detector with 100% accuracy.
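In that spirit, a heavily simplified scalar sketch of a detector-switched Kalman tracker follows; a crude magnitude threshold stands in for the SMAP/NN detector, and the AR(1) dynamics and noise levels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.99, 1e-4, 1e-2           # AR(1) tap dynamics and noise levels (made up)

def kalman_step(x, P, y, alive):
    """One predict/update step for a scalar tap; a detector flag switches the model."""
    if not alive:                     # detector says the tap is dead:
        return 0.0, q                 # pin the estimate at zero, keep a small variance
    x_pred, P_pred = a * x, a * a * P + q
    K = P_pred / (P_pred + r)         # Kalman gain
    return x_pred + K * (y - x_pred), (1 - K) * P_pred

# Track a tap that dies halfway through the run:
true_tap = np.r_[np.full(50, 0.8), np.zeros(50)]
x, P, est = 0.0, 1.0, []
for g in true_tap:
    y = g + rng.normal(scale=np.sqrt(r))
    alive = abs(y) > 0.3              # crude stand-in for the SMAP / NN detector
    x, P = kalman_step(x, P, y, alive)
    est.append(x)
assert abs(est[49] - 0.8) < 0.2 and abs(est[99]) < 0.05
```

Without the switch, the plain Kalman filter would keep averaging noise after the tap dies; the detector lets the tracker discard a dead tap immediately, which is the error reduction the thesis quantifies.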
These proposed Neural-Kalman schemes can work as novel trackers for multipath channels,
since they are robust to wide variations in the probabilities of tap birth and death. Such robustness
suggests a single, low-complexity NNKF could be reused across different tap indices and
communication environments.
Furthermore, a different kind of abrupt change was proposed and analyzed: energy shifts from
one channel tap to adjacent taps (partial tap lateral hops). This Thesis also discusses how to
model, detect and track such changes, providing a geometric justification for these and additional
non-stationary dynamics in vehicular situations, such as road scenarios where reflections on trucks
and vans are involved, or the visual appearance/disappearance of drone swarms. An extensive
literature review of empirically-backed abrupt-change dynamics in channel modelling/measurement
campaigns is included.
For this generalized framework of abrupt channel changes that includes partial tap lateral
hopping, a neural detector for lateral hops with large energy transfers is introduced. Simulation
results suggest the proposed NN architecture might be a feasible lateral hop detector, suitable for
integration in NNKF schemes.
Finally, the newly found understanding of abrupt changes and the interactions between Kalman
filters and neural networks is leveraged to analyze the neural consequences of abrupt changes
and briefly sketch a novel, abrupt-change-derived stochastic model for neural intelligence, extract
some neurofinancial consequences of unstereotyped abrupt dynamics, and propose a new
portfolio-building mechanism in finance: Highly Leveraged Abrupt Bets Against Failing Experts
(HLABAFEOs). Some communication-engineering-relevant topics, such as a Bayesian stochastic
stereotyper for hopping Linear Gauss-Markov (LGM) models, are discussed in the process.
The forecasting problem in the presence of expert disagreements is illustrated with a hopping
LGM model, and a novel structure for a Bayesian stereotyper is introduced that might eventually
solve such problems through bio-inspired, neuroscientifically-backed mechanisms, like dreaming
and surprise (biological Neural-Kalman). A generalized framework for abrupt changes and expert
disagreements was introduced with the novel concept of Neural-Kalman Phenomena. This Thesis
suggests mathematical (Neural-Kalman Problem Category Conjecture), neuro-evolutionary and
social reasons why Neural-Kalman Phenomena might exist, and found significant evidence for their
existence in the areas of neuroscience and finance.
Apart from providing specific examples, practical guidelines and historical (out)performance
for some HLABAFEO investing portfolios, this multidisciplinary research suggests that a Neural-Kalman
architecture for ever more granular stereotyping, providing a practical solution for continual
learning in the presence of unstereotyped abrupt dynamics, would be extremely useful in communications
and other continual learning tasks.

Doctoral Program in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee — President: Luis Castedo Ribas; Secretary: Ana García Armada; Member: José Antonio Portilla Figuera.
Improved handover decision scheme for 5G mm-wave communication: optimum base station selection using a machine learning approach
A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Information and Communication Science and Engineering of the Nelson Mandela African Institution of Science and Technology

The rapid growth in mobile and wireless devices has led to an exponential demand for data traffic and exacerbated the burden on conventional wireless networks. Fifth generation (5G) and
beyond networks are expected to not only accommodate this growth in data demand but also
provide additional services beyond the capability of existing wireless networks, while maintaining a high quality-of-experience (QoE) for users. The need for several orders of magnitude
increase in system capacity has necessitated the use of millimetre wave (mm-wave) frequencies
as well as the proliferation of low-power small cells overlaying the existing macro-cell layer.
These approaches offer a potential increase in throughput in magnitudes of several gigabits per
second and a reduction in transmission latency, but they also present new challenges. For example, mm-wave frequencies have higher propagation losses and a limited coverage area, thereby
escalating mobility challenges such as more frequent handovers (HOs). In addition, the advent of low-power small cells with smaller footprints also causes signal fluctuations across the
network, resulting in repeated HOs (ping-pong) from one small cell (SC) to another.
Therefore, efficient HO management is very critical in future cellular networks since frequent
HOs pose multiple threats to the quality-of-service (QoS), such as a reduction in the system
throughput as well as service interruptions, which results in a poor QoE for the user. However, HO management is a significant challenge in 5G networks due to the use of mm-wave
frequencies, which have much smaller footprints. To address these challenges, this work investigates the HO performance of 5G mm-wave networks and proposes a novel method for
achieving seamless user mobility in dense networks. The proposed model is based on a double
deep reinforcement learning (DDRL) algorithm. To test the performance of the model, a comparative study was made between the proposed approach and benchmark solutions, including a
benchmark developed as part of this thesis. The evaluation metrics considered include system
throughput, execution time, ping-pong, and the scalability of the solutions. The results reveal
that the developed DDRL-based solution vastly outperforms not only conventional methods but
also other machine-learning-based benchmark techniques.
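As a toy analogue of learning-based base station selection, the sketch below uses a stateless epsilon-greedy bandit rather than the thesis's DDRL agent; the number of base stations, their mean rates and all hyperparameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bs = 4                                    # candidate base stations (illustrative)
mean_rate = np.array([1.0, 2.5, 1.8, 0.6])  # hypothetical mean rates per BS

Q = np.zeros(n_bs)                          # learned value of attaching to each BS
eps, lr = 0.1, 0.1
for _ in range(2000):
    # epsilon-greedy: mostly exploit the best-known BS, occasionally explore
    a = int(rng.integers(n_bs)) if rng.random() < eps else int(np.argmax(Q))
    reward = mean_rate[a] + rng.normal(scale=0.2)   # noisy observed throughput
    Q[a] += lr * (reward - Q[a])            # incremental value update
assert int(np.argmax(Q)) == 1               # the agent learns BS 1 is best
```

A full DDRL handover agent extends this idea with a state (e.g. measured signal strengths and user position), deep networks in place of the table, and a second network to de-bias the value targets.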
The main contribution of this thesis is to provide an intelligent framework for mobility management in the connected state (i.e., HO management) in 5G. Though primarily developed for
mm-wave links between UEs and BSs in ultra-dense heterogeneous networks (UDHNs), the
proposed framework can also be applied to sub-6 GHz frequencies.