
    Review of Recent Trends

    This work was partially supported by the European Regional Development Fund (FEDER), through the Regional Operational Programme of Centre (CENTRO 2020) of the Portugal 2020 framework, through projects SOCA (CENTRO-01-0145-FEDER-000010) and ORCIP (CENTRO-01-0145-FEDER-022141). Fernando P. Guiomar acknowledges a fellowship from “la Caixa” Foundation (ID100010434), code LCF/BQ/PR20/11770015. Houda Harkat acknowledges the financial support of the Programmatic Financing of the CTS R&D Unit (UIDP/00066/2020).

    MIMO-OFDM is a key technology and a strong candidate for 5G telecommunication systems. The literature lacks a convenient survey that gathers all the aspects of such systems that need to be investigated. This review paper examines and interprets the state of the art and addresses several research axes related to MIMO-OFDM systems. Two topics receive special attention: MIMO waveforms and MIMO-OFDM channel estimation. Existing MIMO hardware and software innovations, as well as MIMO-OFDM equalization techniques, are discussed concisely. Only a few authors have discussed the channel estimation and modeling problems for a variety of MIMO systems, and, to the best of our knowledge, no review paper has so far specifically discussed recent work on channel estimation and equalization for MIMO-OFDM systems. Hence, the current work focuses on analyzing the algorithms recently used in the field, providing a rich reference for researchers. Moreover, some research perspectives are identified.
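    Since the review centres on channel estimation and equalization for (MIMO-)OFDM, the following minimal sketch of pilot-based least-squares channel estimation with one-tap zero-forcing equalization on a single OFDM symbol may help orient readers. It is purely illustrative: the subcarrier count, comb-pilot layout, channel length and noise level are assumptions of this demo, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # OFDM subcarriers (illustrative)
pilot_idx = np.arange(0, N, 4)           # comb-type pilots on every 4th subcarrier
data_idx = np.setdiff1d(np.arange(N), pilot_idx)

# True frequency-domain channel: 3-tap multipath, assumed for the demo
h_time = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(6)
H_true = np.fft.fft(h_time, N)

# Transmit known pilots and QPSK data
pilots = np.ones(len(pilot_idx), dtype=complex)
bits = rng.integers(0, 2, size=(len(data_idx), 2))
data = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
X = np.zeros(N, dtype=complex)
X[pilot_idx], X[data_idx] = pilots, data

# Per-subcarrier received signal: Y = H * X + noise
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H_true * X + noise

# LS estimate at pilot positions, linear interpolation to the data subcarriers
H_ls = Y[pilot_idx] / pilots
H_hat = np.interp(np.arange(N), pilot_idx, H_ls.real) \
        + 1j * np.interp(np.arange(N), pilot_idx, H_ls.imag)

# One-tap zero-forcing equalization of the data subcarriers
data_eq = Y[data_idx] / H_hat[data_idx]
print("mean channel estimation error:", np.mean(np.abs(H_hat - H_true) ** 2))
```

    In practice, MMSE interpolation and per-antenna estimates for the MIMO case would replace the simple linear interpolation used here.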

    Compressive Sensing Based Grant-Free Communication

    Grant-free communication, where each user can transmit data without following a strict access-grant process, is a promising technique for reducing latency and supporting massive numbers of users. In this thesis, compressive sensing (CS), which exploits signal sparsity to recover data from a small number of samples, is investigated for user activity detection (UAD), channel estimation, and signal detection in grant-free communication, in order to extract information from the signals received by the base station (BS). First, CS-aided UAD is investigated by utilizing the property of quasi-time-invariant channel tap delays as prior information for burst users in the internet of things (IoT). Two UAD algorithms are proposed, referred to as gradient-based, time-invariant channel tap delay assisted CS (g-TIDCS) and mean-value-based TIDCS (m-TIDCS). In particular, g-TIDCS and m-TIDCS do not require any prior knowledge of the number of active users, unlike existing approaches, and are therefore more practical. Second, periodic communication, one of the salient features of IoT, is considered. Two schemes, namely periodic block orthogonal matching pursuit (PBOMP) and periodic block sparse Bayesian learning (PBSBL), are proposed to exploit the non-continuous temporal correlation of the received signal for joint UAD, channel estimation, and signal detection. Theoretical analysis and simulation results show that PBOMP and PBSBL outperform existing schemes in terms of UAD success rate, bit error rate (BER), and accuracy of period and channel estimation. Third, UAD and channel estimation for grant-free communication in the presence of massive numbers of users actively connected to the BS are studied. An iterative UAD and signal detection approach for the burst users is proposed, where the interference of the connected users on the burst users is reduced by applying a preconditioning matrix to the received signals at the BS. The proposed approach provides significant performance gains over existing algorithms in terms of UAD success rate and BER. Last but not least, since physical layer security is a critical issue for grant-free communication, the channel reciprocity in time-division duplex systems is utilized to design environment-aware (EA) pilots derived from transmission channels to prevent eavesdroppers from acquiring users’ channel information. The proposed EA-pilot-based approach achieves a high level of security by severely degrading the eavesdropper’s normalized mean square error performance of channel estimation.
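    As a rough illustration of how compressive sensing recovers sparse user activity from a short observation, the sketch below runs plain orthogonal matching pursuit on a toy grant-free model in which only a few of many users are active. It is not the proposed g-TIDCS/m-TIDCS or periodic block schemes; the random pilot matrix, problem dimensions and known sparsity level are assumptions of the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
num_users, num_obs, num_active = 100, 30, 3   # illustrative sizes

# Each user has a known pilot column; the BS observes a short superposition
A = rng.normal(size=(num_obs, num_users)) / np.sqrt(num_obs)
active = rng.choice(num_users, num_active, replace=False)
x_true = np.zeros(num_users)
x_true[active] = rng.normal(size=num_active) + 2.0
y = A @ x_true + 0.01 * rng.normal(size=num_obs)

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(num_active):                   # sparsity level assumed known here
    k = int(np.argmax(np.abs(A.T @ residual)))
    support.append(k)
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ x_s

print("true active users:    ", sorted(active.tolist()))
print("detected active users:", sorted(support))
```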

    Visible Light Communication (VLC)

    Visible light communication (VLC) using light-emitting diodes (LEDs) or laser diodes (LDs) has been envisioned as one of the key enabling technologies for 6G and Internet of Things (IoT) systems, owing to its appealing advantages, including abundant and unregulated spectrum resources, freedom from electromagnetic interference (EMI) radiation, and high security. However, despite these advantages, VLC faces several technical challenges, such as the limited bandwidth and severe nonlinearity of opto-electronic devices, link blockage, and user mobility. Therefore, significant efforts are needed from the global VLC community to develop VLC technology further. This Special Issue, “Visible Light Communication (VLC)”, provides an opportunity for researchers worldwide to share new ideas and cutting-edge techniques that address the above-mentioned challenges. The 16 papers published in this Special Issue represent the fascinating progress of VLC in various contexts, including general indoor and underwater scenarios, and the emerging application of machine learning/artificial intelligence (ML/AI) techniques in VLC.

    Hybrid generalized non-orthogonal multiple access for the 5G wireless networks.

    Master of Science in Computer Engineering. University of KwaZulu-Natal, Durban, 2018.

    The deployment of 5G networks will bring increased capacity, higher spectral efficiency, low latency and massive connectivity to wireless networks. These networks will still face challenges of resource and power optimization, improving spectrum efficiency and energy optimization, among others. Furthermore, standardized technologies to mitigate these challenges still need to be developed and are a challenge in themselves. In the current predecessor LTE-A networks, orthogonal frequency division multiple access (OFDMA) is used as the baseline multiple access scheme. It serves users orthogonally in either time or frequency to alleviate narrowband interference and impulse noise. The spectrum limitations of orthogonal multiple access (OMA) schemes have motivated the development of non-orthogonal multiple access (NOMA) schemes that enable 5G networks to achieve high spectral efficiency and high data rates. NOMA schemes non-orthogonally co-multiplex different users on the same resource elements (REs) (i.e. time-frequency resources, OFDMA subcarriers, or spreading codes) in the power domain (PD) or code domain (CD) at the transmitter, and separate them at the receiver by applying multi-user detection (MUD) algorithms. The currently developed NOMA schemes, referred to as generalized NOMA (G-NOMA) technologies, include Interleave Division Multiple Access (IDMA), Sparse Code Multiple Access (SCMA), Low-Density Spreading Multiple Access (LDSMA), Multi-User Shared Access (MUSA) and Pattern Division Multiple Access (PDMA). These protocols are still under refinement, and their performance and applicability have not been thoroughly investigated. The first part of this work undertakes a thorough investigation and analysis of the performance of the existing G-NOMA schemes and their applicability. In general, G-NOMA schemes achieve overloading through non-orthogonal spectrum resource allocation, which enables massive connectivity of users and devices and improves system spectral efficiency. Like any other technology, G-NOMA schemes need to be improved to further harvest their benefits in 5G networks, leading to the requirement for hybrid G-NOMA (HG-NOMA) schemes. The second part of this work develops an HG-NOMA scheme to alleviate the 5G challenges of resource allocation, inter- and cross-tier interference management and energy efficiency. Specifically, it develops and investigates the performance of an energy-efficient HG-NOMA resource allocation scheme for a two-tier heterogeneous network that alleviates cross-tier interference and improves system throughput via spectrum resource optimization. By considering the combinatorial problem of resource pattern assignment and power allocation, the HG-NOMA scheme enables a new transmission policy that allows more than two macro-user equipments (MUEs) and femto-user equipments (FUEs) to be co-multiplexed on the same time-frequency RE, increasing spectral efficiency. The performance of the developed model is shown to be superior to that of the PD-NOMA and OFDMA schemes.
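    For readers unfamiliar with how power-domain NOMA co-multiplexes users on a single resource element, the toy sketch below superposes two BPSK users with unequal power and separates them with successive interference cancellation (SIC) at the stronger receiver. The power split, channel qualities and BPSK signalling are illustrative assumptions and do not reproduce the HG-NOMA scheme developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
p_far, p_near = 0.8, 0.2          # power split: the weak (far) user gets more power

# BPSK symbols for the two co-multiplexed users
s_far = 2 * rng.integers(0, 2, n) - 1
s_near = 2 * rng.integers(0, 2, n) - 1
x = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near   # superposition coding

# Simple AWGN links (the near user enjoys the better SNR)
y_far = x + 0.4 * rng.normal(size=n)
y_near = x + 0.1 * rng.normal(size=n)

# Far user: decode its own symbols directly, treating the near user as noise
s_far_hat_at_far = np.sign(y_far)

# Near user (SIC): decode and cancel the far user's signal, then decode its own
s_far_hat = np.sign(y_near)
y_sic = y_near - np.sqrt(p_far) * s_far_hat
s_near_hat = np.sign(y_sic)

print("far-user BER :", np.mean(s_far_hat_at_far != s_far))
print("near-user BER:", np.mean(s_near_hat != s_near))
```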

    The Interplay between Computation and Communication

    In this thesis, a comprehensive exploration of the integration of communication and learning within the massive Internet of Things (mIoT) is undertaken. Addressing one of the fundamental challenges of mIoT, where traditional channel estimation methods prove inefficient due to high device density and short packets, a novel approach leveraging unsupervised machine learning for joint channel estimation and signal detection is first proposed. This technique applies Gaussian mixture model (GMM) clustering to the received signals, thereby reducing the need for exhaustive channel estimation, decreasing the number of required pilot symbols, and improving symbol error rate (SER) performance. Building on this foundation, an innovative method is proposed that eliminates the need for pilot symbols entirely: by coupling GMM clustering with rotationally invariant (RI) coding, the model maintains robust performance against the effects of channel rotation, thereby improving the efficiency of mIoT systems. The research then delves further into integrating communication and learning in mIoT, focusing on federated learning (FL) convergence under error-prone conditions. It carefully analyzes the impact of factors such as block length, coding rate, and signal-to-noise ratio on FL accuracy and convergence, and proposes a novel approach to address communication errors in which the base station (BS) uses memory to cache key parameters. The thesis closes with an extensive simulation of a real-world mIoT system that integrates the previously developed techniques, including the proposed channel estimation method, RI coding, and the introduced FL model. It notably demonstrates that optimal learning outcomes can be achieved even without stringent communication reliability. Thus, this work not only achieves comparable or superior performance to traditional methods with fewer pilot symbols but also provides valuable insights for optimizing mIoT systems within the FL framework.
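    To make the clustering idea concrete, the sketch below groups received QPSK symbols with scikit-learn's GaussianMixture and maps each cluster to a constellation point using a handful of pilots, loosely mirroring the reduced-pilot detection described above. The constellation, pilot count and flat-fading channel are assumptions of this demo, not the exact algorithm of the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
n_sym, n_pilot = 2000, 8                      # illustrative frame sizes

# Flat-fading channel rotates and scales the whole constellation
h = 0.9 * np.exp(1j * 0.6)
tx_idx = rng.integers(0, 4, n_sym)
tx_idx[:n_pilot] = np.arange(n_pilot) % 4     # a few known pilot symbols up front
rx = h * qpsk[tx_idx] + 0.08 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

# Fit a 4-component GMM to the received points (no explicit channel estimate)
pts = np.column_stack([rx.real, rx.imag])
gmm = GaussianMixture(n_components=4, covariance_type="spherical",
                      random_state=0).fit(pts)
labels = gmm.predict(pts)

# Use the pilots to map each GMM cluster to a transmitted constellation point
cluster_to_sym = {}
for c in range(4):
    pilot_hits = tx_idx[:n_pilot][labels[:n_pilot] == c]
    if pilot_hits.size:
        cluster_to_sym[c] = int(np.bincount(pilot_hits).argmax())

detected = np.array([cluster_to_sym.get(int(c), -1) for c in labels])
print("symbol error rate:", np.mean(detected[n_pilot:] != tx_idx[n_pilot:]))
```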

    Joint Communication and Positioning based on Channel Estimation

    Mobile wireless communication systems have rapidly and globally become an integral part of everyday life and have brought forth the internet of things. With the evolution of mobile wireless communication systems, joint communication and positioning becomes increasingly important and enables a growing range of new applications. Humanity has grown used to having access to multimedia data anywhere and at any time, and thereby to employing all sorts of location-based services. Global navigation satellite systems can provide highly accurate positioning results whenever a line-of-sight path is available. Unfortunately, harsh physical environments are known to degrade the performance of existing systems. Therefore, ground-based systems can assist the position estimation gained by satellite systems. Determining positioning-relevant information from a unified signal structure designed for a ground-based joint communication and positioning system can either complement existing systems or replace them. Such a system framework promises to enhance existing systems by enabling highly accurate and reliable positioning performance and increased coverage. Furthermore, the unified signal structure yields synergetic effects. In this thesis, I propose a channel estimation-based joint communication and positioning system that employs a virtual training matrix. This matrix consists of a relatively small training percentage plus the detected communication data itself. Via a core semi-blind estimation approach, the already detected data are iteratively included to accurately determine the positioning-relevant parameters, by mutually exchanging information between the communication part and the positioning part of the receiver. Synergy is created. I propose a generalized system framework, suitable for use in conjunction with various communication system techniques. The most critical positioning-relevant parameter, the time-of-arrival, is part of a physical multipath parameter vector. Estimating the time-of-arrival therefore means solving a global, non-linear, multi-dimensional optimization problem; more precisely, it means solving the so-called inverse problem. I thoroughly assess various problem formulations and variations thereof, including several different measurements and estimation algorithms. A significant challenge in solving the inverse problem to determine the positioning-relevant path parameters is imposed by realistic multipath channels. Most parameter estimation algorithms have proven to perform well in moderate multipath environments, and it is mathematically straightforward to optimize this performance in the sense that the number of observations has to exceed the number of parameters to be estimated. The typical parameter estimation problem, however, is based on channel estimates and assumes that so-called snapshot measurements are available. In the case of realistic channel models, the number of observations does not necessarily exceed the number of unknowns. In this thesis, I overcome this problem by proposing a method that reduces the problem dimensionality via joint model order selection and parameter estimation. Employing the approximated and estimated parameter covariance matrix inherently constrains the estimation problem’s model order selection to result in optimal parameter estimation performance and hence optimal positioning performance.
To compare these results with the optimally achievable solution, I introduce a focused order-related lower bound in this thesis. Additionally, I use soft information as a weighting matrix to enhance the positioning algorithm’s performance. To demonstrate the feasibility and the interplay of the proposed system components, I utilize a prototype system based on multi-layer interleave division multiple access. The proposed system framework and the investigated techniques can be employed in multiple existing systems or form the basis of future joint communication and positioning systems. The assessed estimation algorithms are transferable to all kinds of joint communication and positioning system designs. This thesis demonstrates their capability to, in principle, successfully cope with challenging estimation problems stemming from harsh physical environments.
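    As a toy version of the inverse problem described above, the sketch below fits the delays and complex gains of a two-path channel to a noisy frequency-response measurement with nonlinear least squares, and takes the earliest fitted delay as the time-of-arrival. The path count, subcarrier grid, optimizer and initial guess are assumptions of this demo; the semi-blind, model-order-selecting estimator developed in the thesis is considerably more involved.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
N = 64
f = np.arange(N) * 0.25                  # subcarrier frequencies in MHz (illustrative)

def channel(delays_us, gains):
    """Frequency response of a sparse multipath channel (delays in microseconds)."""
    return np.sum(gains[:, None] * np.exp(-2j * np.pi * np.outer(delays_us, f)), axis=0)

# "Measured" channel estimate: two paths plus noise
true_delays = np.array([0.4, 1.1])                   # microseconds
true_gains = np.array([1.0 + 0.2j, 0.5 - 0.3j])
H_meas = channel(true_delays, true_gains) \
         + 0.02 * (rng.normal(size=N) + 1j * rng.normal(size=N))

def residual(p):
    """Stack real and imaginary parts so the real-valued solver sees real residuals."""
    delays, gains = p[:2], p[2:4] + 1j * p[4:6]
    r = channel(delays, gains) - H_meas
    return np.concatenate([r.real, r.imag])

p0 = np.array([0.3, 1.0, 1.0, 0.5, 0.0, 0.0])  # rough initial guess: delays, Re/Im gains
fit = least_squares(residual, p0)
toa = fit.x[:2].min()
print("estimated time-of-arrival [us]: %.3f" % toa)
```

    In a realistic setting the number of paths is unknown, which is exactly where the joint model order selection and parameter estimation discussed above becomes necessary.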