
    Orthogonal frequency division multiplexing multiple-input multiple-output automotive radar with novel signal processing algorithms

    Advanced driver assistance systems that actively assist the driver based on environment perception have advanced significantly in recent years. Along with this development, autonomous driving has become a major research topic, aiming ultimately at the development of fully automated, driverless vehicles. Since such applications rely on environment perception, their ever-increasing sophistication imposes growing demands on environmental sensors. Specifically, the need for reliable environment sensing necessitates the development of more sophisticated, high-performance radar sensors. A further vital challenge, increased radar interference, arises with the growing market penetration of vehicular radar technology. To address these challenges, novel approaches and radar concepts are required in many respects. As modulation is one of the key factors determining radar performance, research into new modulation schemes for automotive radar is essential. A topic that has emerged in recent years is radar operating with digitally generated waveforms based on orthogonal frequency division multiplexing (OFDM). Initially, the use of OFDM for radar was motivated by combining radar with communication via modulation of the radar waveform with communication data. Subsequent works studied the use of OFDM as a modulation scheme in many different radar applications, from adaptive radar processing to synthetic aperture radar. This suggests that the flexibility provided by OFDM-based digital generation of radar waveforms can enable novel radar concepts well suited to future automotive radar systems. This thesis aims to explore the prospects of OFDM as a modulation scheme for high-performance, robust, and adaptive automotive radar. To this end, novel signal processing algorithms and OFDM-based radar concepts are introduced in this work.
The main focus of the thesis is on high-end automotive radar applications, with applicability to real-time implementation a primary concern. The first part of this thesis focuses on signal processing algorithms for distance-velocity estimation. As a foundation for the algorithms presented in this thesis, a novel and rigorous signal model for OFDM radar is introduced. Based on this signal model, the limitations of state-of-the-art OFDM radar signal processing are pointed out. To overcome these limitations, we propose two novel signal processing algorithms that build upon the conventional processing and extend it with more sophisticated modeling of the radar signal. The first method, named all-cell Doppler compensation (ACDC), overcomes the Doppler sensitivity problem of OFDM radar. The core idea of this algorithm is the scenario-independent correction of Doppler shifts for the entire measurement signal. Since the Doppler effect is a major concern for OFDM radar and influences the radar parametrization, its complete compensation opens new perspectives for OFDM radar: it not only achieves improved, Doppler-independent performance but also enables more favorable system parametrization. The second distance-velocity estimation algorithm introduced in this thesis addresses the issue of range and Doppler frequency migration due to the target's motion during the measurement. For conventional radar signal processing, these migration effects set an upper limit on the simultaneously achievable distance and velocity resolution. The proposed method, named all-cell migration compensation (ACMC), extends the underlying OFDM radar signal model to account for the target motion. As a result, the effect of migration is compensated implicitly for the entire radar measurement, which leads to improved distance and velocity resolution.
Simulations show the effectiveness of the proposed algorithms in overcoming the two major limitations of conventional OFDM radar signal processing. As multiple-input multiple-output (MIMO) radar is a well-established technology for improving direction-of-arrival (DOA) estimation, the second part of this work studies multiplexing methods for OFDM radar that enable the simultaneous use of multiple transmit antennas for MIMO radar processing. After discussing the drawbacks of known multiplexing methods, we introduce two advanced multiplexing schemes for OFDM-MIMO radar based on non-equidistant interleaving of OFDM subcarriers. These multiplexing approaches exploit the multicarrier structure of OFDM to generate orthogonal waveforms that enable simultaneous operation of multiple MIMO channels occupying the same bandwidth. The primary advantage of these methods is that, despite multiplexing, they maintain all original radar parameters (resolution and unambiguous range in distance and velocity) for each individual MIMO channel. To obtain favorable interleaving patterns with low sidelobes, we propose an optimization approach based on genetic algorithms. Furthermore, to overcome the drawback of increased sidelobes due to subcarrier interleaving, we study the applicability of sparse processing methods for distance-velocity estimation from measurements of non-equidistantly interleaved OFDM-MIMO radar, and introduce a novel sparsity-based frequency estimation algorithm designed for this purpose. The third topic addressed in this work is the robustness of OFDM radar to interference from other radar sensors. In this part of the work, we study the interference robustness of OFDM radar and propose novel interference mitigation techniques. The first interference suppression algorithm we introduce exploits the robustness of OFDM to narrowband interference by excluding from evaluation those subcarriers strongly corrupted by interference.
To avoid an increase in sidelobes due to the missing subcarriers, their values are reconstructed from neighboring ones using linear prediction methods. As a further measure for increasing interference robustness in a more universal manner, we propose extending OFDM radar with cognitive features. We introduce the general concept of a cognitive radar capable of adapting to the current spectral situation to avoid interference. Our work focuses mainly on waveform adaptation techniques; we propose adaptation methods that allow dynamic interference avoidance without adversely affecting the estimation performance. The final part of this work focuses on the prototypical implementation of OFDM-MIMO radar. With the constructed prototype, the feasibility of OFDM for high-performance radar applications is demonstrated. Furthermore, the algorithms presented in this thesis are validated experimentally with this radar prototype. The measurements confirm the applicability of the proposed algorithms and concepts for real-world automotive radar implementations.
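The conventional OFDM radar processing that these algorithms build on can be illustrated with a short sketch. This is not code from the thesis; it is the standard spectral-division approach with illustrative parameters: divide the received subcarrier symbols element-wise by the transmitted ones, then take an IFFT across subcarriers for range and an FFT across symbols for Doppler.

```python
import numpy as np

Nc, Ns = 64, 32              # subcarriers, OFDM symbols (illustrative)
df, Tsym = 150e3, 8e-6       # subcarrier spacing [Hz], symbol duration [s]
c0, fc = 3e8, 77e9           # speed of light [m/s], carrier frequency [Hz]

rng = np.random.default_rng(0)
tx = np.exp(1j * 2 * np.pi * rng.random((Nc, Ns)))   # random-phase payload

# Single point target at range R [m] with radial velocity v [m/s]
R, v = 30.0, 20.0
tau = 2 * R / c0                  # round-trip delay
fD = 2 * v * fc / c0              # Doppler shift
n = np.arange(Nc)[:, None]        # subcarrier index
m = np.arange(Ns)[None, :]        # symbol index
rx = tx * np.exp(-1j * 2 * np.pi * n * df * tau) \
        * np.exp(1j * 2 * np.pi * fD * m * Tsym)

# Conventional processing: element-wise spectral division, then an IFFT
# over subcarriers (range) and an FFT over symbols (Doppler/velocity)
F = rx / tx
range_doppler = np.fft.fft(np.fft.ifft(F, axis=0), axis=1)
r_bin, d_bin = np.unravel_index(np.argmax(np.abs(range_doppler)),
                                range_doppler.shape)
r_est = r_bin * c0 / (2 * Nc * df)   # range-bin width is c0/(2*Nc*df)
```

The ACDC and ACMC algorithms replace this simple per-cell model with more elaborate ones; the sketch only shows the baseline they improve upon.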

    Integrated Sensing and Communication Signals Toward 5G-A and 6G: A Survey

    Integrated sensing and communication (ISAC) offers efficient spectrum utilization and low hardware cost. It is promising for implementation in fifth-generation-advanced (5G-A) and sixth-generation (6G) mobile communication systems, with the potential to serve intelligent applications requiring both communication and highly accurate sensing capabilities. As the fundamental component of ISAC, the ISAC signal directly impacts sensing and communication performance. This article systematically reviews the literature on ISAC signals from the perspective of mobile communication systems, covering ISAC signal design, ISAC signal processing algorithms, and ISAC signal optimization. We first review ISAC signal design based on 5G, 5G-A, and 6G mobile communication systems. Then, radar signal processing methods for ISAC signals are reviewed, mainly the channel-information-matrix method, the spectrum-lines-estimator method, and the super-resolution method. In terms of signal optimization, we summarize peak-to-average power ratio (PAPR) optimization, interference management, and adaptive signal optimization for ISAC signals. This article may provide guidelines for the research of ISAC signals in 5G-A and 6G mobile communication systems.
    Comment: 25 pages, 13 figures, 8 tables. IEEE Internet of Things Journal, 202
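Of the optimization targets listed, PAPR is easy to make concrete. A minimal sketch, not from the article and with illustrative parameters, computes the PAPR of a random-QPSK OFDM symbol and applies simple amplitude clipping, one classic PAPR-reduction measure:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256  # number of subcarriers (illustrative)

# Random QPSK data on the subcarriers of one OFDM symbol
sym = (rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)   # time-domain signal, unit average power

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, ratio_db=3.0):
    """Clip the amplitude to ratio_db above the RMS level."""
    a = 10 ** (ratio_db / 20) * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    return np.where(mag > a, a * x / mag, x)

x_clipped = clip(x)  # lower PAPR, at the cost of in-band distortion
```

Clipping trades PAPR against distortion and out-of-band emissions, which is why the more sophisticated optimization techniques the survey covers exist.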

    AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS

    Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd-Generation Partnership Project (3GPP). Due to its support for massive machine-type communication (mMTC) and different IoT use cases with rigorous standards in terms of connection, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the research community. However, as the capacity needs of various IoT use cases expand, the numerous functionalities of the LTE evolved packet core (EPC) system may become overburdened and suboptimal. Several research efforts are currently in progress to address these challenges. As a result, an overview of these efforts is discussed, with a specific focus on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, 5G new radio (NR) coexistence with NB-IoT, and feasible architectural deployment schemes of NB-IoT with cellular networks. This thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the physical (PHY) and medium access control (MAC) layers of NB-IoT, with a focus on 5G. The numerous drawbacks that come with simulating these systems are explored. The enabling market for NB-IoT, the benefits for a few use cases, and the potential critical challenges associated with their deployment are all highlighted. Fortunately, the cyclic-prefix orthogonal frequency division multiplexing (CP-OFDM) waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC.
As a result, the coexistence of 5G NR and NB-IoT must be manageably orthogonal (or quasi-orthogonal) to minimize mutual interference, which limits the degrees of freedom in the overall waveform design. Consequently, 5G coexistence with NB-IoT will introduce a new interference challenge, distinct from that of the legacy network, even though the NR's coexistence with NB-IoT is believed to improve network capacity, expand the coverage of the user data rate, and improve communication robustness through frequency reuse. Interference challenges may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Various existing interference mitigation solutions either add to the network's overhead, computational complexity, and delay, or are hampered by low data rate and coverage. These algorithms are unsuitable for an NB-IoT network owing to the low-complexity nature of NB-IoT devices. As a result, a D2D-communication-based interference-control technique becomes an effective strategy for addressing this problem. This thesis used D2D communication to decrease the network bottleneck in dense 5G NB-IoT networks prone to interference. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourable cell-edge NUEs. To reduce the algorithm's computational complexity and the interference power, the scheme divides the optimization problem into three sub-problems. First, in an orthogonal deployment technique using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control. Second, a bisection search approach is used to find the power control that maximizes the network sum rate, and third, the Hungarian algorithm is used to build a maximum bipartite matching strategy that chooses the optimal pairing pattern between the sets of NUEs and the D2D pairs.
According to the numerical results, the proposed approach improves the D2D sum rate and overall network SINR of the 5G NB-IoT system. The maximum power constraint of the D2D pair, the D2D pair's location, the pico-base-station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all impact the D2D pair's performance. The simulation results show SINR performance 28.35%, 31.33%, and 39% higher than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher than the ARSAD, RRA, and DCORA when the numbers of NUEs and D2D pairs are equal. Likewise, a D2D sum-rate increase of 9.23%, 11.26%, and 13.92% over the ARSAD, DCORA, and RRA is achieved when the number of NUEs is twice the number of D2D pairs, and of 1.18%, 4.64%, and 15.93% over the ARSAD, RRA, and DCORA, respectively, with equal numbers of NUEs and D2D pairs. The results demonstrate the efficacy of the proposed scheme. The thesis also addressed the problem in which the cell-edge NUE's QoS is undermined by challenges such as long-distance transmission, delays, low bandwidth utilization, and high system overhead that affect 5G NB-IoT network performance. In this case, most cell-edge NUEs boost their transmit power to maximize network throughput. Integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases the system's spectral efficiency but also the interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposed an interference-aware D2D relaying strategy that improves the QoS of a cell-edge NUE in 5G NB-IoT to achieve optimum system performance. The Lagrangian-dual technique is used to optimize the transmit power of the cell-edge NUE to the relay based on the average interference power constraint, while the relay to the NB-IoT base station (NBS) employs a fixed transmit power.
To choose an optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE's data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. The best-harmonic-mean, best-worst, and half-duplex relay selection strategies, along with a direct D2D communication scheme, were also studied. The simulation results reveal that the Max-SINR selection scheme outperforms all other selection schemes except the D2D communication scheme, owing to the high channel gain between the two communicating devices. The proposed algorithm achieves 21.27% SINR performance, nearly identical to the half-duplex scheme, but outperforms the best-worst and harmonic selection techniques by 81.27% and 40.29%, respectively. As the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic and half-duplex techniques, respectively. Finally, the thesis presents future research directions on interference control, along with open research directions on PHY and MAC properties and the SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis presented in Chapter 2, to encourage further study of 5G NB-IoT.
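The third sub-problem above, choosing the optimal pairing between NUEs and D2D pairs, is a maximum-weight bipartite matching. A small illustrative sketch, with hypothetical sum-rate values and an exhaustive search standing in for the Hungarian algorithm at this toy size:

```python
import numpy as np
from itertools import permutations

# Hypothetical sum-rate matrix: rate[i, j] is the network sum rate obtained
# when D2D pair i reuses the uplink channel of NUE j (values illustrative)
rng = np.random.default_rng(2)
rate = rng.uniform(1.0, 5.0, size=(3, 3))

def best_matching(rate):
    """Exhaustive maximum-weight bipartite matching (a toy-size stand-in
    for the Hungarian algorithm, which scales as O(n^3) instead of O(n!))."""
    n = rate.shape[0]
    best_total, best_perm = -np.inf, None
    for perm in permutations(range(n)):
        total = sum(rate[i, j] for i, j in enumerate(perm))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

best_total, pairing = best_matching(rate)  # pairing[i] = channel of D2D pair i
```

In practice a proper Hungarian-algorithm implementation would be used; the brute-force version only makes the objective of the matching step explicit.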

    Learning Robust Radio Frequency Fingerprints Using Deep Convolutional Neural Networks

    Radio Frequency Fingerprinting (RFF) techniques, which attribute uniquely identifiable signal distortions to emitters via Machine Learning (ML) classifiers, are limited by fingerprint variability under different operational conditions. First, this work studied the effect of the frequency channel on typical RFF techniques. Performance characterization using the multi-class Matthews Correlation Coefficient (MCC) revealed that using frequency channels other than those used to train the models degrades the MCC to under 0.05 (near random guessing), indicating that single-channel models are inadequate for realistic operation. Second, this work presented a novel way of studying fingerprint variability through Fingerprint Extraction through Distortion Reconstruction (FEDR), a neural-network-based approach for quantifying signal distortions in a relative distortion latent space. Coupled with a dense network, FEDR fingerprints were evaluated against common RFF techniques for up to 100 unseen classes, where FEDR achieved the best performance, with MCC ranging from 0.945 (5 classes) to 0.746 (100 classes), using 73% fewer training parameters than the next-best technique.
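The multi-class MCC used for this evaluation can be computed directly from the confusion matrix (Gorodkin's R_K statistic). A minimal sketch, independent of the paper's models:

```python
import numpy as np

def multiclass_mcc(y_true, y_pred):
    """Multi-class Matthews Correlation Coefficient (Gorodkin's R_K),
    computed from the confusion matrix."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    C = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        C[idx[t], idx[p]] += 1
    s = C.sum()            # total samples
    c = np.trace(C)        # correctly classified
    t_k = C.sum(axis=1)    # true count per class
    p_k = C.sum(axis=0)    # predicted count per class
    num = c * s - t_k @ p_k
    den = np.sqrt((s ** 2 - p_k @ p_k) * (s ** 2 - t_k @ t_k))
    return num / den if den else 0.0
```

The statistic is 1.0 for perfect prediction, near 0 for random guessing (hence the 0.05 threshold quoted above), and is robust to class imbalance, which is why it suits many-class fingerprinting benchmarks.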

    Novel evaluation framework for sensing spread spectrum in cognitive radio

    Cognitive radio networks are designed to address the optimization demands arising from restricted spectrum availability. A review of the existing literature on spectrum sensing shows that there is still considerable scope for improvement. Therefore, this paper introduces an efficient computational framework for evaluating the effectiveness of the spread-spectrum concept in the context of a cognitive radio network in a more scalable and granular way. The proposed method introduces a dual-hypothesis test using a distinct set of dependable parameters to emphasize detecting the energy of a low-quality signal over the noise. The proposed evaluation framework is benchmarked using a statistical analysis method not present in any existing approach to spread-spectrum sensing. The simulated outcomes of the study show that the proposed system offers a significantly better probability of detection than current systems, using a simplified evaluation scheme with multiple test parameters.
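The dual-hypothesis test at the core of such frameworks can be illustrated with a basic energy detector, not the paper's exact formulation and with illustrative parameters: decide H1 (signal present) when the average received energy exceeds a threshold set from the noise variance, otherwise decide H0 (noise only).

```python
import numpy as np

# Basic energy detector for spectrum sensing: H0 = noise only,
# H1 = signal + noise; compare average energy against a threshold.
rng = np.random.default_rng(3)
N, sigma2 = 1000, 1.0
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                               + 1j * rng.standard_normal(N))
signal = 0.8 * np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))  # tone, |s|^2 = 0.64

def energy_statistic(x):
    """Average received energy over the observation window."""
    return np.mean(np.abs(x) ** 2)

# Simple margin above the noise floor; a practical detector would set this
# from a target false-alarm probability via the Q-function
thresh = sigma2 * (1 + 5 / np.sqrt(N))

decide_h1 = energy_statistic(noise + signal) > thresh
decide_h0 = energy_statistic(noise) > thresh
```

The probability-of-detection versus false-alarm trade-off that the paper benchmarks corresponds to sweeping this threshold.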

    Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT.

    Cognitive Radio Internet of Things (CR-IoT) has revolutionized almost every field of life and reshaped the technological world. Several tiny devices are seamlessly connected in a CR-IoT network to perform various tasks in many applications. Nevertheless, CR-IoT suffers from malicious attacks that pulverize communication and perturb network performance. Therefore, it has recently been envisaged to introduce higher-level Artificial Intelligence (AI) by incorporating Self-Awareness (SA) capabilities into CR-IoT objects, facilitating CR-IoT networks in autonomously establishing secure transmission against vicious attacks. In this context, sub-band information from the Orthogonal Frequency Division Multiplexing (OFDM) modulated transmission in the spectrum has been extracted from the radio device receiver terminal, and a generalized state vector (GS) is formed containing low-dimension in-phase and quadrature components. Accordingly, a probabilistic method based on learning a switching Dynamic Bayesian Network (DBN) from OFDM transmission with no abnormalities has been proposed to statistically model signal behaviors inside the CR-IoT spectrum. A Bayesian filter, the Markov Jump Particle Filter (MJPF), is implemented to perform state estimation and capture malicious attacks. Subsequently, a GS containing a higher number of subcarriers has been investigated. In this connection, a variational autoencoder (VAE) is used as a deep learning technique to extract features from high-dimension radio signals into a low-dimension latent space z, and the DBN is learned based on the GS containing latent-space data. Afterward, to perform state estimation and capture abnormalities in the spectrum, an Adapted Markov Jump Particle Filter (A-MJPF) is deployed. The proposed method can capture anomalies that appear due to either jammer attacks in transmission or cognitive devices in a network experiencing different transmission sources that have not been observed previously.
The performance is assessed using the receiver