
    A Survey of Blind Modulation Classification Techniques for OFDM Signals

    Blind modulation classification (MC) is an integral part of designing an adaptive or intelligent transceiver for future wireless communications. Blind MC has several applications in the adaptive and automated systems of sixth-generation (6G) communications to improve spectral and power efficiency and reduce latency, and it will become an integral part of intelligent software-defined radios (SDRs) for future communication. In this paper, we survey various MC techniques for orthogonal frequency division multiplexing (OFDM) signals in a systematic way. We focus on the most widely used statistical and machine learning (ML) models and emphasize their advantages and limitations. The statistics-based blind MC methods include likelihood-based (LB), maximum a posteriori (MAP), and feature-based (FB) methods. The ML-based automated MC methods include k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM) based methods. This survey will help the reader understand the main characteristics of each technique, along with their advantages and disadvantages. We have also simulated some of the primary methods, i.e., statistical- and ML-based algorithms, under various constraints, which allows a fair comparison among the different methodologies. The overall system performance in terms of bit error rate (BER) in the presence of MC is also provided. We also survey some practical experimental work carried out with National Instruments hardware in an indoor propagation environment. Finally, open problems and possible directions for blind MC research are briefly discussed.
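    As a concrete illustration of the feature-based (FB) branch mentioned in this abstract, the sketch below computes higher-order cumulant features from noisy baseband symbols and feeds them to an SVM classifier. The modulation set (BPSK/QPSK/16-QAM), the 10 dB SNR, the burst length, and the feature choice are illustrative assumptions, not the paper's simulation setup.

```python
# Hypothetical sketch of cumulant-based FB classification with an SVM.
# Modulations, SNR, and burst length are assumed for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

CONSTELLATIONS = {
    "BPSK": np.array([-1.0, 1.0], dtype=complex),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
    "16QAM": np.array([a + 1j * b for a in (-3, -1, 1, 3)
                       for b in (-3, -1, 1, 3)]) / np.sqrt(10),
}

def random_burst(mod, n_sym=1024, snr_db=10.0):
    """Draw unit-power symbols of the given modulation and add AWGN."""
    syms = rng.choice(CONSTELLATIONS[mod], size=n_sym)
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sym)
                                      + 1j * rng.standard_normal(n_sym))
    return syms + noise

def cumulant_features(x):
    """Magnitudes of the classic C20/C40/C42 higher-order cumulants."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))          # power-normalize
    m20, m21 = np.mean(x ** 2), np.mean(np.abs(x) ** 2)
    m40, m42 = np.mean(x ** 4), np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return [np.abs(m20), np.abs(c40), np.abs(c42)]

X, y = [], []
for label, mod in enumerate(CONSTELLATIONS):
    for _ in range(200):                              # 200 bursts per class
        X.append(cumulant_features(random_burst(mod)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

    Constant-modulus constellations (BPSK, QPSK) and multi-level QAM separate cleanly on the |C40| and |C42| features, which is one reason cumulants remain a popular FB feature set; the same features could equally be fed to the KNN or DT classifiers named in the abstract.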

    Temporal phase unwrapping using deep learning

    The multi-frequency temporal phase unwrapping (MF-TPU) method, as a classical phase unwrapping algorithm for fringe projection profilometry (FPP), is capable of eliminating phase ambiguities even in the presence of surface discontinuities or spatially isolated objects. In the simplest and most efficient case, two sets of 3-step phase-shifting fringe patterns are used: the high-frequency set is for 3D measurement, and the unit-frequency set is for unwrapping the phase obtained from the high-frequency set. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that the phase can be successfully unwrapped without fringe-order errors. Consequently, in order to guarantee a reasonable unwrapping success rate, the fringe number (or period number) of the high-frequency fringe patterns is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap the phase at higher frequencies, but at the expense of a prolonged pattern sequence. Inspired by recent successes of deep learning techniques in computer vision and computational imaging, in this work we report that deep neural networks can learn to perform TPU after appropriate training, an approach termed deep learning-based temporal phase unwrapping (DL-TPU), which can substantially improve unwrapping reliability compared with MF-TPU even in the presence of different types of error sources, e.g., intensity noise, low fringe modulation, and projector nonlinearity. We further demonstrate experimentally, for the first time to our knowledge, that the high-frequency phase obtained from 64-period 3-step phase-shifting fringe patterns can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU.
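    For readers unfamiliar with the MF-TPU baseline that DL-TPU is compared against, the following is a minimal numerical sketch (not the paper's neural network) of two-frequency temporal phase unwrapping: 3-step phase shifting retrieves the wrapped phases, and the unit-frequency phase supplies the fringe order for the high-frequency phase. The 512-pixel scan line, the 16-fringe high-frequency set, the test profile, and the noise level are assumed values.

```python
# Numerical sketch of the classical two-frequency MF-TPU baseline (not DL-TPU).
# Scan-line size, fringe number, test profile, and noise level are assumed.
import numpy as np

rng = np.random.default_rng(1)
W = 512                                    # pixels along one scan line
f_high = 16                                # fringe number of high-frequency set

# Ground-truth continuous phase, rescaled to stay inside one unit-frequency
# period with a small margin at both ends.
x = np.linspace(0.0, 1.0, W)
phi_true = 2 * np.pi * (x + 0.15 * np.sin(3 * np.pi * x))  # smooth test profile
phi_true -= phi_true.min()
phi_true = 0.1 + (2 * np.pi - 0.2) * phi_true / phi_true.max()

def three_step_wrapped(base_phase, freq, noise=0.01):
    """Simulate 3-step phase-shifted fringes at `freq` fringes and retrieve
    the wrapped phase with the standard N-step least-squares formula."""
    shifts = 2 * np.pi * np.arange(3) / 3
    frames = [0.5 + 0.5 * np.cos(freq * base_phase + d)
              + noise * rng.standard_normal(base_phase.shape) for d in shifts]
    num = sum(I * np.sin(d) for I, d in zip(frames, shifts))
    den = sum(I * np.cos(d) for I, d in zip(frames, shifts))
    return -np.arctan2(num, den)           # wrapped to (-pi, pi]

# The unit-frequency phase spans at most one period, so mod 2*pi recovers it
# without any spatial unwrapping.
phi_unit = np.mod(three_step_wrapped(phi_true, 1), 2 * np.pi)
phi_high_wrapped = three_step_wrapped(phi_true, f_high)

# MF-TPU: fringe order from the scaled unit-frequency phase.
k = np.round((f_high * phi_unit - phi_high_wrapped) / (2 * np.pi))
phi_high = phi_high_wrapped + 2 * np.pi * k  # unwrapped high-frequency phase

print("max unwrapping error (rad):", np.abs(phi_high - f_high * phi_true).max())
```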

    Joint 1D and 2D Neural Networks for Automatic Modulation Recognition

    The digital communication and radar community has recently shown more interest in using data-driven approaches for tasks such as modulation recognition, channel estimation, and distortion correction. In this research, we seek to apply an object detector for parameter estimation to perform waveform separation in the time and frequency domains prior to classification. This enables the full automation of detecting and classifying simultaneously occurring waveforms. We leverage the 1D ResNet implemented by O'Shea et al. in [1] and the YOLO v3 object detector designed by Redmon et al. in [2]. We conducted an in-depth study of the performance of these architectures and integrated the models to perform joint detection and classification. To our knowledge, the present research is the first to study and successfully combine a 1D ResNet classifier and a YOLO v3 object detector to fully automate the process of AMR for parameter estimation, pulse extraction, and waveform classification in non-cooperative scenarios. The overall performance of the joint detector/classifier is 90% at a 10 dB signal-to-noise ratio for 24 digital and analog modulations.
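    As a rough, hypothetical sketch of the 1D-CNN side of such a pipeline, the PyTorch model below stacks residual 1D-convolution blocks over raw I/Q samples and ends in a 24-way classifier. The layer widths, depth, kernel sizes, and 2x1024 input shape are assumptions for illustration and do not reproduce the architecture of [1] or the YOLO v3 detection stage.

```python
# Hypothetical PyTorch sketch of a small 1D residual network over raw I/Q
# samples; sizes and depth are assumptions, not the architecture of [1].
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Two 1D convolutions with an identity skip connection."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)            # residual addition

class IQResNet1D(nn.Module):
    """Stem conv -> stacked residual blocks -> global pooling -> classifier."""
    def __init__(self, n_classes=24, channels=32, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv1d(2, channels, kernel_size=7, padding=3)  # I/Q in
        self.blocks = nn.Sequential(
            *[ResidualBlock1D(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(channels, n_classes))

    def forward(self, x):                            # x: (batch, 2, n_samples)
        return self.head(self.blocks(self.stem(x)))

# Smoke test on a random batch of 8 bursts of 1024 I/Q samples.
model = IQResNet1D()
print(model(torch.randn(8, 2, 1024)).shape)          # torch.Size([8, 24])
```

    In a joint pipeline of this kind, the object detector would first localize each waveform in time and frequency, and the extracted segment would then be resampled to the classifier's fixed input length before being passed to a model like the one above.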