Multi-stage Wireless Signal Identification for Blind Interception Receiver Design
Protection of critical wireless infrastructure from malicious attacks has become increasingly important in recent years, with the widespread deployment of various wireless technologies and dramatic growth in user populations. This brings substantial technical challenges to interception receiver design, which must sense and identify wireless signals that use different transmission technologies. The key requirements for the receiver design are estimation of the signal parameters/features and classification of the modulation scheme. With proper identification results, corresponding signal interception techniques can be developed and further employed to enhance network behaviour analysis and intrusion detection.
In detail, the initial stage of the blind interception receiver design is to identify the signal parameters. In the thesis, two low-complexity approaches are provided to realize the parameter estimation, based on iterative cyclostationary analysis and envelope spectrum estimation, respectively. With the estimated signal parameters, automatic modulation classification (AMC) is performed to automatically identify the modulation schemes of the transmitted signals. A novel approach based on Gaussian Mixture Models (GMM) is presented in Chapter 4, capable of mitigating the negative effects of multipath fading channels. To validate the proposed design, its performance is evaluated in an experimental propagation environment. The results show that the proposed design is capable of performing blind parameter estimation, realizing timing and frequency synchronization, and classifying the modulation schemes with improved performance.
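The envelope-spectrum idea lends itself to a compact illustration. The toy sketch below is my own construction under simplifying assumptions (a return-to-zero amplitude-modulated baseband signal, noiseless), not the thesis's algorithm: the instantaneous power of such a signal has a periodic mean, so its spectrum shows a line at the symbol rate.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1.0                 # normalized sampling rate
sps = 16                 # samples per symbol (unknown to the estimator)
n_sym = 256

# RZ-style amplitude modulation: each symbol occupies the first half
# of its period, so |x|^2 has a periodic mean at the symbol rate.
pulse = np.r_[np.ones(sps // 2), np.zeros(sps // 2)]
amps = rng.choice([0.5, 1.5], size=n_sym)
x = (amps[:, None] * pulse[None, :]).ravel()

# Envelope spectrum: FFT of the instantaneous power, DC removed.
env = np.abs(x) ** 2
env -= env.mean()
spec = np.abs(np.fft.rfft(env))
peak_bin = 1 + np.argmax(spec[1:])        # skip the DC bin

sym_rate_est = peak_bin * fs / len(x)
print(sym_rate_est, fs / sps)             # peak should sit at 1/sps
```

The same line-search principle underlies cyclostationary symbol-rate estimators; with pulse shaping and noise, the peak broadens but remains detectable.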
Recommended from our members
Array Architectures and Physical Layer Design for Millimeter-Wave Communications Beyond 5G
Ever increasing demands in mobile data rates have resulted in the exploration of millimeter-wave (mmW) frequencies for the next generation (5G) of wireless networks. Communication at mmW frequencies presents two key challenges. First, high propagation loss requires base stations (BSs) and user equipment (UEs) to use a large number of antennas and narrow beams to close the link with sufficient received signal power. Consequently, communication using narrow beams creates a new challenge in channel estimation and link establishment based on fine angular probing. Current mmW systems use analog phased arrays that can probe only one angle at a time, which results in high latency during link establishment and channel tracking. It is desirable to design low-latency beam training by exploring both physical layer designs and array architectures that could replace current 5G approaches and pave the way to communications in the higher mmW bands and the sub-THz region, where larger antenna arrays and communication bandwidths can be exploited. To this end, we propose novel signal processing techniques exploiting unique properties of the mmW channel, and show their advantages over conventional approaches theoretically, in simulation, and in experiments. Second, we explore different array architecture designs and analyze their trade-offs among spectral efficiency, power consumption, and area. For a comprehensive comparison, we have developed a methodology for the optimal design of system parameters for different array architecture candidates based on a spectral efficiency target, and use these parameters to estimate array area and power consumption based on circuits reported in the literature.
We show that hybrid analog and digital architectures have severe scalability concerns in radio frequency signal distribution as array size and spatial multiplexing levels increase, while fully-digital array architectures have the best performance and power/area trade-offs. The developed approaches are based on cross-disciplinary research that combines innovation in model-based signal processing, machine learning, and radio hardware. This work is the first to apply compressive sensing (CS), a signal processing tool that exploits the sparsity of the mmW channel model, to accelerate beam training in mmW cellular systems. The algorithm is designed to address practical issues, including the requirement of cell discovery and synchronization, which involves estimation of the angular channel together with carrier frequency and timing offsets. We have analyzed the algorithm's performance in a 5G-compliant simulation and showed that an order-of-magnitude saving is achieved in initial access latency for the desired channel estimation accuracy. Moreover, we are the first to develop and implement a neural-network-assisted compressive beam alignment to deal with hardware impairments in mmW radios. We have used a 60 GHz mmW testbed to perform experiments and show that the neural network approach enhances the alignment rate compared to CS. To further accelerate beam training, we propose novel frequency-selective probing beams using the true-time-delay (TTD) analog array architecture. Our approach utilizes different subcarriers to scan different directions and achieves single-shot beam alignment, the fastest approach reported to date. Our comprehensive analysis of different array architectures and exploration of emerging architectures enabled us to develop approaches for initial access and channel estimation in mmW systems that are an order of magnitude faster and more energy efficient.
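As a rough illustration of the compressive beam-training principle (not the system above, which additionally handles cell discovery, carrier frequency offset, and timing), the numpy sketch below recovers a single angle of arrival on an assumed 64-point DFT grid from only 24 random probing beams instead of an exhaustive 64-beam sweep; all sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_ant, n_meas = 64, 24          # array size, number of probing beams (<< 64)
true_dir = 37                   # index of the true AoA on the DFT grid (assumed)

# Angle dictionary: unitary DFT columns as candidate steering vectors.
D = np.fft.fft(np.eye(n_ant)) / np.sqrt(n_ant)

# Random compressive probing beams instead of an exhaustive angular sweep.
W = (rng.standard_normal((n_meas, n_ant)) +
     1j * rng.standard_normal((n_meas, n_ant))) / np.sqrt(2 * n_meas)

h = D[:, true_dir]              # noiseless single-path channel
y = W @ h                       # one measurement per probing beam

# One matching-pursuit step: correlate against the projected dictionary.
A = W @ D
scores = np.abs(A.conj().T @ y)
est_dir = int(np.argmax(scores))
print(est_dir, true_dir)
```

The sparsity of the mmW channel is what makes 24 measurements sufficient; with noise and multiple paths, further OMP iterations and support refinement are needed.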
Recommended from our members
Millimeter wave link configuration with hybrid MIMO architectures
The use of multiple antennas, widely known as MIMO technology, is a key feature in deploying mmWave communication systems that enable high-data-rate applications. With more than two decades of global experience in deploying Wi-Fi and cellular communication using sub-6 GHz frequency bands, simply repurposing these designs for mmWave bands would fail to account for additional propagation impairments and circuit design constraints at these higher frequencies. A solution to overcome the propagation challenges is the use of multiple directional communication beams, whereby proper alignment between transceivers provides sufficient link quality to enable reliable decoding of the transmitted data.
In this dissertation, efficient link configuration solutions suitable for mmWave cellular communications are developed. To gain some insight into the achievable performance of mmWave systems, two broadband channel-estimation-based link configuration solutions are proposed for MIMO-OFDM systems, in which both the transmitter and receiver are assumed to be perfectly synchronized. The proposed solution exploits the spatially common sparsity in the mmWave channel and enables efficient acquisition of the CSI while allowing the use of multiple RF chains on both the transmitter and receiver sides. In a simplified scenario, the CRLB for the channel estimation problem is derived, and the proposed channel estimation algorithms are shown to both outperform prior work in communication performance and exhibit excellent estimation performance. Furthermore, the proposed algorithms are assessed in a more challenging scenario with realistic channel parameters, and it is shown that both near-optimal spectral efficiency and low BER can be attained with lower overhead and computational complexity than prior solutions.
Next, the impact of imperfect CFO synchronization on the channel estimation problem is analyzed under a narrowband channel model. The CRLB for the estimation of the different unknown parameters involved in the problem is theoretically analyzed, and closed-form expressions are provided for the estimation of the different parameters. Under a joint estimation-theoretic and CS framework, a low-complexity multi-stage solution is proposed to estimate both the different unknown synchronization parameters and the large-dimensional mmWave MIMO channel. Different trade-offs between estimation, spectral efficiency, and overhead performance are exposed, and the proposed estimators are shown to be asymptotically optimal in the low SNR regime. The proposed solution is assessed under a channel model with several clusters and rays per cluster, and is shown to attain near-optimal spectral efficiency values in both the low and high SNR regimes. The computational complexity of the proposed solution is also analyzed, in which it is shown to achieve a marginal increase in computational complexity with respect to the solution proposed in the previous contribution.
Finally, the impact of TO, CFO, and PN impairments on the channel estimation problem is analyzed under a broadband channel model. The problem of time-frequency synchronization under PN impairments is theoretically analyzed, and the proposed solutions to the synchronization problem are exploited to estimate the frequency-selective mmWave MIMO channel. The hybrid CRLB for the estimation of the different synchronization impairments is analyzed, and closed-form expressions leveraging the information coupling between the different impairments are provided. The previously proposed joint estimation-theoretic and CS framework is extended to frequency-selective scenarios, and two low-complexity multi-stage solutions are proposed to estimate both the different synchronization impairments and the large-dimensional mmWave MIMO channel. The first solution relies on a batch-processing LMMSE-based EM algorithm to estimate the different synchronization impairments, while the second solution uses a sequential-processing EKF-RTS-based EM algorithm, thereby reducing computational complexity. Thereafter, both the hybrid CRLB for the estimation of the equivalent beamformed complex channels and the estimates for these parameters are exploited to estimate the large-dimensional frequency-selective mmWave MIMO channel. Finally, a joint PN and data detection algorithm is proposed for data transmission under the 5G NR frame structure. The proposed solutions are evaluated using a 5G NR-based channel model, different trade-offs between estimation performance, computational complexity, overhead, achievable spectral efficiency, and BER are exposed, and comparisons with prior work are also provided. The results show that mmWave link configuration using hybrid MIMO architectures can be established with low overhead without assuming synchronization, even in the low SNR regime.
Cellular-Enabled Machine Type Communications: Recent Technologies and Cognitive Radio Approaches
The scarcity of bandwidth has always been the main obstacle to providing reliable high-data-rate wireless links, which are in great demand to accommodate current and near-future wireless applications. In addition, recent reports have shown inefficient usage and under-utilization of the available bandwidth. Cognitive radio (CR) has recently emerged as a promising solution to enhance spectrum utilization, as it offers unlicensed users the ability to access the licensed spectrum opportunistically. By allowing opportunistic spectrum access, the main concept of the interweave network model, the overall spectrum utilization can be improved. This requires cognitive radio networks (CRNs) to treat spectrum sensing and monitoring as an essential enabling process for the interweave network model.
Machine-to-machine (M2M) communication, the basic enabler of the Internet-of-Things (IoT), has emerged as a key element of future networks. Machines are expected to communicate with each other, exchanging information and data without human intervention. The ultimate objective of M2M communications is to construct comprehensive connections among all machines distributed over an extensive coverage area. Due to the radical change in the number of users, the network has to carefully utilize the available resources in order to maintain reasonable quality-of-service (QoS). Generally, one of the most important resources in wireless communications is the frequency spectrum. To utilize the frequency spectrum in an IoT environment, the cognitive radio concept is a possible solution from the cost and performance perspectives. Thus, supporting a large number of machines becomes possible by employing dual-mode base stations that can apply the cognitive radio concept in addition to the legacy licensed frequency assignment.
In this thesis, a detailed review of the state of the art in the application of spectrum sensing to CR communications is provided. We present the latest advances in the implementation of legacy spectrum sensing approaches, and address the implementation challenges for cognitive radios in spectrum sensing and monitoring. We propose a novel algorithm to solve the reduced-throughput issue caused by scheduled spectrum sensing and monitoring. Further, two new architectures are considered to significantly reduce the power consumption required by the CR to enable wideband sensing; both rely on 1-bit quantization at the receiver side. The system performance is investigated analytically and via simulation, and the complexity and power consumption of both architectures are studied.
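A hedged toy (not the thesis's architectures) shows why 1-bit quantization can suffice for occupancy detection: hard-limiting the received samples preserves the location of a strong spectral line, at the cost of harmonics and quantization noise. The tone frequency, sampling rate, and noise level below are all assumed.

```python
import numpy as np

fs, n = 1024, 1024            # 1 Hz bin resolution (assumed toy values)
f0 = 50                       # occupied channel to detect, Hz
t = np.arange(n) / fs

rng = np.random.default_rng(2)
x = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

# 1-bit quantization: keep only the sign of each sample.
q = np.sign(x)

# The hard-limited signal still shows a dominant line at f0.
spec = np.abs(np.fft.rfft(q))
detected = int(np.argmax(spec[1:])) + 1   # skip the DC bin
print(detected)                           # expect the bin at f0 = 50 Hz
```

A square wave keeps about 81% of its power in the fundamental, which is why the peak survives quantization; the hardware benefit is that a 1-bit ADC replaces a power-hungry high-resolution wideband converter.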
Furthermore, we address the challenges expected of the next-generation M2M network as an integral part of the future IoT. This mainly includes the design of low-power, low-cost machines with reduced bandwidth; the trade-offs between cost, feasibility, and performance are also discussed. Because of the relaxed frequency and spatial diversity, in addition to the extended coverage mode, initial synchronization and cell search pose new challenges for cellular-enabled M2M systems. We study conventional solutions with their pros and cons, including timing acquisition, cell detection, and frequency offset estimation algorithms. We provide a technique to enhance detection performance in harsh environments for LTE-based machines. Furthermore, we present a frequency tracking algorithm for cellular M2M systems that utilizes the new repetitive feature of the broadcast channel symbols in next-generation Long Term Evolution (LTE) systems. In the direction of narrowband IoT support, we propose a cell search and initial synchronization algorithm that utilizes the new set of narrowband synchronization signals. The proposed algorithms have been simulated at very low signal-to-noise ratios and in different fading environments.
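Repetition-based frequency tracking can be illustrated with the classic delay-correlation (Moose-style) estimator; the sketch below uses assumed numbers (a 1.92 MHz sampling rate, two identical 128-sample symbols, noiseless channel) rather than the actual broadcast channel design described above.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 1.92e6                  # assumed LTE-like sampling rate, Hz
D = 128                      # spacing between the two repeated symbols
cfo = 750.0                  # true carrier frequency offset, Hz

sym = (rng.standard_normal(D) + 1j * rng.standard_normal(D)) / np.sqrt(2)
tx = np.r_[sym, sym]         # repeated broadcast symbol
n = np.arange(2 * D)
rx = tx * np.exp(2j * np.pi * cfo * n / fs)

# Delay-correlate the repetition: the phase advance over D samples
# encodes the CFO (unambiguous for |cfo| < fs / (2 * D)).
corr = np.vdot(rx[:D], rx[D:])
cfo_est = np.angle(corr) * fs / (2 * np.pi * D)
print(round(cfo_est, 1))     # → 750.0 (exact in the noiseless case)
```

Because the estimate depends only on the repetition, not on knowing the symbol content, it works at the very low SNRs targeted by coverage-extended machines, where averaging over many repetitions reduces the variance.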
Automatic Modulation Classification Using Cyclic Features via Compressed Sensing
Cognitive Radios (CRs) are designed to operate with minimal interference to the Primary User (PU), the incumbent user of a radio spectrum band. To ensure that the interference generated does not exceed a specific level, an estimate of the Signal to Interference plus Noise Ratio (SINR) for the PU's channel is required. This can be accomplished through determining the modulation scheme in use, as it is directly correlated with the SINR. To this end, an Automatic Modulation Classification (AMC) scheme is developed via cyclic feature detection that is successful even with signal bandwidths that exceed the sampling rate of the CR. In order to accomplish this, Compressed Sensing (CS) is applied, allowing for reconstruction, even with very few samples. The use of CS in spectrum sensing and interpretation is becoming necessary for a growing number of scenarios where the radio spectrum band of interest cannot be fully measured, such as low cost sensor networks, or high bandwidth radio localization services.
In order to be able to classify a wide range of modulation types, cumulants were chosen as the feature to use. They are robust to noise and provide adequate discrimination between different types of modulation, even those that are fairly similar, such as 16-QAM and 64-QAM. By fusing cumulants and CS, a novel method of classification was developed which inherited the noise resilience of cumulants and the low sample requirements of CS. Comparisons are drawn between the proposed method and existing ones, both in terms of accuracy and resource usage. The proposed method is shown to perform similarly when many samples are gathered, and shows improvement over existing methods at lower sample counts. It also uses fewer resources and is able to produce an estimate faster than current systems.
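The cumulant feature itself is easy to demonstrate. The sketch below is a simplified illustration (noiseless symbols, no CS front end, so not the proposed classifier): it computes the sample fourth-order cumulant C42 and labels a symbol stream by proximity to the well-known theoretical values for normalized constellations.

```python
import numpy as np

rng = np.random.default_rng(4)

def c42(x):
    """Sample C42 = E|x|^4 - |E x^2|^2 - 2(E|x|^2)^2 after power normalization."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    return (np.mean(np.abs(x) ** 4)
            - np.abs(np.mean(x ** 2)) ** 2
            - 2 * np.mean(np.abs(x) ** 2) ** 2)

# Theoretical C42 values for unit-power constellations.
theory = {"QPSK": -1.0, "16-QAM": -0.68, "64-QAM": -0.619}

def classify(x):
    c = c42(x)
    return min(theory, key=lambda k: abs(theory[k] - c))

# Noiseless 16-QAM and QPSK symbol streams, unit average power.
levels16 = np.array([-3, -1, 1, 3]) / np.sqrt(10)
sym16 = rng.choice(levels16, 20000) + 1j * rng.choice(levels16, 20000)
symqpsk = (rng.choice([-1, 1], 20000) + 1j * rng.choice([-1, 1], 20000)) / np.sqrt(2)

print(classify(sym16), classify(symqpsk))   # → 16-QAM QPSK
```

The small gap between the 16-QAM and 64-QAM values (-0.68 vs. -0.619) is exactly why sample count matters, and why combining cumulants with CS reconstruction at low sample counts is the interesting regime.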
Underwater localization and node mobility estimation
In this paper, localizing a moving node in the context of underwater wireless sensor networks (UWSNs) is considered. Most existing algorithms have been designed to work with static nodes, whereas in practice the node is dynamic due to relative motion between the transmitter and receiver. The main idea is to record the time of arrival (ToA) message stamp and estimate the drift in the sampling frequency accordingly. It should be emphasized that realistic channel conditions, including multipath, delay spread, and ambient noise, are considered to make the system pragmatic. The node mobility and speed are jointly predicted based on the sampling frequency offset estimate, where the drift is detected by correlating an anticipated window in the orthogonal frequency division multiplexing (OFDM) structure of the received packet. The range and distance of the mobile node are predicted from the speed estimated at the received packet and reused in the position estimation algorithm. The underwater acoustic channel is modeled with 8 paths and a maximum delay spread of 48 ms to simulate a pragmatic case. The performance is evaluated by adopting different node speeds in the simulation, in two scenarios of expansion and compression. The results show that the proposed algorithm has a stable profile in the presence of severe channel conditions, that the maximum speed the algorithm supports is 9 km/h, and that the expansion-case profile is more stable than the compression scenario. In addition, a comparison with a dynamic triangular algorithm (DTN) is presented in order to evaluate the proposed system.
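The core speed-from-sampling-drift idea can be sketched as follows, with assumed numbers and an idealized noiseless drift (not the paper's algorithm): the Doppler scaling factor a = v/c stretches the apparent packet interval, so a least-squares fit of ToA stamps against packet index recovers the radial speed.

```python
import numpy as np

c = 1500.0                   # nominal underwater sound speed, m/s (assumed)
v = 2.0                      # true radial speed, m/s (~7.2 km/h)
a = v / c                    # Doppler/sampling scaling factor

T = 1.0                      # nominal packet interval, s (assumed)
k = np.arange(20)            # packet indices
toa = k * T * (1 + a)        # recorded ToA stamps drift linearly with a

# Least-squares slope of ToA vs. packet index recovers (1 + a).
slope = np.polyfit(k, toa, 1)[0]
v_est = (slope / T - 1) * c
print(round(v_est, 3))       # → 2.0 in this noiseless toy
```

The sign of a distinguishes the expansion (node receding, intervals stretched) and compression (node approaching, intervals shortened) scenarios evaluated in the paper.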
RAPID: Retrofitting IEEE 802.11ay Access Points for Indoor Human Detection and Sensing
In this work we present RAPID, a joint communication and radar (JCR) system based on next-generation IEEE 802.11ay WiFi networks operating in the 60 GHz band. In contrast to most existing approaches for human sensing at millimeter-waves, which employ special-purpose radars to retrieve the small-scale Doppler effect (micro-Doppler) caused by human motion, RAPID achieves radar-level sensing accuracy by retrofitting IEEE 802.11ay access points. For this, it leverages the IEEE 802.11ay beam training mechanism to accurately localize and track multiple individuals, while the in-packet beam tracking fields are exploited to extract the desired micro-Doppler signatures from the time-varying phase of the channel impulse response (CIR). The proposed approach enables activity recognition and person identification with IEEE 802.11ay wireless networks without requiring modifications to the packet structure specified by the standard. RAPID is implemented on an IEEE 802.11ay-compatible FPGA platform with phased antenna arrays, which estimates the CIR from the reflections of transmitted packets. The proposed system is evaluated on a large dataset of CIR measurements, proving robustness across different environments and subjects, and outperforming state-of-the-art sub-6 GHz WiFi sensing techniques. Using two access points, RAPID reliably tracks multiple subjects, reaching activity recognition and person identification accuracies of 94% and 90%, respectively.
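A hedged sketch of the underlying micro-Doppler principle (a single-tap, single-reflector toy with assumed snapshot rate, not RAPID's processing chain): a reflector moving at radial speed v rotates the phase of its CIR tap at the round-trip Doppler rate 2v/λ, which a slow-time FFT across CIR snapshots recovers.

```python
import numpy as np

fc = 60e9                    # carrier in the IEEE 802.11ay band, Hz
lam = 3e8 / fc               # wavelength: 5 mm
fs_cir = 2048.0              # CIR snapshot rate, Hz (assumed)
v = 1.0                      # target radial speed, m/s
fd = 2 * v / lam             # round-trip (radar) Doppler: 400 Hz

n = np.arange(512)
# Time-varying phase of the CIR tap of the moving reflector.
tap = np.exp(2j * np.pi * fd * n / fs_cir)

# Doppler frequency from the slow-time FFT over CIR snapshots.
spec = np.abs(np.fft.fft(tap))
fd_est = np.argmax(spec) * fs_cir / len(n)
print(fd_est)                # → 400.0
```

In practice a short-time Fourier transform over these snapshots yields the time-frequency micro-Doppler signature used for activity recognition and person identification.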