1,232 research outputs found

    Preprint: Using RF-DNA Fingerprints To Classify OFDM Transmitters Under Rayleigh Fading Conditions

    Full text link
    The Internet of Things (IoT) is a collection of Internet-connected devices capable of interacting with the physical world and with computer systems. It is estimated that the IoT will consist of approximately fifty billion devices by the year 2020. In addition to the sheer numbers, the need for IoT security is exacerbated by the fact that many edge devices employ weak or no encryption of the communication link; it has been estimated that almost 70% of IoT devices use no form of encryption. Previous research has suggested the use of Specific Emitter Identification (SEI), a physical-layer technique, as a means of augmenting bit-level security mechanisms such as encryption. The work presented here integrates a Nelder-Mead-based approach for estimating the Rayleigh fading channel coefficients prior to the SEI approach known as RF-DNA fingerprinting. The performance of this estimator is assessed under degrading signal-to-noise ratio and compared with least squares and minimum mean squared error channel estimators. Additionally, this work presents classification results using RF-DNA fingerprints extracted from received signals that have undergone Rayleigh fading channel correction using Minimum Mean Squared Error (MMSE) equalization. This work also performs radio discrimination using RF-DNA fingerprints generated from the normalized magnitude-squared and phase responses of Gabor coefficients, using two classifiers. Discrimination of four 802.11a Wi-Fi radios achieves an average percent correct classification of 90% or better at signal-to-noise ratios of 18 dB and 21 dB or greater for Rayleigh fading channels comprised of two and five paths, respectively. Comment: 13 pages, 14 total figures/images. Currently under review by the IEEE Transactions on Information Forensics and Security.
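    The abstract names two concrete estimation steps: Nelder-Mead fitting of the Rayleigh channel coefficients and MMSE equalization. As a minimal, hedged sketch of the first step (the tap count, pilot design, and noise level below are assumptions for illustration, not the authors' implementation), the following Python snippet fits complex channel taps to noisy pilot observations by direct Nelder-Mead minimization of the squared error:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Assumed setup: known QPSK pilots pass through a 2-path Rayleigh
        # fading channel plus AWGN; we recover the complex taps h.
        L = 2
        pilots = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=64) / np.sqrt(2)
        h_true = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
        rx = np.convolve(pilots, h_true)[:len(pilots)]
        rx += 0.05 * (rng.standard_normal(rx.shape) + 1j * rng.standard_normal(rx.shape))

        def cost(h_flat):
            # Nelder-Mead works on real vectors, so pack re/im parts of the taps.
            h = h_flat[:L] + 1j * h_flat[L:]
            est = np.convolve(pilots, h)[:len(pilots)]
            return np.sum(np.abs(rx - est) ** 2)

        res = minimize(cost, x0=np.zeros(2 * L), method="Nelder-Mead")
        h_hat = res.x[:L] + 1j * res.x[L:]
        print("true taps:     ", h_true)
        print("estimated taps:", h_hat)

    For this linear-in-the-taps cost a closed-form least squares solution also exists; the simplex search is shown only because the abstract specifically compares a Nelder-Mead estimator against least squares and MMSE baselines.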

    Joint optimization of transceivers with fractionally spaced equalizers

    Get PDF
    In this paper we propose a method for the joint optimization of transceivers with fractionally spaced equalization (FSE). We use the effective single-input multiple-output (SIMO) model for the fractionally spaced receiver. Since an FSE is used at the receiver, the optimized precoding scheme must be changed correspondingly. Simulations show that the proposed method yields a remarkable improvement for jointly optimal linear transceivers as well as for transceivers with decision feedback.
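    A fractionally spaced receiver samples faster than the symbol rate, so a T/2-spaced channel decomposes into two symbol-rate subchannels, which is the SIMO view used above. The following Python sketch illustrates only that receiver-side idea with a least-squares-designed T/2-spaced equalizer; the channel taps, filter length, and training setup are assumptions, and the paper's joint precoder/equalizer optimization is not reproduced:

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed T/2-spaced channel: the receiver sees two samples per symbol.
        syms = rng.choice([1.0, -1.0], size=2000)
        h_half = np.array([0.2, 0.9, 0.5, -0.1, 0.05])
        x_up = np.zeros(2 * len(syms))
        x_up[::2] = syms                       # upsample by 2 (T/2 spacing)
        rx = np.convolve(x_up, h_half)[:len(x_up)]
        rx += 0.02 * rng.standard_normal(rx.shape)

        # Least-squares FSE design: each row holds the ntaps received samples
        # (at T/2 spacing) used to decide one symbol.
        ntaps = 16
        rows = len(syms) - ntaps // 2
        A = np.array([rx[2 * n : 2 * n + ntaps] for n in range(rows)])
        w, *_ = np.linalg.lstsq(A, syms[:rows], rcond=None)

        decisions = np.sign(A @ w)
        print("symbol error rate:", np.mean(decisions != syms[:rows]))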

    Personal area technologies for internetworked services

    Get PDF

    Single-carrier frequency-domain equalization with hybrid decision feedback equalizer for Hammerstein channels containing nonlinear transmit amplifier

    Get PDF
    We propose a nonlinear hybrid decision feedback equalizer (NHDFE) for single-carrier (SC) block transmission systems with a nonlinear transmit high power amplifier (HPA), which significantly outperforms our previous nonlinear SC frequency-domain equalization (NFDE) design. To obtain the coefficients of the channel impulse response (CIR), as well as to estimate the nonlinear mapping and the inverse nonlinear mapping of the HPA, we adopt a complex-valued (CV) B-spline neural network approach. Specifically, we use a CV B-spline neural network to model the nonlinear HPA, and we develop an efficient alternating least squares scheme for estimating the parameters of the Hammerstein channel, including both the CIR coefficients and the parameters of the CV B-spline model. We also adopt another CV B-spline neural network to model the inverse of the nonlinear HPA; the parameters of this inverting B-spline model can be estimated with the least squares algorithm using the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. The effectiveness of our NHDFE design is demonstrated in a simulation study, which shows that the NHDFE achieves a signal-to-noise ratio gain of 4 dB over the NFDE at a bit error rate of 10⁻⁴.
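    The core identification step above is alternating least squares on a Hammerstein model: a static B-spline nonlinearity followed by a linear FIR channel, with the two parameter sets updated in turn. The following simplified, real-valued Python sketch shows that alternation (the paper's complex-valued B-spline design, the HPA inverse model, and the decision-feedback equalizer itself are not reproduced; the nonlinearity, channel, and basis sizes are assumptions):

        import numpy as np
        from scipy.interpolate import BSpline

        rng = np.random.default_rng(2)

        # Assumed Hammerstein channel: static nonlinearity f (standing in for
        # the HPA) followed by a linear FIR channel h_true, plus noise.
        u = rng.uniform(-1, 1, 4000)
        f = lambda v: v - 0.3 * v**3           # assumed HPA-like compression
        h_true = np.array([1.0, 0.5, -0.2])
        y = np.convolve(f(u), h_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))

        # Cubic B-spline basis over the input range [-1, 1].
        deg, n_basis = 3, 8
        knots = np.concatenate([[-1.0] * deg,
                                np.linspace(-1, 1, n_basis - deg + 1),
                                [1.0] * deg])
        B = np.column_stack([BSpline(knots, np.eye(n_basis)[k], deg)(u)
                             for k in range(n_basis)])

        def fir_matrix(s, L):
            # Columns are delayed copies of s, so M @ h == convolve(s, h)[:len(s)].
            return np.column_stack([np.concatenate([np.zeros(m), s[:len(s) - m]])
                                    for m in range(L)])

        # Alternating least squares: update the CIR h and spline weights c in turn.
        c = rng.standard_normal(n_basis)
        for _ in range(20):
            s = B @ c                          # current nonlinearity output
            h, *_ = np.linalg.lstsq(fir_matrix(s, 3), y, rcond=None)
            h /= h[0]                          # fix the scale ambiguity (h[0] = 1)
            # With h fixed, y is linear in c via delayed copies of the basis matrix.
            Bh = sum(h[m] * np.vstack([np.zeros((m, n_basis)), B[:len(u) - m]])
                     for m in range(3))
            c, *_ = np.linalg.lstsq(Bh, y, rcond=None)

        print("estimated CIR:", h)             # compare with h_true

    The division by h[0] resolves the usual Hammerstein ambiguity: scaling the nonlinearity up and the channel down by the same factor leaves the output unchanged, so one gain must be pinned.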

    Optics for AI and AI for Optics

    Get PDF
    Artificial intelligence is deeply involved in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today’s telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each can benefit from the other. Optics facilitates on-chip neural networks based on fast optical computing and energy-efficient interconnects and communications. Conversely, AI provides efficient tools to address the challenges of today’s optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of the respective fields.

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Get PDF
    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a large number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have appeared recently, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
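    One application family that surveys like this cover is ML-based quality-of-transmission (QoT) estimation: predicting, from lightpath parameters like those listed above, whether a candidate configuration will work before deploying it. The following self-contained Python sketch trains a classifier on an entirely synthetic dataset (every feature, number, and the labeling rule below are invented for illustration and do not come from the paper):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)

        # Synthetic "telemetry": each row is a candidate lightpath described by
        # length (km), span count, modulation order (bits/symbol), and symbol
        # rate (GBd); the label marks whether its QoT is acceptable.
        n = 5000
        length = rng.uniform(50, 3000, n)
        spans = np.round(length / 80)
        mod_order = rng.choice([2, 4, 6], n)   # QPSK, 16-QAM, 64-QAM
        baud = rng.choice([32, 64], n)
        # Invented rule plus noise: longer paths and denser constellations
        # degrade the margin.
        margin = 20 - 0.004 * length - 2.0 * mod_order - 0.05 * baud
        ok = (margin + rng.standard_normal(n)) > 0

        X = np.column_stack([length, spans, mod_order, baud])
        Xtr, Xte, ytr, yte = train_test_split(X, ok, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
        print("held-out accuracy:", clf.score(Xte, yte))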