Neural Computation Approach for the Maximum-Likelihood Sequence Estimation of Communications Signal
Abstract: A novel detection approach for signals in digital communications is proposed in this paper, using the neural network with transient chaos and time-variant gain (NNTCTG) developed by the author. The maximum-likelihood signal detection problem can always be described as a complex optimization problem with so many local optima that conventional Hopfield-type neural networks cannot be applied. To remedy this drawback of Hopfield-type networks, the NNTCTG is used to search for globally optimal or near-optimal solutions of optimization problems with many local optima, since it has richer and more flexible dynamics than conventional networks, which possess only point attractors. We established a neuro-based detection model for signals in digital communication and analyzed its working procedure in detail. Two simulation experiments were conducted to illustrate the validity and effectiveness of the proposed approach.
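The abstract frames maximum-likelihood sequence detection as an optimization problem over candidate symbol sequences. A brute-force baseline (not the NNTCTG itself, which replaces this exhaustive search with chaotic neural dynamics) can be sketched as follows; the two-tap ISI channel and sequence length are illustrative assumptions:

```python
import itertools
import numpy as np

def ml_sequence_detect(received, channel, n_bits):
    """Exhaustive maximum-likelihood search over all candidate bit
    sequences for a linear ISI channel under AWGN. Illustrative only:
    cost grows as 2^n_bits, which is why the paper resorts to a
    neural-network search instead."""
    best_seq, best_cost = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n_bits):
        candidate = np.convolve(bits, channel)[:len(received)]
        cost = np.sum((received - candidate) ** 2)  # log-likelihood metric
        if cost < best_cost:
            best_seq, best_cost = np.array(bits), cost
    return best_seq

# Noiseless example: the detector should recover the transmitted bits.
channel = np.array([1.0, 0.4])        # hypothetical 2-tap ISI channel
tx = np.array([1, -1, 1, 1, -1])
rx = np.convolve(tx, channel)[:len(tx)]
print(ml_sequence_detect(rx, channel, len(tx)))  # -> [ 1 -1  1  1 -1]
```

With noise added to `rx`, the same search returns the sequence minimizing the Euclidean distance, which is the ML estimate under Gaussian noise.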
Deep Predictive Coding Neural Network for RF Anomaly Detection in Wireless Networks
Intrusion detection has become one of the most critical tasks in a wireless
network to prevent service outages that can take a long time to fix. The sheer variety
of anomalous events necessitates adopting cognitive anomaly detection methods
instead of the traditional signature-based detection techniques. This paper
proposes an anomaly detection methodology for wireless systems that is based on
monitoring and analyzing radio frequency (RF) spectrum activities. Our
detection technique leverages an existing solution for the video prediction
problem, and uses it on image sequences generated from monitoring the wireless
spectrum. The deep predictive coding network is trained with images
corresponding to the normal behavior of the system, and whenever there is an
anomaly, its detection is triggered by the deviation between the actual and
predicted behavior. For our analysis, we use the images generated from the
time-frequency spectrograms and spectral correlation functions of the received
RF signal. We test our technique on a dataset which contains anomalies such as
jamming, chirping of transmitters, spectrum hijacking, and node failure, and
evaluate its performance using standard classifier metrics: detection ratio
and false alarm rate. Simulation results demonstrate that the proposed
methodology effectively detects many unforeseen anomalous events in real time.
We discuss the applications, which encompass industrial IoT, autonomous vehicle
control and mission-critical communications services.
Comment: 7 pages, 7 figures, Communications Workshop ICC'1
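The detection principle described above — flag a frame whenever actual behavior deviates from predicted behavior — can be sketched with a toy stand-in predictor. Here a simple last-frame-persistence predictor replaces the deep predictive coding network, and the 8×8 "spectrogram" frames, jamming offset, and threshold are synthetic assumptions:

```python
import numpy as np

def detect_anomalies(frames, threshold):
    """Flag each frame whose prediction error exceeds a threshold.
    A last-frame-persistence predictor stands in for the trained deep
    predictive coding network of the paper."""
    flags = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        err = np.mean((curr - prev) ** 2)  # deviation: actual vs. predicted
        flags.append(err > threshold)
    return flags

rng = np.random.default_rng(0)
normal = [rng.normal(0, 0.1, (8, 8)) for _ in range(5)]
jammed = normal[2] + 5.0                   # synthetic jamming burst
frames = normal[:3] + [jammed] + normal[3:]
print(detect_anomalies(frames, threshold=1.0))
# -> [False, False, True, True, False]
```

The anomalous frame triggers two flags, once on entering and once on leaving the jammed state; a real deployment would post-process consecutive flags into one event.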
Bit error performance of diffuse indoor optical wireless channel pulse position modulation system employing artificial neural networks for channel equalisation
The bit-error rate (BER) performance of a pulse position modulation (PPM) scheme for non-line-of-sight indoor optical links employing channel equalisation based on the artificial neural network (ANN) is reported. Channel equalisation is achieved by training a multilayer perceptron ANN. A comparative study of the unequalised `soft' decision decoding and the `hard' decision decoding, along with the neural-equalised `soft' decision decoding, is presented for different bit resolutions for optical channels with different delay spreads. We show that the unequalised `hard' decision decoding performs the worst for all values of normalised delay spread, becoming impractical beyond a normalised delay spread of 0.6. However, `soft' decision decoding with/without equalisation displays relatively improved performance for all values of the delay spread. The study shows that for a highly diffuse channel, the signal-to-noise ratio requirement to achieve a BER of 10^-5 for the ANN-based equaliser is ~10 dB lower compared with the unequalised `soft' decoding for 16-PPM at a data rate of 155 Mbps. Our results indicate that for the entire range of delay spreads, neural network equalisation is an effective tool for mitigating the inter-symbol interference.
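A minimal sketch of neural-network channel equalisation in the spirit of this abstract, assuming a hypothetical two-tap dispersive channel and binary signalling (rather than the paper's 16-PPM over a measured diffuse channel); the network size, learning rate, and training schedule are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-tap dispersive channel introducing inter-symbol interference.
bits = rng.integers(0, 2, 2000)
chan = np.convolve(bits.astype(float), [1.0, 0.6])[:len(bits)]
rx = chan + rng.normal(0, 0.1, len(bits))

# Feature: a two-sample window of received samples; label: the transmitted bit.
X = np.stack([rx[1:], rx[:-1]], axis=1)
X -= X.mean(axis=0)          # centre features to help gradient descent
y = bits[1:].astype(float)

# One-hidden-layer MLP trained by plain batch gradient descent: a minimal
# stand-in for the multilayer-perceptron equaliser of the abstract.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 1.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output: P(bit = 1)
    g = (p - y) / len(y)                  # cross-entropy output gradient
    gh = np.outer(g, W2) * (1 - h ** 2)   # backprop through tanh layer
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

pred = (1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5).astype(int)
print("training BER:", np.mean(pred != y))
```

A naive threshold on the raw received sample would misclassify roughly a quarter of the symbols for this channel; the trained equaliser separates the ISI-overlapped signal levels and cuts the error rate by an order of magnitude.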
Symmetric complex-valued RBF receiver for multiple-antenna aided wireless systems
A nonlinear beamforming-assisted detector is proposed for multiple-antenna-aided wireless systems employing complex-valued quadrature phase-shift-keying modulation. By exploiting the inherent symmetry of the optimal Bayesian detection solution, a novel complex-valued symmetric radial basis function (SRBF)-network-based detector is developed, which is capable of approaching the optimal Bayesian performance using channel-impaired training data. In the uplink case, adaptive nonlinear beamforming can be efficiently implemented by estimating the system's channel matrix based on the least squares channel estimate. Adaptive implementation of nonlinear beamforming in the downlink case is, by contrast, much more challenging, and we adopt a cluster-variation-enhanced clustering algorithm to directly identify the SRBF center vectors required for realizing the optimal Bayesian detector. A simulation example is included to demonstrate the achievable performance improvement of the proposed adaptive nonlinear beamforming solution over the theoretical linear minimum bit error rate beamforming benchmark.
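The Bayesian detection idea underlying the SRBF network — RBF centres placed at the noiseless channel states, with a Gaussian kernel weighting each state's vote — can be sketched for a simplified real-valued BPSK model (the paper treats complex-valued QPSK with multiple antennas); the channel gains and noise variance below are assumptions:

```python
import itertools
import numpy as np

def bayesian_detect(r, centers, labels, noise_var):
    """RBF-style Bayesian detector: each channel state (centre) is
    weighted by a Gaussian kernel of the received sample, and the
    weighted labels vote on the desired bit."""
    w = np.exp(-np.abs(r - centers) ** 2 / (2 * noise_var))
    return 1 if np.sum(w * labels) > 0 else -1

# Hypothetical scalar model: r = h1*b1 + h2*b2 + noise; detect the
# desired bit b1 in the presence of an interfering bit b2.
h1, h2, nv = 1.0, 0.5, 0.05
states = list(itertools.product([-1, 1], repeat=2))
centers = np.array([h1 * b1 + h2 * b2 for b1, b2 in states])
labels = np.array([b1 for b1, _ in states])

rng = np.random.default_rng(2)
errs = 0
for _ in range(500):
    b1, b2 = rng.choice([-1, 1], 2)
    r = h1 * b1 + h2 * b2 + rng.normal(0, np.sqrt(nv))
    errs += bayesian_detect(r, centers, labels, nv) != b1
print("bit errors out of 500:", errs)
```

In practice the centres are not known a priori; the paper's contribution is estimating them adaptively (via the channel estimate in the uplink, via clustering in the downlink) while exploiting the symmetry visible here in the ±-paired centres.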
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy; to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
Machine learning for fiber nonlinearity mitigation in long-haul coherent optical transmission systems
Fiber nonlinearities arising from the Kerr effect are considered major constraints on enhancing the transmission capacity of current optical transmission systems. Digital nonlinearity compensation techniques such as digital backpropagation can perform well but require substantial computing resources. Machine learning can provide a low-complexity capability, especially for high-dimensional classification problems. Recently, several supervised and unsupervised machine learning techniques have been investigated in the field of fiber nonlinearity mitigation. This paper offers a brief review of the principles, performance, and complexity of these machine learning approaches as applied to nonlinearity mitigation.
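One unsupervised technique commonly surveyed for nonlinearity mitigation is clustering of the received constellation, so that decision regions follow the nonlinearity-distorted symbol positions instead of the ideal transmit points. A k-means sketch under a toy QPSK model with an assumed constant nonlinear phase rotation (real impairments are signal-dependent, which is what motivates learned decision regions):

```python
import numpy as np

def kmeans_constellation(rx, init_centers, iters=20):
    """Unsupervised k-means on received complex symbols: cluster
    centres migrate to the nonlinearity-shifted constellation points,
    and decisions use the learned centres."""
    centers = init_centers.astype(complex)
    for _ in range(iters):
        assign = np.abs(rx[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(len(centers)):
            if np.any(assign == k):
                centers[k] = rx[assign == k].mean()
    assign = np.abs(rx[:, None] - centers[None, :]).argmin(axis=1)
    return centers, assign

# Toy QPSK impaired by a constant nonlinear phase rotation plus noise.
rng = np.random.default_rng(3)
ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
tx_idx = rng.integers(0, 4, 1000)
rx = ideal[tx_idx] * np.exp(1j * 0.3) + (rng.normal(0, 0.08, 1000)
                                         + 1j * rng.normal(0, 0.08, 1000))
centers, assign = kmeans_constellation(rx, ideal)
print("symbol error rate:", np.mean(assign != tx_idx))
```

Initialising the centres at the ideal constellation keeps the cluster-to-symbol mapping fixed as long as the rotation stays below half the angular symbol spacing; larger distortions require a separate labelling step.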