55 research outputs found

    A Study on the Suitability of Genetic Algorithm for Adaptive Channel Equalization

    Adaptive algorithms such as the Least-Mean-Square (LMS) based channel equalizer aim to minimize the Intersymbol Interference (ISI) present in the transmission channel. However, adaptive algorithms suffer from long training times and undesirable local minima during the training mode. These disadvantages of adaptive algorithms for channel equalization have been discussed in the literature. In this paper, we propose a new adaptive channel equalizer using the Genetic Algorithm (GA), which is essentially a derivative-free optimization tool. The algorithm is used to update the weights of the equalizer. The performance of the proposed channel equalizer is evaluated in terms of mean square error (MSE) and convergence rate and is compared with its LMS and RLS counterparts. It is observed that the new GA-based adaptive equalizer offers improved performance as far as the accuracy of reception is concerned. DOI: http://dx.doi.org/10.11591/ijece.v2i3.31
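    The abstract above presents the GA as a derivative-free alternative to gradient-based LMS/RLS training of the equalizer taps. Below is a minimal NumPy sketch of that idea: a simple real-valued GA (truncation selection, uniform crossover, Gaussian mutation) searches the tap-weight space for minimum training MSE. The paper does not specify its GA operators or parameters, so the population size, mutation scale and equalizer length here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def mse_cost(w, received, desired, num_taps):
    """Mean square error of a linear transversal equalizer with tap weights w."""
    errs = []
    for n in range(num_taps, len(received)):
        x = received[n - num_taps:n][::-1]        # most recent samples first
        errs.append((desired[n] - np.dot(w, x)) ** 2)
    return float(np.mean(errs))

def ga_equalizer(received, desired, num_taps=11, pop=30, gens=100,
                 mut_std=0.05, seed=0):
    """Derivative-free search for equalizer tap weights with a simple GA."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0.0, 0.5, size=(pop, num_taps))            # initial population
    for _ in range(gens):
        fitness = np.array([mse_cost(w, received, desired, num_taps) for w in P])
        parents = P[np.argsort(fitness)[:pop // 2]]           # truncation selection
        idx = rng.integers(0, len(parents), size=(pop, 2))    # random parent pairs
        mask = rng.random((pop, num_taps)) < 0.5              # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        P = children + rng.normal(0.0, mut_std, children.shape)  # Gaussian mutation
    fitness = np.array([mse_cost(w, received, desired, num_taps) for w in P])
    return P[np.argmin(fitness)]
```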

    Bacterial Foraging Based Channel Equalizers

    A channel equalizer is one of the most important subsystems in any digital communication receiver. It is also the subsystem that consumes the maximum computation time in the receiver. Traditionally, maximum-likelihood sequence estimation (MLSE) was the most popular form of equalizer. Owing to the non-stationary characteristics of the communication channel, MLSE receivers perform poorly. Under these circumstances, ‘Maximum A-posteriori Probability (MAP)’ receivers, also called Bayesian receivers, perform better. Natural selection tends to eliminate animals with poor “foraging strategies” and favor the propagation of the genes of those animals that have successful foraging strategies, since they are more likely to enjoy reproductive success. After many generations, poor foraging strategies are either eliminated or shaped into good ones (redesigned). Logically, such evolutionary principles have led scientists in the field of “foraging theory” to hypothesize that it is appropriate to model the activity of foraging as an optimization process. This thesis presents an investigation into the design of a bacterial foraging based channel equalizer for digital communication. Extensive simulation studies show that the performance of the proposed receiver is close to that of the optimal receiver for a variety of channel conditions. The proposed receiver also provides near-optimal performance when the channel suffers from nonlinearities.
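    The thesis above casts bacterial foraging as an optimization process and applies it to equalizer design. The sketch below shows only the core chemotaxis step of bacterial foraging optimization (tumble to a random direction, then swim along it while the cost improves) on a generic cost function such as an equalizer's training MSE or BER; the reproduction and elimination-dispersal phases, swarming terms and all numeric settings are omitted or assumed for brevity and are not taken from the thesis.

```python
import numpy as np

def bfo_minimize(cost, dim, n_bacteria=20, chem_steps=50, swim_len=4,
                 step_size=0.05, seed=1):
    """Minimal chemotaxis loop of bacterial foraging optimization (BFO)."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(0.0, 0.5, size=(n_bacteria, dim))   # bacteria positions
    J = np.array([cost(p) for p in pos])                 # current cost values
    for _ in range(chem_steps):
        for i in range(n_bacteria):
            d = rng.normal(size=dim)                     # tumble: random direction
            d /= np.linalg.norm(d)
            for _ in range(swim_len):                    # swim while improving
                cand = pos[i] + step_size * d
                Jc = cost(cand)
                if Jc < J[i]:
                    pos[i], J[i] = cand, Jc
                else:
                    break
    best = np.argmin(J)
    return pos[best], J[best]
```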

    Artificial Neural Network Based Channel Equalization

    The field of digital data communications has experienced explosive growth in the last three decades. With the growth of internet technologies, high-speed and efficient data transmission over communication channels has gained significant importance. The rate of data transmission over a communication system is limited due to the effects of linear and nonlinear distortion. Linear distortions occur in the form of inter-symbol interference (ISI), co-channel interference (CCI) and adjacent channel interference (ACI) in the presence of additive white Gaussian noise. Nonlinear distortions are caused by subsystems such as amplifiers, modulators and demodulators, along with the nature of the medium. Sometimes burst noise occurs in the communication system. Different equalization techniques are used to mitigate these effects. Adaptive channel equalizers are used in digital communication systems. The equalizer, located at the receiver, removes the effects of ISI, CCI and burst noise interference and attempts to recover the transmitted symbols. It has been seen that linear equalizers show poor performance, whereas nonlinear equalizers provide superior performance. Artificial neural network based multilayer perceptron (MLP) equalizers have been used for equalization in the last two decades. Such an equalizer is a feed-forward network consisting of one or more hidden layers between its input and output layers and is trained by the popular error-based back propagation (BP) algorithm. However, this algorithm suffers from a slow convergence rate, depending on the size of the network. It has been seen that an optimal equalizer based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network. In an RBF equalizer, centres are fixed using K-means clustering and weights are trained using the LMS algorithm. An RBF equalizer can mitigate ISI effectively, providing a minimum-BER plot. But when the input order is increased, the number of centres of the network increases, making the network more complicated. An RBF network designed to mitigate the effects of CCI is very complex, with a large number of centres. To overcome these computational complexity issues, a single-neuron-based Chebyshev neural network (ChNN) and the functional link ANN (FLANN) have been proposed. These neural networks are single-layer networks in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, and they have the capability to provide arbitrarily complex decision regions. More recently, a rank-based statistics approach known as the Wilcoxon learning method has been proposed for signal processing applications. The Wilcoxon learning algorithm has been applied to neural networks such as the Wilcoxon Multilayer Perceptron Neural Network (WMLPNN) and the Wilcoxon Generalized Radial Basis Function Network (WGRBF). The Wilcoxon approach provides a promising methodology for many machine learning problems. This motivated us to introduce these networks into the field of channel equalization. In this thesis we have used the WMLPNN and WGRBF networks to mitigate ISI, CCI and burst noise interference. It is observed that the equalizers trained with the Wilcoxon learning algorithm offer improved performance, in terms of convergence characteristics and bit error rate, in comparison to gradient-based training for the MLP and RBF. Extensive simulation studies have been carried out to validate the proposed technique. The performance of the Wilcoxon networks is better than that of linear equalizers trained with the LMS and RLS algorithms and of the RBF equalizer in the case of burst noise and CCI mitigation.
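    As the abstract notes, a conventional RBF equalizer fixes its centres with K-means clustering and trains the output weights with the LMS algorithm. The sketch below implements that gradient-trained baseline (not the Wilcoxon-trained networks proposed in the thesis), using scikit-learn's KMeans for clustering; the equalizer order, number of centres, kernel width and step size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf_equalizer(received, desired, order=4, n_centres=16,
                        width=0.5, mu=0.05):
    """RBF equalizer baseline: K-means centres + LMS-trained output weights."""
    # Build equalizer input vectors from a sliding window over the received samples.
    X = np.array([received[n - order:n][::-1]
                  for n in range(order, len(received))])
    d = np.array(desired[order:len(received)])      # matching training symbols

    centres = KMeans(n_clusters=n_centres, n_init=10).fit(X).cluster_centers_
    w = np.zeros(n_centres)
    for x, dn in zip(X, d):
        # Gaussian activations of all centres for this input vector.
        phi = np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * width ** 2))
        e = dn - np.dot(w, phi)                      # error at the RBF output
        w += mu * e * phi                            # LMS update of linear layer
    return centres, w
```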

    Adaptive Control

    Adaptive control has been a remarkable field for industrial and academic research since the 1950s. As more and more adaptive algorithms are applied in various control applications, it is becoming very important for practical implementation. As can be confirmed from the increasing number of conferences and journals on adaptive control topics, adaptive control is a significant guide for technology development. The authors of the chapters in this book are professionals in their areas, and their recent research results are presented here, providing new ideas for improved performance in various control application problems.

    Development Of Novel Neuro-Fuzzy Techniques For Adaptive Systems

    Novel approaches for designing adaptive schemes based on a neuro-fuzzy platform have been developed. Two kinds of adaptive schemes, namely adaptive equalization and system identification, are implemented using the proposed techniques. The radial basis function (RBF) equalizer is chosen as a case study for adaptive equalization of digital communication channels. An efficient method for reducing the centers of an RBF equalizer based on eigenvalue analysis is presented. The efficiency of the method is further verified for RBF equalizers with decision feedback for tackling channels with overlapping channel states. A comparative study between the proposed center reduction technique and other center reduction techniques for the RBF equalizer is discussed. In another breakthrough, a parallel interpretation of the ANFIS (adaptive network based fuzzy inference system) architecture is proposed. This approach helps to investigate the role of the fuzzy inference part and the s..
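    The abstract mentions reducing RBF centers through eigenvalue analysis but does not spell out the procedure here. The sketch below is one plausible reading, offered purely as an assumption rather than the thesis's method: form the hidden-layer activation matrix over the training data, eigen-decompose its covariance, and keep the centers with the largest eigenvalue-weighted loadings on the leading eigenvectors.

```python
import numpy as np

def rank_centres_by_eigenvalue(X, centres, width=0.5, keep=8):
    """Prune RBF centres by their contribution to the dominant
    eigen-directions of the hidden-layer activation covariance."""
    # Activation (design) matrix: one Gaussian response per centre and sample.
    diffs = X[:, None, :] - centres[None, :, :]
    Phi = np.exp(-np.sum(diffs ** 2, axis=2) / (2 * width ** 2))
    # Eigen-analysis of the activation covariance across centres.
    cov = np.cov(Phi, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    # Score each centre by its squared loading on the `keep` leading
    # eigenvectors, weighted by the corresponding eigenvalues.
    lead = eigvecs[:, -keep:]
    scores = np.sum((lead ** 2) * eigvals[-keep:], axis=1)
    order = np.argsort(scores)[::-1]
    return centres[order[:keep]], scores
```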

    Adaptive equalisation for fading digital communication channels

    This thesis considers the design of new adaptive equalisers for fading digital communication channels. The role of equalisation is discussed in the context of the functions of a digital radio communication system, and both conventional and more recent novel equaliser designs are described. The application of recurrent neural networks to the problem of equalisation is developed from a theoretical study of a single-node structure to the design of multi-node structures. These neural networks are shown to cancel intersymbol interference in a manner mimicking conventional techniques, and simulations demonstrate their sensitivity to symbol estimation errors. In addition, the error mechanisms of conventional maximum likelihood equalisers operating on rapidly time-varying channels are investigated, highlighting the problems of channel estimation using delayed and often incorrect symbol estimates. The relative sensitivity of Bayesian equalisation techniques to errors in the channel estimate is studied, demonstrating that this structure's equalisation capability is also susceptible to such errors. Applications of multiple channel estimator methods are developed, leading to reduced-complexity structures which trade performance for a smaller computational load. These novel structures are shown to provide an improvement over the conventional techniques, especially for rapidly time-varying channels, by reducing the time delay in the channel estimation process. Finally, the use of confidence measures of the equaliser's symbol estimates to improve channel estimation is studied; this isolates the critical areas in the development of the technique: the production of reliable confidence measures by the equalisers and the statistics of symbol estimation error bursts.
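    The Bayesian equalisers whose sensitivity is studied above make symbol decisions by comparing summed Gaussian likelihoods of the channel states reachable from each candidate symbol. The sketch below shows that decision rule for BPSK over a linear FIR channel; the brute-force state enumeration, decision delay and BPSK assumption are illustrative only. Errors in the estimated taps `h` displace every channel state, which is the sensitivity discussed in the abstract.

```python
import numpy as np
from itertools import product

def bayesian_decision(r, h, noise_var, delay=1):
    """Symbol-by-symbol Bayesian (MAP) decision for a BPSK signal.

    r         : the m most recent received samples, newest first
    h         : estimated channel impulse response, h[0] the newest tap
    noise_var : estimated channel noise variance
    delay     : decision delay (which past symbol is being detected)
    """
    m, L = len(r), len(h)
    pos, neg = 0.0, 0.0
    # Enumerate every BPSK symbol sequence that influences these m observations
    # (s[0] is the newest symbol).
    for s in product([-1.0, 1.0], repeat=m + L - 1):
        s = np.array(s)
        # Noise-free channel state generated by this symbol sequence.
        state = np.array([np.dot(h, s[i:i + L]) for i in range(m)])
        like = np.exp(-np.sum((r - state) ** 2) / (2 * noise_var))
        if s[delay] > 0:
            pos += like
        else:
            neg += like
    return 1.0 if pos > neg else -1.0
```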

    Machine Learning Techniques To Mitigate Nonlinear Impairments In Optical Fiber System

    The upcoming deployment of 5/6G networks, online services like 4k/8k HDTV (streamers and online games), the development of the Internet of Things concept, connecting billions of active devices, as well as high-speed optical access networks, impose progressively higher requirements on the underlying optical network infrastructure. With current network infrastructures approaching almost unsustainable levels of bandwidth utilization/data traffic rates, and the electrical power consumption of communications systems becoming a serious concern in view of achieving the global carbon footprint targets, network operators and system suppliers are now looking for ways to respond to these demands while also maximizing the returns on their investments. The search for a solution to this predicted "capacity crunch" led to a renewed interest in alternative approaches to system design, including the usage of high-order modulation formats and high symbol rates enabled by coherent detection, the development of wideband transmission tools, new fiber types (such as multi-mode and multi-core), and, finally, the implementation of advanced digital signal processing (DSP) elements to mitigate optical channel nonlinearities and improve the received SNR. All the aforementioned options are intended to boost the available optical systems' capacity to fulfill the new traffic demands. This thesis focuses on the last of these possible solutions to the "capacity crunch", answering the question: "How can machine learning improve existing optical communications by minimizing quality penalties introduced by transceiver components and fiber media nonlinearity?". Ultimately, by identifying a proper machine learning solution (or a bevy of solutions) to act as a nonlinear channel equalizer for optical transmissions, we can improve the system's throughput and even reduce the signal processing complexity, which means we can transmit more using the already-built optical infrastructure. This problem was broken into four parts in this thesis: i) the development of new machine learning architectures to achieve appealing levels of performance; ii) the correct assessment of computational complexity and hardware realization; iii) the application of AI techniques to achieve fast reconfigurable solutions; iv) the creation of a theoretical foundation with studies demonstrating the caveats and pitfalls of machine learning methods used for optical channel equalization. Common measures such as bit error rate, quality factor, and mutual information are considered in scrutinizing the systems studied in this thesis. Based on simulation and experimental results, we conclude that neural network-based equalization can, in fact, improve the channel quality of transmission and at the same time have computational complexity close to other classic DSP algorithms.
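    The thesis asks how a machine learning model can act as a nonlinear channel equalizer. As a concrete illustration only (the architectures studied in the thesis are not reproduced here), the sketch below trains a small feed-forward network to map a sliding window of received symbols onto the transmitted centre symbol; the window length and hidden-layer sizes are assumptions, and scikit-learn's MLPRegressor stands in for whatever framework the thesis uses.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_nn_equalizer(rx, tx, taps=21, hidden=(64, 32)):
    """Train a feed-forward NN as a post-detection nonlinear equalizer.

    rx, tx : complex received / transmitted symbol sequences (same length)
    taps   : sliding-window size of neighbouring symbols fed to the network
    """
    half = taps // 2
    X, y = [], []
    for n in range(half, len(rx) - half):
        win = rx[n - half:n + half + 1]
        # Real/imaginary parts of the symbol window as network features.
        X.append(np.concatenate([win.real, win.imag]))
        y.append([tx[n].real, tx[n].imag])
    model = MLPRegressor(hidden_layer_sizes=hidden, activation="tanh",
                         max_iter=500).fit(np.array(X), np.array(y))
    return model, half

def equalize(model, half, rx):
    """Apply the trained network to recover the centre symbol of each window."""
    X = [np.concatenate([rx[n - half:n + half + 1].real,
                         rx[n - half:n + half + 1].imag])
         for n in range(half, len(rx) - half)]
    out = model.predict(np.array(X))
    return out[:, 0] + 1j * out[:, 1]
```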

    ADAPTIVE MODELS-BASED CARDIAC SIGNALS ANALYSIS AND FEATURE EXTRACTION

    Signal modeling and feature extraction are among the most crucial and important steps in stochastic signal processing. In this thesis, a general framework that employs adaptive model-based recursive Bayesian state estimation for signal processing and feature extraction is described. As a case study, the proposed framework is studied for the problem of cardiac signal analysis. The main objective is to improve the signal processing of cardiac signals by developing new techniques based on adaptive modelling of electrocardiogram (ECG) waveforms. Specifically, several novel and improved approaches to model-based ECG decomposition, waveform characterization and feature extraction are proposed and studied in detail. For ECG decomposition and waveform characterization, the main idea is to extend and improve the signal dynamical models (i.e. reducing the non-linearity of the state model with respect to previous solutions) while combining them with a Kalman smoother to increase the accuracy of the model, in order to split the ECG signal into its waveform components, since the Kalman filter/smoother is proved to be an optimal estimator in the minimum mean square error (MMSE) sense for linear dynamical systems. The framework is used for many real applications, such as ECG component extraction, ST segment analysis (estimation of a possible marker of ventricular repolarization known as the T/QRS ratio) and T-wave alternans (TWA) detection, and its extension to many other applications is straightforward. Based on the proposed framework, a novel model for the characterization of atrial fibrillation (AF) is presented which is more effective than other methods proposed with the same aims. In this model, ventricular activity (VA) is represented by a sum of Gaussian kernels, while a sinusoidal model is employed for atrial activity (AA). This new model is able to track AA, VA and the fibrillatory frequency simultaneously, unlike other methods which try to analyze the atrial fibrillatory waves (f-waves) only after VA cancellation. Furthermore, we study a new ECG processing method for assessing the spatial heterogeneity of ventricular repolarization (SHVR) using the V-index, and a novel algorithm to estimate the index is presented, leading to more accurate estimates. The proposed algorithm was used to study the diagnostic and prognostic value of the V-index in patients with symptoms suggestive of Acute Myocardial Infarction (AMI).
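    The abstract states that, in the atrial fibrillation model, ventricular activity is represented by a sum of Gaussian kernels while atrial activity is sinusoidal. The sketch below evaluates those two observation models; the parameter names and shapes are illustrative, and the recursive Bayesian (Kalman filter/smoother) machinery that tracks these parameters as hidden states is only summarized in a comment.

```python
import numpy as np

def va_model(phase, alphas, mus, sigmas):
    """Ventricular activity as a sum of Gaussian kernels over cardiac phase.

    phase : cardiac phase samples in (-pi, pi]
    Each kernel i contributes alpha_i * exp(-(phase - mu_i)^2 / (2 sigma_i^2)).
    """
    phase = np.asarray(phase)[:, None]
    d = phase - np.asarray(mus)[None, :]
    return np.sum(np.asarray(alphas) *
                  np.exp(-d ** 2 / (2 * np.asarray(sigmas) ** 2)), axis=1)

def aa_model(t, amps, freqs, phases):
    """Atrial (fibrillatory) activity as a small sum of sinusoids."""
    t = np.asarray(t)[:, None]
    return np.sum(np.asarray(amps) *
                  np.sin(2 * np.pi * np.asarray(freqs) * t + np.asarray(phases)),
                  axis=1)

# An observed AF ECG sample is then modelled as va_model(...) + aa_model(...)
# plus measurement noise, and a Kalman filter/smoother (extended, in the
# nonlinear case) tracks the kernel and sinusoid parameters as hidden states.
```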