
    Wireless Channel Equalization in Digital Communication Systems

    Our modern society has transformed into an information-demanding system, seeking voice, video, and data in quantities that could not be imagined even a decade ago. The mobility of communicators has added more challenges. One of the new challenges is to conceive a highly reliable and fast communication system unaffected by the problems caused by multipath fading wireless channels. Our quest is to remove one of the obstacles in the way of achieving ultimately fast and reliable wireless digital communication, namely Inter-Symbol Interference (ISI), whose intensity makes the channel noise inconsequential. The theoretical background for wireless channel modeling and adaptive signal processing is covered in the first two chapters of the dissertation. The approach of this thesis is not based on one methodology; rather, several algorithms and configurations are proposed and examined to fight the ISI problem. There are two main categories of channel equalization techniques: supervised (trained) and unsupervised (blind) modes. We have studied the application of a new and specially modified neural network requiring a very short training period for proper channel equalization in supervised mode. The promising performance of this network is presented in the graphs of Chapter 4. For blind mode, two distinct methodologies are presented and studied. Chapter 3 covers the concept of multiple cooperative algorithms for the cases of two and three cooperating algorithms. The select-absolutely-larger-equalized-signal and majority-vote methods have been used in the 2- and 3-algorithm systems, respectively. Many of the demonstrated results are encouraging for further research. Chapter 5 involves the application of the general concept of simulated annealing to blind-mode equalization. A limited strategy of constant annealing noise is used to test the simple algorithms employed in the multiple-algorithm systems. Convergence to local stationary points of the cost function in parameter space is clearly demonstrated, which justifies the use of additional noise. The capability of added random noise to release the algorithm from local traps is established in several cases.
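To make the supervised-mode and annealing-noise ideas concrete, the sketch below trains a standard LMS transversal equalizer on a toy two-tap ISI channel, with an optional constant-variance Gaussian noise added to each weight update in the spirit of the constant-annealing strategy. All parameter values (tap count, step size, channel) are illustrative; the thesis's modified neural network and cooperative-algorithm configurations are not reproduced here.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=11, mu=0.01, anneal_sigma=0.0, seed=0):
    """Train a linear transversal equalizer with LMS.

    anneal_sigma > 0 adds constant-variance Gaussian noise to every
    weight update -- a crude stand-in for the constant annealing noise
    described above (hypothetical parameter, not the thesis's exact scheme).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(n_taps)
    errs = []
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # tap-delay-line input
        e = desired[n] - w @ x                     # equalization error
        w += mu * e * x                            # LMS gradient step
        if anneal_sigma > 0:
            w += rng.normal(0.0, anneal_sigma, n_taps)  # annealing noise
        errs.append(e * e)
    return w, np.array(errs)

# Toy ISI channel: BPSK symbols convolved with [1.0, 0.4]
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=5000)
r = np.convolve(s, [1.0, 0.4])[:len(s)]
w, errs = lms_equalizer(r, s)   # squared error shrinks as taps converge
```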

    On the Development of Distributed Estimation Techniques for Wireless Sensor Networks

    Wireless sensor networks (WSNs) have lately witnessed tremendous demand, as evidenced by the increasing number of day-to-day applications. The sensor nodes aim at estimating the parameters of their corresponding adaptive filters to achieve the desired response for the event of interest. Some of the burning issues related to linear parameter estimation in WSNs have been addressed in this thesis, mainly focusing on the reduction of communication overhead and latency and on robustness to noise. The first issue deals with the high communication overhead and latency of distributed parameter estimation techniques such as the diffusion least mean squares (DLMS) and incremental least mean squares (ILMS) algorithms. Subsequently, the poor performance demonstrated by these distributed techniques in the presence of impulsive noise has been dealt with separately. The issue of source localization, i.e. estimation of the source bearing in WSNs, where the existing decentralized algorithms fail to perform satisfactorily, has been resolved in this thesis; the same issue has further been addressed independently of nodal connectivity. This thesis proposes two algorithms, namely the block diffusion least mean squares (BDLMS) and block incremental least mean squares (BILMS) algorithms, for reducing the communication overhead in WSNs. The theoretical and simulation studies demonstrate that the BDLMS and BILMS algorithms provide the same performance as DLMS and ILMS, but with a significant reduction in communication overhead per node. Latency is also reduced by a factor as high as the block size used in the proposed algorithms. With an aim to develop robustness towards impulsive noise, this thesis proposes three robust distributed algorithms, i.e. the saturation nonlinearity incremental LMS (SNILMS), saturation nonlinearity diffusion LMS (SNDLMS) and Wilcoxon norm diffusion LMS (WNDLMS) algorithms. The steady-state analysis of the SNILMS algorithm is carried out based on the spatial-temporal energy conservation principle. The theoretical and simulation results show that these algorithms are robust to impulsive noise. The SNDLMS algorithm is found to provide better performance than the SNILMS and WNDLMS algorithms. In order to develop a distributed source localization technique, a novel diffusion maximum likelihood (ML) bearing estimation algorithm is proposed in this thesis, which needs less communication overhead than the centralized algorithms. After forming a random array with its neighbours, each sensor node estimates the source bearing by optimizing the ML function locally using a diffusion particle swarm optimization algorithm. The simulation results show that the proposed algorithm performs better than the centralized multiple signal classification (MUSIC) algorithm in terms of probability of resolution and root mean square error. Further, in order to make the proposed algorithm independent of nodal connectivity, a distributed in-cluster bearing estimation technique is proposed. Each cluster of sensors estimates the source bearing by optimizing the ML function locally in cooperation with other clusters. The simulation results demonstrate improved performance of the proposed method in comparison to the centralized and decentralized MUSIC algorithms and the distributed in-network algorithm.
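The diffusion LMS scheme that the block variants above build on can be sketched in a few lines. The adapt-then-combine form below uses uniform combination weights and a fixed 4-node ring topology, both illustrative choices rather than the thesis's exact rules, and estimates a common parameter vector from noisy local measurements.

```python
import numpy as np

def atc_diffusion_lms(X, d, neighbors, mu=0.05, n_iter=300):
    """Adapt-then-combine diffusion LMS over a small sensor network.

    X[k]: regressor stream at node k, shape (n_iter, M); d[k]: desired
    responses at node k; neighbors[k]: indices (including k) whose
    intermediate estimates node k averages. Uniform combination weights
    are an illustrative assumption.
    """
    N, M = len(X), X[0].shape[1]
    w = np.zeros((N, M))
    for i in range(n_iter):
        # adaptation step: local LMS update at every node
        psi = np.array([w[k] + mu * (d[k][i] - w[k] @ X[k][i]) * X[k][i]
                        for k in range(N)])
        # combination step: average intermediate estimates over neighbours
        w = np.array([psi[neighbors[k]].mean(axis=0) for k in range(N)])
    return w

# 4-node ring network estimating a common parameter vector w_o
rng = np.random.default_rng(2)
M, n_iter, w_o = 3, 300, np.array([0.5, -1.0, 2.0])
X = [rng.normal(size=(n_iter, M)) for _ in range(4)]
d = [X[k] @ w_o + 0.01 * rng.normal(size=n_iter) for k in range(4)]
ring = {0: [0, 1, 3], 1: [0, 1, 2], 2: [1, 2, 3], 3: [0, 2, 3]}
w = atc_diffusion_lms(X, d, ring)   # every row of w approaches w_o
```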

    Quasi-Newton Least Mean Fourth Adaptive Algorithm


    IIR modeling of interpositional transfer functions with a genetic algorithm aided by an adaptive filter for the purpose of altering free-field sound localization

    The psychoacoustic process of sound localization is a complex system of analysis. Scientists have found evidence that both binaural and monaural cues are responsible for determining the angles of elevation and azimuth which represent a sound source. Engineers have successfully used these cues to build mathematical localization systems. Research has indicated that spectral cues play an important role in 3-D localization. Therefore, it seems conceivable to design a filtering system which can alter the localization of a sound source, either for correctional purposes or listener preference. Such filters, known as Interpositional Transfer Functions (IPTFs), can be formed from division in the z-domain of Head-Related Transfer Functions (HRTFs). HRTFs represent the free-field response of the human body to sound processed by the ears. In filtering applications, the use of IIR filters is often favored over that of FIR filters because they preserve resolution while minimizing the number of required coefficients. Several methods exist for creating IIR filters from their representative FIR counterparts; for complicated filters, genetic algorithms (GAs) have proven effective. The research summarized in this thesis combines the past efforts of researchers in the fields of sound localization, genetic algorithms, and adaptive filtering. It represents the initial stage in the development of a practical system for future hardware implementation which uses a genetic algorithm as a driving engine. Under ideal conditions, an IIR filter design system has been demonstrated to successfully model several IPTF pairs which alter sound localization when applied to non-minimum-phase HRTFs obtained from free-field measurements.
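The z-domain division that defines an IPTF can be sketched as a regularized spectral division of one HRTF by another. The function name, regularization constant, and toy impulse responses below are illustrative assumptions; the thesis goes further by fitting IIR filters to such responses with a genetic algorithm aided by an adaptive filter.

```python
import numpy as np

def iptf_fir(hrtf_source, hrtf_target, eps=1e-6):
    """FIR approximation of an Interpositional Transfer Function.

    The IPTF is the target HRTF divided by the source HRTF in the
    frequency (z-) domain; eps regularizes near-zeros of the source
    response. Illustrative sketch only, not the thesis's GA-fitted
    IIR design.
    """
    n = max(len(hrtf_source), len(hrtf_target))
    Hs = np.fft.rfft(hrtf_source, n)
    Ht = np.fft.rfft(hrtf_target, n)
    Hi = Ht * np.conj(Hs) / (np.abs(Hs) ** 2 + eps)  # regularized division
    return np.fft.irfft(Hi, n)

# toy 4-tap responses standing in for measured HRTFs (hypothetical data)
src = np.array([1.0, 0.5, 0.2, 0.05])
tgt = np.array([0.8, 0.3, 0.4, 0.1])
iptf = iptf_fir(src, tgt)

# applying the IPTF to the source response (circularly) recovers the target
recon = np.fft.irfft(np.fft.rfft(src, 4) * np.fft.rfft(iptf, 4), 4)
```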

    Efficient channel equalization algorithms for multicarrier communication systems

    A blind adaptive algorithm that updates time-domain equalizer (TEQ) coefficients by Adjacent Lag Auto-correlation Minimization (ALAM) is proposed to shorten the channel for multicarrier modulation (MCM) systems. ALAM is an addition to the family of existing correlation-based algorithms and achieves similar or better performance than existing algorithms at lower complexity. This is achieved by designing a cost function without the sum-square term and by utilizing the symmetrical-TEQ property to halve the complexity of TEQ adaptation compared with existing methods. Furthermore, to avoid the limitations of low, unstable bit rates and high complexity, an adaptive TEQ using equal-taps constraints (ETC) is introduced to maximize the bit rate with the lowest complexity. An IP core is developed for the low-complexity ALAM (LALAM) algorithm to be implemented on an FPGA. This implementation is extended to include the moving average (MA) estimate for the ALAM algorithm, referred to as ALAM-MA. A unit-tap constraint (UTC) is used instead of a unit-norm constraint (UNC) while updating the adaptive algorithm, to avoid the all-zero solution for the TEQ taps. The IP core is implemented on a Xilinx Virtex-II Pro XC2VP7-FF672-5 for ADSL receivers, and gate-level simulation guaranteed successful operation at maximum frequencies of 27 MHz and 38 MHz for the ALAM-MA and LALAM algorithms, respectively. A frequency-domain equalizer (FEQ) is used after channel shortening by the TEQ to recover QAM signals distorted by channel effects. A new analytical learning-based framework is proposed to jointly solve the equalization and symbol detection problems in orthogonal frequency division multiplexing (OFDM) systems with QAM signals. The framework utilizes an extreme learning machine (ELM) to achieve fast training, high performance, and low error rates. The proposed framework operates in the real domain by transforming each complex signal into a single 2-tuple real-valued vector. This transformation enables equalization in the real domain with minimal computational load and high accuracy. Simulation results show that the proposed framework outperforms other learning-based equalizers in terms of symbol error rates and training speeds.
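The ELM idea described above — split complex samples into 2-tuple real vectors, push them through a fixed random hidden layer, and solve for the output weights in a single least-squares step — can be sketched on a toy 4-QAM channel. The layer size, tanh activation, pilot count, and channel gain are illustrative assumptions, not the thesis's exact design.

```python
import numpy as np

def elm_fit(X, T, n_hidden=64, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output.

    X: real-valued inputs (complex samples split into [Re, Im] pairs),
    T: target symbol coordinates. Sizes and activation are illustrative.
    """
    rng = np.random.default_rng(seed)
    Wi = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ Wi + b)                       # random feature map
    Wo, *_ = np.linalg.lstsq(H, T, rcond=None)    # one-shot training
    return Wi, b, Wo

def elm_predict(X, Wi, b, Wo):
    return np.tanh(X @ Wi + b) @ Wo

# toy 4-QAM through a complex channel gain h, trained on pilot symbols
rng = np.random.default_rng(3)
h = 0.9 + 0.4j
sym = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=2000)
r = h * sym + 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
X = np.column_stack([r.real, r.imag])   # complex -> 2-tuple real vector
T = np.column_stack([sym.real, sym.imag])
Wi, b, Wo = elm_fit(X[:1000], T[:1000])
est = elm_predict(X[1000:], Wi, b, Wo)
ser = np.mean(np.sign(est) != np.sign(T[1000:]))  # per-component error rate
```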

    CONNECTIONIST SPEECH RECOGNITION - A Hybrid Approach


    Learning classifier systems from first principles: A probabilistic reformulation of learning classifier systems from the perspective of machine learning

    Learning Classifier Systems (LCS) are a family of rule-based machine learning methods. They aim at the autonomous production of potentially human-readable results that form the most compact generalised representation whilst also maintaining high predictive accuracy, and they have a wide range of application areas, such as autonomous robotics, economics, and multi-agent systems. Their design is mainly approached heuristically and, even though their performance is competitive in regression and classification tasks, they do not meet their expected performance in sequential decision tasks despite being initially designed for such tasks. It is our contention that improvement is hindered by a lack of theoretical understanding of their underlying mechanisms and dynamics.

    Applications of neural networks to control systems

    Doctoral thesis, Electronic Engineering, School of Electronic Engineering Science, Univ. of Wales, Bangor, 1992. This work investigates the applicability of artificial neural networks to control systems. The following properties of neural networks are identified as of major interest to this field: their ability to implement nonlinear mappings, their massively parallel structure and their capacity to adapt. Exploiting the first feature, a new method is proposed for PID autotuning. Based on integral measures of the open- or closed-loop step response, multilayer perceptrons (MLPs) are used to supply PID parameter values to a standard PID controller. Before being used on-line, the MLPs are trained off-line to provide PID parameter values based on integral performance criteria. Off-line simulations, where a plant with time-varying parameters and a time-varying transfer function is considered, show that well-damped responses are obtained. The neural PID autotuner is subsequently implemented in real time. Extensive experimentation confirms the good results obtained in the off-line simulations. To reduce the training time incurred when using the error back-propagation algorithm, three possibilities are investigated. A comparative study of higher-order methods of optimization identifies the Levenberg-Marquardt (LM) algorithm as the best method. When used for function approximation purposes, the neurons in the output layer of the MLPs have a linear activation function. Exploiting this linearity, the standard training criterion can be replaced by a new, yet equivalent, criterion. Using the LM algorithm to minimize this new criterion, together with an alternative form of the Jacobian matrix, a new learning algorithm is obtained. This algorithm is subsequently parallelized: its main blocks of computation are identified, separately parallelized, and finally connected together. Executing the new learning algorithm on 7 Inmos transputers reduces the training time of MLPs by a factor greater than 70.
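The basic Levenberg-Marquardt iteration referred to above can be sketched as a damped Gauss-Newton loop. The damping schedule (multiply or divide the damping factor by 10) and the small exponential-fit example are generic illustrations, not the thesis's reformulated criterion, alternative Jacobian, or parallel implementation.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, n_iter=50, lam=1e-2):
    """Generic Levenberg-Marquardt loop for a least-squares criterion.

    residual(w) returns the error vector e(w); jacobian(w) its Jacobian.
    The damping update is the standard heuristic: shrink lam on success
    (toward Gauss-Newton), grow it on failure (toward gradient descent).
    """
    w = w0.copy()
    cost = 0.5 * np.sum(residual(w) ** 2)
    for _ in range(n_iter):
        e, J = residual(w), jacobian(w)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ e)
        w_new = w - step
        cost_new = 0.5 * np.sum(residual(w_new) ** 2)
        if cost_new < cost:            # accept step, reduce damping
            w, cost, lam = w_new, cost_new, lam / 10
        else:                          # reject step, increase damping
            lam *= 10
    return w

# fit y = exp(a*x) + b to noiseless samples of exp(0.5x) + 2
x = np.linspace(0.0, 2.0, 40)
y = np.exp(0.5 * x) + 2.0
res = lambda w: np.exp(w[0] * x) + w[1] - y
jac = lambda w: np.column_stack([x * np.exp(w[0] * x), np.ones_like(x)])
w = levenberg_marquardt(res, jac, np.array([0.0, 0.0]))  # -> close to [0.5, 2.0]
```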