4 research outputs found

    Adaptive Channel Equalization using Radial Basis Function Networks and MLP

    Get PDF
    One of the major practical problems in digital communication systems is channel distortion, which causes errors due to intersymbol interference (ISI). Since the source signal is in general broadband, its frequency components experience different steady-state amplitude and phase changes as they pass through the channel, distorting the received message and producing errors in the received sequence. The task of the communication engineer is to restore the transmitted sequence or, equivalently, to identify the inverse of the channel, given the observed sequence at the channel output. This task is accomplished by adaptive equalizers. Typically, adaptive equalizers used in digital communications require an initial training period, during which a known data sequence is transmitted. A replica of this sequence is made available at the receiver in proper synchronism with the transmitter, making it possible to adjust the equalizer coefficients in accordance with the adaptive filtering algorithm employed in the equalizer design. When training is completed, the equalizer is switched to its decision-directed mode. Decision feedback equalizers (DFEs) are used extensively in practical communication systems: they are more powerful than linear equalizers, especially for channels with severe ISI, and do not suffer as much noise enhancement. This thesis addresses the problem of adaptive channel equalization in environments where the interfering noise exhibits Gaussian behavior. A radial basis function (RBF) network is used to implement the DFE; the advantages and problems of this system are discussed, and its results are compared with a DFE based on a multilayer perceptron (MLP) network. Results indicate that the implemented system outperforms both the least-mean-square (LMS) algorithm and the MLP at the same signal-to-noise ratio, as it attains the minimum mean square error. The learning rate of the implemented system is also faster than that of both the LMS and the MLP equalizers.
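
    To make the training-then-decision-directed procedure described above concrete, here is a minimal sketch of a plain LMS linear equalizer (the baseline the thesis compares against, not its RBF-DFE implementation). The tap count, step size, channel taps, and noise level are illustrative assumptions.

```python
# Sketch: LMS linear equalizer. A known training sequence adapts the tap
# weights; afterwards the equalizer switches to decision-directed mode and
# uses its own symbol decisions as the desired signal.
import numpy as np

rng = np.random.default_rng(0)

def lms_equalizer(received, training, n_taps=11, mu=0.01):
    """Adapt equalizer taps with LMS, then run decision-directed."""
    w = np.zeros(n_taps)
    decisions = []
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # regressor of recent channel outputs
        y = w @ x                                   # equalizer output
        # desired symbol: known replica during training, own decision afterwards
        d = training[n] if n < len(training) else np.sign(y)
        e = d - y                                   # error signal
        w += mu * e * x                             # LMS weight update
        decisions.append(np.sign(y))
    return w, np.array(decisions)

# Illustrative BPSK transmission over an assumed dispersive (ISI) channel
symbols = rng.choice([-1.0, 1.0], size=2000)
channel = np.array([0.3, 1.0, 0.3])                 # hypothetical channel impulse response
received = np.convolve(symbols, channel, mode="same")
received = received + 0.1 * rng.standard_normal(len(received))
w, dec = lms_equalizer(received, training=symbols[:500])
```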

    Nonlinear parameter estimation in classification problems

    No full text
    A nonlinear generalisation of the perceptron learning algorithm is presented and analysed. The new algorithm is designed for learning nonlinearly parametrised decision regions, and it is shown that it can be viewed as a stepwise gradient descent of a certain cost function. Averaging theory is used to describe the behaviour of the algorithm, and in the process conditions guaranteeing convergence of the algorithm are established. Because these conditions are hard to test, some simpler sufficient conditions are derived using the directional derivative of the instantaneous cost. A number of simulation examples and applications are given, showing the variety of situations in which the algorithm can be used. The initial analysis assumes a great deal of a priori knowledge about the decision region to be learnt; in particular, it is assumed that the decision region is parametrised by some known (nonlinear) function. Often in applications a general class of decision regions must be assumed, in which case the best approximation from the class is sought. It is shown that function approximation results can be used to derive degree-of-approximation results for decision regions; the approximating classes of decision regions considered are described by polynomial and neural network parametrisations. One shortcoming of all gradient descent type algorithms, such as the online learning algorithm discussed in the first part of this thesis, is that estimates may be attracted to local minima of the cost function. This is a problem because local minima occur in many interesting cases, so a modified version of the algorithm that avoids such traps is presented. In the new algorithm, a number of parameter estimates (called a congregation) are kept at any one time, and periodically all but the best estimate are restarted. Convergence of the new algorithm is established using the same averaging theory used for the first algorithm. A probabilistic result concerning the expected time to convergence of the algorithm is given, and the effect of different population sizes is investigated. Again, a number of simulation examples are presented, including an application to the CMA algorithm for blind equalisation.
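
    The following is a hedged sketch of one plausible reading of such a stepwise gradient-descent, perceptron-style update for a nonlinearly parametrised decision region {x : f(x, theta) > 0}. The circular parametrisation, the instantaneous cost, and the step size are illustrative assumptions, not the thesis' exact formulation.

```python
# Sketch: online gradient descent on the instantaneous perceptron-style cost
# max(0, -label * f(theta, x)) for a nonlinearly parametrised decision region.
import numpy as np

def f(theta, x):
    """Assumed nonlinear parametrisation: a disc with centre theta[:2] and radius theta[2]."""
    centre, radius = theta[:2], theta[2]
    return radius**2 - np.sum((x - centre) ** 2)

def grad_f(theta, x):
    """Gradient of f with respect to theta = (centre, radius)."""
    centre, radius = theta[:2], theta[2]
    return np.concatenate([2.0 * (x - centre), [2.0 * radius]])

def online_update(theta, x, label, step=0.05):
    """One stepwise gradient-descent step; only misclassified samples move theta."""
    if label * f(theta, x) <= 0:
        theta = theta + step * label * grad_f(theta, x)
    return theta

# Toy run: learn a disc of radius 1 centred at the origin from streaming samples
rng = np.random.default_rng(1)
theta = np.array([0.5, -0.5, 0.3])                   # initial parameter estimate
for _ in range(5000):
    x = rng.uniform(-2, 2, size=2)
    label = 1.0 if np.sum(x**2) < 1.0 else -1.0      # true decision region
    theta = online_update(theta, x, label)
```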

    Epileptic Seizures and the EEG

    Get PDF
    A study of epilepsy from an engineering perspective, this volume begins by summarizing the physiology and the fundamental ideas behind the measurement, analysis and modeling of the epileptic brain. It introduces the EEG and explains the types of brain activity likely to register in EEG measurements, offering an overview of how these EEG records have been, and are currently, analyzed. The book focuses on the problem of seizure detection and surveys the physiologically based dynamic models of brain activity. Finally, it addresses the fundamental question: can seizures be predicted? Based on the authors' extensive research, the book concludes by exploring a range of future possibilities in seizure prediction.

    Online Learning via Congregational Gradient Descent

    No full text
    We propose and analyse a populational version of stepwise gradient descent suitable for a wide range of learning problems. The algorithm is motivated by genetic algorithms, which update a population of solutions rather than just a single representative as is typical for gradient descent. This modification of traditional gradient descent (as used, for example, in the backpropagation algorithm) avoids getting trapped in local minima. We use an averaging analysis of the algorithm to relate its behaviour to an associated ordinary differential equation. We derive a result concerning how long one has to wait in order that, with a given high probability, the algorithm is within a certain neighbourhood of the global minimum. We also analyse the effect of different population sizes. An example is presented which corroborates our theory very well.
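
    A minimal sketch of the congregational idea described above: maintain a population of parameter estimates, move each by stepwise gradient descent, and periodically restart every estimate except the current best. The toy multimodal cost, population size, step size, and restart period are illustrative assumptions, not the paper's setup.

```python
# Sketch: congregational gradient descent on a toy multimodal cost.
import numpy as np

rng = np.random.default_rng(2)

def cost(theta):
    """Toy cost with many local minima and a single global minimum (assumed for the demo)."""
    return np.sin(3 * theta) + 0.1 * (theta - 3.0) ** 2

def grad(theta, eps=1e-5):
    """Numerical derivative of the toy cost."""
    return (cost(theta + eps) - cost(theta - eps)) / (2 * eps)

def congregational_gd(pop_size=8, steps=4000, step_size=0.01, restart_every=500):
    population = rng.uniform(-10, 10, size=pop_size)          # congregation of estimates
    for t in range(1, steps + 1):
        population -= step_size * np.array([grad(th) for th in population])
        if t % restart_every == 0:
            best = population[np.argmin([cost(th) for th in population])]
            population = rng.uniform(-10, 10, size=pop_size)  # restart all estimates...
            population[0] = best                              # ...except the best one
    return population[np.argmin([cost(th) for th in population])]

print(congregational_gd())
```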