Composite learning adaptive backstepping control using neural networks with compact supports
© 2019 John Wiley & Sons, Ltd. The ability to learn is crucial for neural network (NN) control, as it enhances the overall stability and robustness of control systems. In this study, a composite learning control strategy is proposed for a class of strict-feedback nonlinear systems with mismatched uncertainties, where raised-cosine radial basis function NNs with compact supports are applied to approximate system uncertainties. Both online historical data and instantaneous data are utilized to update NN weights. Practical exponential stability of the closed-loop system is established under a weak excitation condition termed interval excitation. The proposed approach ensures fast parameter convergence, implying exact estimation of plant uncertainties, without requiring the trajectory of NN inputs to be recurrent or the time derivatives of plant states. The raised-cosine radial basis function NNs not only reduce computational cost but also facilitate the exact determination of the subregressor activated along any trajectory of NN inputs, so that the interval excitation condition is verifiable. Numerical results verify the validity and superiority of the proposed approach.
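The compact support mentioned above is what makes the active subregressor exactly determinable: a raised-cosine RBF is identically zero beyond a finite radius. A minimal sketch of one common form of such a basis function (the specific parameterization used in the paper may differ) is:

```python
import math

def raised_cosine_rbf(r, radius):
    """Raised-cosine radial basis function with compact support.

    r: distance from the input to the node's center.
    radius: support radius; the function is exactly zero for r >= radius,
    so only nodes whose support ball the trajectory enters are active.
    """
    if r >= radius:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * r / radius))
```

Because each node is active only inside its support ball, the set of basis functions excited along any trajectory of NN inputs can be listed exactly, which is what keeps both the computation and the interval-excitation check tractable.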
A New Approach to Pruning Volterra Models for Power Amplifiers
The objective of this paper is to present an approach to behavioral modeling that can be applied to predict the nonlinear response of power amplifiers with memory. Starting with the discrete-time, complex-baseband full Volterra model, we define a novel methodology that retains only radial branches that can be implemented with one-dimensional finite impulse response filters. This model is subsequently simplified by selecting a subset of directions using an ad hoc procedure. Both models are evaluated in terms of accuracy in the time and frequency domains and complexity, and are compared with other models described in the literature. The evaluation is conducted using a low-voltage silicon RF driver amplifier and a 5-W PA, which are characterized at different levels with diverse modulation formats, including wideband code-division multiple-access (WCDMA) and orthogonal frequency-division multiplexed (OFDM) signals. In all cases, comparison of the measured and simulated responses confirms the effectiveness of the proposed approach. Funding: CICYT TEC2008-06259/TEC; Junta de Andalucía P07-TIC-0264.
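As an illustration of the idea of keeping only branches implementable with one-dimensional FIR filters, the widely used memory polynomial, itself a heavily pruned Volterra model, can be sketched as follows. The paper's radial-branch model is more general, and the orders and coefficients here are purely illustrative:

```python
# Sketch of a pruned Volterra (memory-polynomial) PA model: only
# "diagonal" branches x[n-m] * |x[n-m]|**(k-1) are retained, each of
# which is a one-dimensional FIR filter acting on a static nonlinearity.
# Coefficient values are illustrative, not taken from the paper.

def memory_polynomial(x, coeffs):
    """x: list of complex-baseband input samples.
    coeffs: dict mapping nonlinearity order k (odd) to a list of
    FIR tap coefficients indexed by memory depth m."""
    y = []
    for n in range(len(x)):
        acc = 0j
        for k, taps in coeffs.items():        # k = 1, 3, 5, ...
            for m, c in enumerate(taps):
                if n - m >= 0:
                    xm = x[n - m]
                    acc += c * xm * abs(xm) ** (k - 1)
        y.append(acc)
    return y
```

Each `(k, m)` pair is one radial branch; pruning amounts to choosing which orders and memory taps to keep.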
EMPATH: A Neural Network that Categorizes Facial Expressions
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
Signal Processing in Wireless Communications: Device Fingerprinting and Wide-Band Interference Rejection
The rapid progress of wireless communication technologies in recent years has significantly improved the quality of everyday life. However, with this expansion of wireless communication systems come significant security threats and technological challenges, both of which stem from the fact that the communication medium is shared. The ubiquity of open wireless Internet access networks creates a new avenue for cyber-criminals to impersonate and act in an unauthorized way. The increasing number of deployed wide-band wireless communication systems entails technological challenges for effective utilization of the shared medium, which implies the need for advanced interference rejection methods. Wireless security and interference rejection in wide-band wireless communications are therefore often considered the two main challenges in wireless network design and research. Important aspects of these challenges are illuminated and addressed in this dissertation.
This dissertation considers signal processing approaches for exploiting or mitigating the effects of non-ideal components in wireless communication systems. In the first part of the dissertation, we introduce and study a novel, model-based approach to wireless device identification that exploits imperfections in the transmitter caused by manufacturing process nonidealities. Previous approaches to device identification based on hardware imperfections vary from transient analysis to machine learning but have not provided verifiable accuracy. Here, we detail a model-based approach that uses statistical models of RF transmitter components (digital-to-analog converter, power amplifier, and RF oscillator) which are amenable to analysis. Our proposed approach examines the key device characteristics that cause anonymity loss, countermeasures that can be applied by the nodes to regain anonymity, and ways of thwarting such countermeasures. We develop identification algorithms based on statistical signal processing methods and address the challenging scenario in which the units that need to be distinguished from one another are of the same model and from the same manufacturer. Using simulations and measurements of components that are commonly used in commercial communications systems, we show that our anonymity-breaking techniques are effective.
In the second part of the dissertation, we consider innovative approaches for the acquisition of frequency-sparse signals with wide-band receivers when a weak signal of interest is received in the presence of a very strong interference, and the effects of the nonlinearities in the low-noise amplifier at the receiver must be mitigated. All samples with amplitude above a given threshold, dictated by the linear input range of the receiver, are discarded to avoid the distortion caused by saturation of the low-noise amplifier. Such a sampling scheme, while avoiding nonlinear distortion that cannot be corrected in the digital domain, poses challenges for signal reconstruction techniques, as the samples are taken non-uniformly, but also non-randomly. The considered approaches fall into the field of compressive sensing (CS); however, what differentiates them from conventional CS is that a structure is forced upon the measurement scheme, violating the core CS assumption that the measurements are random. We consider two different types of structured acquisition: signal-independent and signal-dependent. For the first case, we derive bounds on the number of samples needed for successful CS recovery when samples are drawn at random in predefined groups. For the second case, we consider enhancements of CS recovery methods when only small-amplitude samples of the signal to be recovered are available. Finally, we address the problem of spectral leakage due to the limited block size of block-processing wide-band receivers and propose an adaptive block size adjustment method, which leads to significant dynamic range improvements.
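The signal-dependent acquisition described above can be sketched as a simple threshold rule: any sample outside the receiver's linear input range is discarded before reconstruction, which is exactly what makes the remaining samples non-uniform and non-random. The threshold and signal values here are illustrative:

```python
# Sketch of amplitude-thresholded acquisition: samples whose magnitude
# exceeds the receiver's linear input range are dropped to avoid
# uncorrectable nonlinear distortion from LNA saturation.

def select_linear_samples(samples, threshold):
    """Return (indices, values) of the samples within the linear range."""
    kept = [(n, s) for n, s in enumerate(samples) if abs(s) <= threshold]
    indices = [n for n, _ in kept]
    values = [s for _, s in kept]
    return indices, values
```

The retained (index, value) pairs then feed a CS-style recovery that must cope with this structured, amplitude-dependent sampling pattern rather than a random one.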
Biologically inspired speaker verification
Speaker verification is an active research problem that has been addressed using a variety of different classification techniques. However, in general, methods inspired by the human auditory system tend to show better verification performance than other methods. In this thesis, three biologically inspired speaker verification algorithms are presented.
Bacterial Foraging Based Channel Equalizers
A channel equalizer is one of the most important subsystems in any digital communication receiver. It is also the subsystem that consumes the most computation time in the receiver. Traditionally, maximum-likelihood sequence estimation (MLSE) was the most popular form of equalizer. Owing to the non-stationary characteristics of the communication channel, however, MLSE receivers perform poorly. Under these circumstances, maximum a-posteriori probability (MAP) receivers, also called Bayesian receivers, perform better.
Natural selection tends to eliminate animals with poor “foraging strategies” and favor the
propagation of genes of those animals that have successful foraging strategies since they
are more likely to enjoy reproductive success. After many generations, poor foraging
strategies are either eliminated or shaped into good ones (redesigned). Logically, such
evolutionary principles have led scientists in the field of “foraging theory” to
hypothesize that it is appropriate to model the activity of foraging as an optimization
process.
This thesis presents an investigation into the design of a bacterial-foraging-based channel equalizer for digital communication. Extensive simulation studies show that the performance of the proposed receiver is close to that of the optimal receiver for a variety of channel conditions. The proposed receiver also provides near-optimal performance when the channel suffers from nonlinearities.
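To make the foraging analogy concrete, the core chemotactic move of bacterial foraging optimization (tumble to a random direction, then swim while the cost keeps improving) can be sketched as below. In the thesis the cost would be the equalizer's decision error; the step size and swim limit here are illustrative:

```python
import random

# Minimal sketch of one chemotactic step of bacterial foraging
# optimization (BFO), minimizing a generic cost function.

def chemotaxis(cost, x, step=0.1, swims=4):
    """One tumble-and-swim move: pick a random unit direction, then keep
    swimming along it while the cost strictly improves."""
    direction = [random.uniform(-1, 1) for _ in x]
    norm = sum(d * d for d in direction) ** 0.5 or 1.0
    direction = [d / norm for d in direction]
    best, best_cost = list(x), cost(x)
    for _ in range(swims):
        candidate = [xi + step * di for xi, di in zip(best, direction)]
        c = cost(candidate)
        if c >= best_cost:
            break
        best, best_cost = candidate, c
    return best, best_cost
```

A full BFO equalizer repeats such steps for a population of bacteria, with reproduction favoring the low-cost (well-fed) ones and occasional elimination-dispersal events, mirroring the natural-selection argument above.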
Adaptive equalisation for fading digital communication channels
This thesis considers the design of new adaptive equalisers for fading digital communication channels. The role of equalisation is discussed in the context of the functions of a digital radio communication system, and both conventional and more recent novel equaliser designs are described. The application of recurrent neural networks to the problem of equalisation is developed from a theoretical study of a single-node structure to the design of multinode structures. These neural networks are shown to cancel intersymbol interference in a manner mimicking conventional techniques, and simulations demonstrate their sensitivity to symbol estimation errors. In addition, the error mechanisms of conventional maximum-likelihood equalisers operating on rapidly time-varying channels are investigated, highlighting the problems of channel estimation using delayed and often incorrect symbol estimates. The relative sensitivity of Bayesian equalisation techniques to errors in the channel estimate is studied, demonstrating that this structure's equalisation capability is also susceptible to such errors. Applications of multiple-channel-estimator methods are developed, leading to reduced-complexity structures which trade performance for a smaller computational load. These novel structures are shown to provide an improvement over conventional techniques, especially for rapidly time-varying channels, by reducing the time delay in the channel estimation process. Finally, the use of confidence measures of the equaliser's symbol estimates to improve channel estimation is studied, isolating the critical areas in the development of the technique: the production of reliable confidence measures by the equalisers and the statistics of symbol estimation error bursts.
Ensemble approach on enhanced compressed noise EEG data signal in wireless body area sensor network
The Wireless Body Area Sensor Network (WBASN) is used for communication among sensor nodes operating on or inside the human body in order to monitor vital body parameters and movements. One of the important applications of WBASN is healthcare monitoring of patients with chronic diseases such as epileptic seizures. Normally, epileptic-seizure data from the electroencephalograph (EEG) is captured and compressed in order to reduce its transmission time. However, noise at the receiver side contaminates the overall data and lowers classification accuracy. Previous work also did not take into consideration the large size of the collected EEG data, which makes EEG transmission bandwidth-intensive. Hence, the main goal of this work is to design a unified compression and classification framework for the delivery of EEG data that addresses its large size. Another goal is to reconstruct the compressed data and then recognize it. Therefore, a Noise Signal Combination (NSC) technique is proposed for the compression of the transmitted EEG data and the enhancement of its classification accuracy at the receiving side in the presence of noise and incomplete data. The proposed framework combines compressive sensing and the discrete cosine transform (DCT) in order to reduce the size of the transmitted data. Moreover, a Gaussian noise model of the transmission channel is incorporated into the framework. At the receiving side, the proposed NSC is designed based on weighted voting over four classification techniques, namely Artificial Neural Network, Naïve Bayes, k-Nearest Neighbour, and Support Vector Machine classifiers, whose accuracies are fed to the NSC. The experimental results showed that the proposed technique outperforms conventional techniques, achieving the highest accuracy for both noiseless and noisy data. Furthermore, the framework plays a significant role in reducing the size of the data and classifying both noisy and noiseless data. The key contributions are the unified framework and the proposed NSC, which improve accuracy on large noiseless and noisy EEG data. The results demonstrate the effectiveness of the proposed framework and several credible benefits, including simplicity and accuracy enhancement. Finally, the research improves clinical information about patients who suffer not only from epilepsy but also from other neurological, mental, or physiological disorders.
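The weighted-voting combination at the heart of the NSC can be sketched as follows, assuming each classifier's vote is weighted by the accuracy fed into the combiner; the classifier names and accuracy values below are placeholders, not results from the thesis:

```python
# Sketch of accuracy-weighted voting in the spirit of the proposed NSC:
# each base classifier votes for a label, and its vote is weighted by
# its (e.g. validation) accuracy.

def weighted_vote(predictions, accuracies):
    """predictions: dict classifier -> predicted label.
    accuracies: dict classifier -> accuracy weight in [0, 1].
    Returns the label with the largest total weight."""
    scores = {}
    for clf, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + accuracies[clf]
    return max(scores, key=scores.get)
```

With this rule, two highly accurate classifiers can outvote two weaker ones even when the raw vote count is tied.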
Dynamic evolving neural fuzzy inference system equalization scheme in mode division multiplexing for optical fiber transmission
The performance of optical mode division multiplexing (MDM) is affected by intersymbol interference (ISI) from nonlinear channel impairments arising from higher-order mode coupling and modal dispersion in multimode fiber (MMF). However, the existing MDM equalization algorithms can only mitigate linear distortion; they cannot address nonlinear distortion in the signal accurately. Therefore, there is a need to explore how ISI can be mitigated to recover the transmitted signal. This research aims to control the broadening of the MDM signal and minimize the undesirable distortion among channels in MMF by signal reshaping at the receiver. A dynamic evolving neural fuzzy inference system (DENFIS) equalization scheme has been used to achieve this objective. The research was conducted in several steps, commencing with modelling the MDM system in OptSim and collecting the data. Then, the signal-reshaping parameters were determined. After that, DENFIS equalization as well as least mean square (LMS) and recursive least squares (RLS) equalizations were implemented and evaluated. Results illustrate that the nonlinear DENFIS equalization scheme can recover the MDM signal with higher accuracy than previous linear equalization schemes. DENFIS equalization demonstrates better signal-reshaping accuracy, with an average root mean square error (RMSE) of 0.0338, and outperforms the linear LMS and RLS equalization schemes, whose higher average RMSE values are 0.101 and 0.1914, respectively. The reduced RMSE implies that the DENFIS equalization scheme mitigates ISI more effectively in a nonlinear channel, which can increase data transmission rates in MDM. Moreover, the successful offline implementation of DENFIS equalization in MDM encourages future online implementation of DENFIS equalization in embedded optical systems.
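For contrast with DENFIS, the LMS baseline used in the comparison can be sketched as a linear FIR equalizer whose taps are trained against known symbols; the tap count and step size here are illustrative:

```python
# Sketch of an LMS-trained linear FIR equalizer, the kind of linear
# baseline that the nonlinear DENFIS scheme is compared against.

def lms_equalize(received, desired, n_taps=3, mu=0.05):
    """Train FIR taps on (received, desired) sample pairs.
    Returns the final taps and the equalizer outputs over the run."""
    w = [0.0] * n_taps
    outputs = []
    for n, d in enumerate(desired):
        # Most recent n_taps received samples, zero-padded at the start.
        x = [received[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y                                   # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        outputs.append(y)
    return w, outputs
```

Because the output is a fixed linear combination of delayed samples, such an equalizer cannot undo nonlinear channel distortion, which is the gap the abstract says DENFIS addresses.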
Artificial Neural Network Based Channel Equalization
The field of digital data communications has experienced explosive growth in the last three decades. With the growth of Internet technologies, high-speed and efficient data transmission over communication channels has gained significant importance. The rate of data transmission over a communication system is limited by the effects of linear and nonlinear distortion. Linear distortions occur in the form of inter-symbol interference (ISI), co-channel interference (CCI) and adjacent channel interference (ACI) in the presence of additive white Gaussian noise. Nonlinear distortions are caused by subsystems such as amplifiers, modulators and demodulators, along with the nature of the medium. Sometimes burst noise also occurs in the communication system. Different equalization techniques are used to mitigate these effects, and adaptive channel equalizers are used in digital communication systems. The equalizer, located at the receiver, removes the effects of ISI, CCI and burst noise interference and attempts to recover the transmitted symbols. It has been seen that linear equalizers show poor performance, whereas nonlinear equalizers provide superior performance. Artificial-neural-network equalizers based on the multilayer perceptron (MLP) have been used for equalization in the last two decades. Such an equalizer is a feed-forward network consisting of one or more hidden nodes between its input and output layers and is trained by the popular error-based back-propagation (BP) algorithm. However, this algorithm suffers from a slow convergence rate, depending on the size of the network. It has been seen that an optimal equalizer based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network. In an RBF equalizer, centres are fixed using K-means clustering and weights are trained using the LMS algorithm. An RBF equalizer can mitigate ISI effectively, providing a minimum bit-error-rate (BER) plot.
However, when the input order is increased, the number of centres in the network grows and the network becomes more complicated. An RBF network designed to mitigate the effects of CCI is very complex, with a large number of centres.
To overcome these computational complexity issues, a single-neuron-based Chebyshev neural network (ChNN) and a functional link ANN (FLANN) have been proposed. These neural networks are single-layer networks in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, and they have the capability to provide arbitrarily complex decision regions.
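The functional expansion these single-layer networks rely on can be sketched with Chebyshev polynomials: each input sample is mapped through T_0..T_k via the standard recurrence, and a single linear layer then operates on the expanded pattern. The expansion order below is illustrative:

```python
# Sketch of the Chebyshev functional expansion used by ChNN/FLANN-type
# equalizers: each scalar input u is replaced by the vector
# (T_0(u), ..., T_order(u)) using the recurrence
# T_0 = 1, T_1 = u, T_{k+1}(u) = 2*u*T_k(u) - T_{k-1}(u).

def chebyshev_expand(x, order=3):
    """Expand each scalar in x with Chebyshev polynomials up to `order`."""
    expanded = []
    for u in x:
        t = [1.0, u]
        for _ in range(2, order + 1):
            t.append(2.0 * u * t[-1] - t[-2])
        expanded.extend(t[: order + 1])
    return expanded
```

Training then reduces to learning a single weight vector over the expanded pattern, which is far cheaper than training a multilayer network while still yielding nonlinear decision boundaries in the original input space.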
More recently, a rank-based statistics approach known as the Wilcoxon learning method has been proposed for signal processing applications. The Wilcoxon learning algorithm has been applied to neural networks such as the Wilcoxon Multilayer Perceptron Neural Network (WMLPNN) and the Wilcoxon Generalized Radial Basis Function Network (WGRBF). The Wilcoxon approach provides a promising methodology for many machine learning problems, which motivated us to introduce these networks to the channel equalization application. In this thesis we have used the WMLPNN and WGRBF networks to mitigate ISI, CCI and burst noise interference. It is observed that equalizers trained with the Wilcoxon learning algorithm offer improved convergence characteristics and bit-error-rate performance in comparison with gradient-based training for MLP and RBF. Extensive simulation studies have been carried out to validate the proposed technique. The performance of the Wilcoxon networks is better than that of linear equalizers trained with the LMS and RLS algorithms, and of the RBF equalizer, in the case of burst noise and CCI mitigation.