
    On generalized adaptive neural filter

    Linear filters have historically been the most widely used tools for suppressing noise in signal processing. It has been shown that the optimal filter minimizing the mean square error (MSE) between the filter output and the desired output is a linear filter, provided that the noise is additive white Gaussian noise (AWGN). However, in most signal processing applications, the noise in the channel through which a signal is transmitted is not AWGN: it is not stationary, and it may have unknown characteristics. To overcome the shortcomings of linear filters, nonlinear filters ranging from median filters to stack filters have been developed. They have been successfully used in a number of applications, such as enhancing the signal-to-noise ratio of telecommunication receivers, modeling the human vocal tract to synthesize speech, and separating maternal and fetal electrocardiogram signals to diagnose prenatal ailments. In particular, stack filters have been shown to provide robust noise suppression and are easily implementable in hardware, but configuring an optimal stack filter remains a challenge. This dissertation takes on this challenge by extending stack filters to a new class of nonlinear adaptive filters called generalized adaptive neural filters (GANFs). The objective of this work is to investigate their performance in terms of the mean absolute error criterion, to evaluate and predict the generalization of various discriminant functions employed for GANFs, and to address issues regarding their applications and implementation. It is shown that GANFs not only extend the class of stack filters, but also have better performance in suppressing non-additive white Gaussian noise.
Several results are drawn from the theoretical and experimental work: stack filters can be adaptively configured by neural networks; GANFs encompass a large class of nonlinear sliding-window filters that includes stack filters; the mean absolute error (MAE) of the optimal GANF is upper-bounded by that of the optimal stack filter; a suitable class of discriminant functions can be determined before a training scheme is executed; VC dimension (VCdim) theory can be applied to determine the number of training samples; and the algorithm presented for configuring GANFs is effective and robust.
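As background for the stack-filter class discussed above, the following is a minimal sketch of its best-known member, the sliding-window median filter, which removes the impulsive (non-Gaussian) noise that a linear filter would smear. The function name and test signal are illustrative, not taken from the dissertation:

```python
import numpy as np

def median_filter(signal, window=3):
    """Sliding-window median filter, the best-known member of the stack-filter class."""
    half = window // 2
    padded = np.pad(signal, half, mode="edge")  # replicate edge samples
    return np.array([np.median(padded[i:i + window]) for i in range(len(signal))])

# A single impulse (non-Gaussian noise) is removed while the step edge survives.
x = np.array([0.0, 0.0, 0.0, 9.0, 0.0, 1.0, 1.0, 1.0])  # impulse at index 3
y = median_filter(x, window=3)
```

A linear moving average of the same width would instead spread the impulse over its neighbors; the median discards it outright, which is the robustness property the abstract attributes to stack filters.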

    Recent advances on filtering and control for nonlinear stochastic complex systems with incomplete information: A survey

    This article is provided by the Brunel Open Access Publishing Fund. Copyright @ 2012 Hindawi Publishing. Some recent advances on the filtering and control problems for nonlinear stochastic complex systems with incomplete information are surveyed. The incomplete information under consideration mainly includes missing measurements, randomly varying sensor delays, signal quantization, sensor saturations, and signal sampling. With such incomplete information, the developments on various filtering and control issues are reviewed in detail. In particular, the nonlinear stochastic complex systems addressed are comprehensive enough to include conventional nonlinear stochastic systems, different kinds of complex networks, and a large class of sensor networks. The corresponding filtering and control technologies for such nonlinear stochastic complex systems are then discussed. Subsequently, some of the latest results on the filtering and control problems for complex systems with incomplete information are given. Finally, conclusions are drawn and several possible future research directions are pointed out. This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61134009, 61104125, 61028008, 61174136, 60974030, and 61074129; the Qing Lan Project of Jiangsu Province of China; the Project sponsored by SRF for ROCS of SEM of China; the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01; the Royal Society of the UK; and the Alexander von Humboldt Foundation of Germany.

    Training Methods for Shunting Inhibitory Artificial Neural Networks

    This project investigates a new class of high-order neural networks called shunting inhibitory artificial neural networks (SIANNs) and their training methods. SIANNs are biologically inspired neural networks whose dynamics are governed by a set of coupled nonlinear differential equations. The interactions among neurons are mediated via a nonlinear mechanism called shunting inhibition, which allows the neurons to operate as adaptive nonlinear filters. The project's main objective is to devise training methods, based on error-backpropagation-type algorithms, that allow SIANNs to be trained to perform feature extraction for classification and nonlinear regression tasks. The training algorithms developed will simplify the task of designing complex, powerful neural networks for applications in pattern recognition, image processing, signal processing, machine vision, and control. The five training methods adapted in this project for SIANNs are error backpropagation based on gradient descent (GD), gradient descent with variable learning rate (GDV), gradient descent with momentum (GDM), gradient descent with direct solution step (GDD), and the APOLEX algorithm. SIANNs and these training methods are implemented in MATLAB. Testing on several benchmarks, including the parity problems, classification of 2-D patterns, and function approximation, shows that SIANNs trained using these methods yield performance comparable to or better than multilayer perceptrons (MLPs).
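To make the training methods concrete, here is a toy sketch of one of the five listed schemes, gradient descent with momentum (GDM), applied to a plain least-squares fit. This is an illustrative example only, not the SIANN backpropagation equations; all data and hyperparameters are invented:

```python
import numpy as np

# Toy regression problem: recover true_w from noiseless linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w

w = np.zeros(2)
velocity = np.zeros(2)
lr, beta = 0.1, 0.9                          # learning rate and momentum coefficient
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the mean squared error
    velocity = beta * velocity - lr * grad   # momentum accumulates past gradients
    w += velocity                            # GDM update step
```

The momentum term damps oscillations across steep directions of the error surface and accelerates progress along shallow ones, which is why GDM typically converges faster than plain GD at the same learning rate.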

    Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems

    A new class of non-homogeneous state-affine systems is introduced for use in reservoir computing. Sufficient conditions are identified that guarantee, first, that the associated reservoir computers with linear readouts are causal, time-invariant, and satisfy the fading memory property and, second, that a subset of this class is universal in the category of fading memory filters with stochastic, almost surely uniformly bounded inputs. This means that any discrete-time filter that satisfies the fading memory property with random inputs of that type can be uniformly approximated by elements of the non-homogeneous state-affine family.
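A small sketch of the idea, assuming the generic state-affine form x_t = A(z_t) x_{t-1} + b(z_t) with A and b polynomial in the input and a linear readout; the matrices, sizes, contraction constants, and the target filter below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Non-homogeneous state-affine system (SAS) with degree-1 polynomial coefficients:
# A(z) = A0 + z*A1, b(z) = b0 + z*b1, readout y_t = w . x_t.
rng = np.random.default_rng(1)
n = 20                                                   # reservoir dimension

A0 = rng.normal(size=(n, n)); A0 *= 0.4 / np.linalg.norm(A0, 2)
A1 = rng.normal(size=(n, n)); A1 *= 0.3 / np.linalg.norm(A1, 2)
b0, b1 = rng.normal(size=n), rng.normal(size=n)

def run_sas(z, x0):
    """Drive the state-affine system with input sequence z from initial state x0."""
    x, states = x0.copy(), []
    for zt in z:
        x = (A0 + zt * A1) @ x + (b0 + zt * b1)
        states.append(x.copy())
    return np.array(states)

# For |z| <= 1 the spectral norm of A(z) is at most 0.4 + 0.3 < 1, so the system
# is contractive: states forget the initial condition (the fading memory ingredient).
z = rng.uniform(-1.0, 1.0, size=300)
S_a = run_sas(z, np.zeros(n))
S_b = run_sas(z, rng.normal(size=n))                     # different initial state

# Linear readout fitted by least squares to a short-memory nonlinear target filter.
target = np.roll(z, 1) * np.roll(z, 2)
w, *_ = np.linalg.lstsq(S_a[10:], target[10:], rcond=None)
mse = np.mean((S_a[10:] @ w - target[10:]) ** 2)
```

The contraction bound is what makes the filter induced by the reservoir causal, time-invariant, and fading-memory for bounded inputs; the universality result of the abstract then says families like this can approximate any such filter, with all the nonlinearity carried by the state and only a linear map trained.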

    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility of addressing new questions and extracting higher-dimensional stimuli from the recordings. Modeling neural spike trains as point processes, this task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but they exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a given performance scales exponentially with the number of observable dimensions. Here, we first briefly review the theory of filtering with point-process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle-filtering approaches: similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that becomes stronger as the number of observable dimensions increases. Given the success of unweighted particle-filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similarly favorable scaling as the number of dimensions grows. Further, we derive learning rules for the parameters of the sNPF from a maximum likelihood approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
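To illustrate the weighted filtering baseline the abstract criticizes, here is a minimal bootstrap particle filter on a 1-D linear-Gaussian toy model that tracks the effective sample size N_eff = 1 / sum(w_i^2), the quantity whose decay drives the curse of dimensionality. The model and all parameters are invented for illustration; this is not the sNPF and uses Gaussian rather than point-process observations:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 50, 500                           # time steps, number of particles

# Simulate a 1-D linear-Gaussian state-space model:
# x_t = 0.9 x_{t-1} + process noise,  y_t = x_t + observation noise.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(scale=0.5)
y_obs = x_true + rng.normal(scale=0.3, size=T)

particles = rng.normal(size=N)
estimates, n_eff = [], []
for t in range(T):
    particles = 0.9 * particles + rng.normal(scale=0.5, size=N)  # propagate from prior
    logw = -0.5 * ((y_obs[t] - particles) / 0.3) ** 2            # log importance weights
    w = np.exp(logw - logw.max())                                # stabilized exponentiation
    w /= w.sum()
    n_eff.append(1.0 / np.sum(w ** 2))                           # effective sample size
    estimates.append(np.sum(w * particles))                      # weighted posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]            # multinomial resampling
estimates, n_eff = np.array(estimates), np.array(n_eff)
```

N_eff stays between 1 and N; in one dimension it remains a healthy fraction of N, but as the observation dimension grows the weights concentrate on ever fewer particles and N_eff collapses, which is the degeneracy the unweighted sNPF is designed to avoid.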