    Adaptive processing with signal contaminated training samples

    We consider the adaptive beamforming or adaptive detection problem in the case of signal-contaminated training samples, i.e., when the latter may contain a signal-like component. Since this results in a significant degradation of the signal-to-interference-and-noise ratio at the output of the adaptive filter, we investigate a scheme to jointly detect the contaminated samples and subsequently take this information into account when estimating the disturbance covariance matrix. Towards this end, a Bayesian model is proposed, parameterized by binary variables indicating the presence or absence of signal-like components in the training samples. These variables, together with the signal amplitudes and the disturbance covariance matrix, are jointly estimated using a minimum mean-square error (MMSE) approach. Two strategies are proposed to implement the MMSE estimator. First, a stochastic Markov chain Monte Carlo method based on Gibbs sampling is presented. Then, a computationally more efficient scheme based on variational Bayesian analysis is proposed. Numerical simulations attest to the improvement achieved by these methods compared to conventional methods such as diagonal loading. A successful application to real radar data is also presented.
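The conventional baseline mentioned above, diagonal loading, can be sketched as follows (a minimal illustration; the array size, loading factor, and steering vector are placeholder choices, not parameters from the paper):

```python
import numpy as np

def diagonal_loading_beamformer(snapshots, steering, loading_factor):
    """Adaptive beamformer weights with diagonal loading.

    snapshots: (N, K) array of K training snapshots from an N-element array.
    steering: (N,) presumed steering vector of the signal of interest.
    loading_factor: scalar added to the covariance diagonal; this is the
        conventional regularization against few or contaminated samples.
    """
    N, K = snapshots.shape
    R = snapshots @ snapshots.conj().T / K        # sample covariance matrix
    R_loaded = R + loading_factor * np.eye(N)     # diagonal loading
    w = np.linalg.solve(R_loaded, steering)       # proportional to R^-1 a
    return w / (steering.conj() @ w)              # distortionless: a^H w = 1

# Toy usage on noise-only snapshots with a broadside steering vector.
rng = np.random.default_rng(0)
N, K = 8, 32
X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
a = np.ones(N, dtype=complex)
w = diagonal_loading_beamformer(X, a, loading_factor=1.0)
```

The loading term keeps the covariance estimate well conditioned, at the cost of some interference-rejection performance; the scheme in the abstract instead tries to detect and model the contamination explicitly.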

    Adaptive digital signal processing Java teaching tool

    This publication presents a Java program for teaching the rudiments of adaptive digital signal processing (DSP) algorithms and techniques. Adaptive DSP is one of the most important areas of signal processing, and provides the core algorithmic means to implement applications ranging from mobile telephone speech coding, to noise cancellation, to communication channel equalization. Over the last 30 years, adaptive digital signal processing has progressed from a strictly graduate-level advanced class in signal processing theory to a topic that is part of the core curriculum for many undergraduate signal processing classes. The Java applet presented in this publication has been devised for students to use in combination with lecture notes and/or one of the recognised textbooks, so that they can quickly and conveniently simulate algorithms such as the LMS (least mean squares) and RLS (recursive least squares) in a variety of applications, without having to write programs or scripts or use any special-purpose software. Because it is written in Java, the applet can be run from any browser, even over a low-bandwidth modem connection.
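As a flavor of what such a teaching tool simulates, here is a minimal LMS adaptive filter in a system-identification setup (a Python sketch for illustration only; the applet itself is written in Java, and the signals and step size below are arbitrary choices):

```python
import numpy as np

def lms_filter(x, d, num_taps, mu):
    """Least mean squares (LMS) adaptive FIR filter.

    x: input signal, d: desired signal, mu: step size.
    Returns the filter output y and the error signal e = d - y.
    """
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # newest sample first
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w = w + mu * e[n] * u                 # LMS coefficient update
    return y, e

# Toy system identification: the filter learns an unknown 4-tap FIR system.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])           # "unknown" system
d = np.convolve(x, h)[:len(x)]
y, e = lms_filter(x, d, num_taps=4, mu=0.01)
# The error shrinks as the coefficients converge towards h.
```

This is the kind of experiment a student would run interactively in the applet, varying the step size and filter length to see their effect on convergence.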

    NONLINEAR ADAPTIVE SIGNAL PROCESSING

    Nonlinear techniques for signal processing and recognition promise systems that are superior to linear systems in a number of ways, such as better performance in terms of accuracy, fault tolerance, resolution, highly parallel architectures, and closer similarity to biological intelligent systems. The nonlinear techniques proposed take the form of multistage neural networks in which each stage can be a particular neural network and all the stages operate in parallel. The specific approach focused upon is the parallel, self-organizing, hierarchical neural network (PSHNN). A new type of PSHNN is discussed in which the outputs are allowed to be continuous-valued. The performance of the resulting networks is tested in problems of prediction of speech and of chaotic time series. Three types of networks, in which the stages are learned by the delta rule, sequential least-squares, and the backpropagation (BP) algorithm, respectively, are described. In all cases studied, the new networks achieve better performance than linear prediction. This is shown both theoretically and experimentally. A revised BP algorithm is discussed for learning input nonlinearities. The advantage of the revised BP algorithm is that the PSHNN with revised BP stages can be extended to use sequential least-squares (SLS) or the least mean absolute value (LMAV) rule in the last stage. A forward-backward training algorithm for parallel, self-organizing hierarchical neural networks is described. Using linear algebra, it is shown that the forward-backward training of an n-stage PSHNN until convergence is equivalent to the pseudo-inverse solution for a single, total network designed in the least-squares sense, with the total input vector consisting of the actual input vector and its additional nonlinear transformations. These results remain valid when a single long input vector is partitioned into smaller vectors.
The advantages achieved include small modules for easy and fast learning, parallel implementation of small modules during testing, faster convergence, better numerical error reduction, and suitability for learning input nonlinear transformations by the backpropagation algorithm. Better performance, in terms of a deeper minimum of the error function and a faster convergence rate, is achieved when a single BP network is replaced by a PSHNN of equal complexity in which each stage is a BP network of smaller complexity than the single BP network.
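The stated equivalence, that forward-backward training of a staged network until convergence matches one least-squares solution over the input vector augmented with its nonlinear transformations, can be illustrated with a toy example (a sketch under simplifying assumptions; the data, the squared-input nonlinearity, and the dimensions are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data with a linear part plus a nonlinearity in the first component.
X = rng.standard_normal((200, 3))
y = X @ np.array([0.7, -0.2, 0.1]) + 0.3 * X[:, 0] ** 2

# Linear least-squares alone cannot capture the quadratic term.
w_lin, *_ = np.linalg.lstsq(X, y, rcond=None)
err_linear = np.mean((y - X @ w_lin) ** 2)

# Augment the input vector with a nonlinear transformation (here, squares)
# and solve a single least-squares problem over the total input vector --
# per the abstract, equivalent to forward-backward training of the staged
# network until convergence.
X_aug = np.hstack([X, X ** 2])
w_aug, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
err_aug = np.mean((y - X_aug @ w_aug) ** 2)
# err_aug is essentially zero here, while err_linear is not.
```

The point of the staged architecture is that this augmented solution can be reached by training small modules in parallel rather than by forming one large least-squares problem directly.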

    Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies

    The goal of this paper is to propose novel strategies for adaptive learning of signals defined over graphs, which are observed over a (randomly time-varying) subset of vertices. We recast two classical adaptive algorithms in the graph signal processing framework, namely, the least mean squares (LMS) and the recursive least squares (RLS) adaptive estimation strategies. For both methods, a detailed mean-square analysis illustrates the effect of random sampling on the adaptive reconstruction capability and the steady-state performance. Then, several probabilistic sampling strategies are proposed to design the sampling probability at each node in the graph, with the aim of optimizing the tradeoff between steady-state performance, graph sampling rate, and convergence rate of the adaptive algorithms. Finally, a distributed RLS strategy is derived and is shown to be convergent to its centralized counterpart. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs. Comment: Submitted to IEEE Transactions on Signal Processing, September 201
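A minimal sketch of an LMS-style update for graph signals under random vertex sampling follows (assumptions: a path-graph Laplacian standing in for an arbitrary graph, a fixed bandwidth, and arbitrary step size and sampling probability, none taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N, F = 20, 5                              # vertices, signal bandwidth

# Path-graph Laplacian; its eigenvectors act as the graph Fourier basis.
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1
_, U = np.linalg.eigh(L)

B = U[:, :F] @ U[:, :F].T                 # projector onto the first F modes
x_true = B @ rng.standard_normal(N)       # bandlimited signal to recover

x_hat = np.zeros(N)
mu = 1.0                                  # step size (arbitrary choice)
for _ in range(400):
    mask = rng.random(N) < 0.6            # randomly sampled vertices
    noisy = x_true + 0.01 * rng.standard_normal(N)
    # LMS-style update: correct on sampled vertices, then re-bandlimit.
    x_hat = x_hat + mu * B @ (mask * (noisy - x_hat))
# x_hat approaches x_true even though each update sees only a subset.
```

The bandlimiting projection is what lets the unsampled vertices be filled in; the paper's sampling-design problem is choosing the per-node sampling probabilities to balance convergence rate against steady-state error.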

    Study of adaptive signal processing

    An adaptive filter is a digital filter that can adjust its coefficients to give the best match to a given desired signal. When an adaptive filter operates in a changing environment, the filter coefficients can adapt in response to changes in the applied input signals. Adaptive filters rely on recursive algorithms to update their coefficients and drive them towards the optimum solution. An everyday example is the telephone system, where impedance mismatches cause echoes that are a significant source of annoyance to users. The adaptive signal processor estimates the echo path and compensates for it: the echo path is viewed as an unknown system with some impulse response, and the adaptive filter must mimic this response. Adaptive filters are generally implemented in the time domain, which works well in most scenarios. In many applications, however, the impulse response becomes long, increasing the complexity of the filter beyond the level at which it can be implemented efficiently in the time domain; acoustic echo cancellation in hands-free telephony is one example. An alternative is to implement the filters in the frequency domain. The Discrete Fourier Transform, computed via the Fast Fourier Transform (FFT), allows signals to be converted from the time domain to the frequency domain efficiently. Despite the efficiency of the FFT, the overhead of converting the signals to the frequency domain does restrict the use of the algorithm. When the impulse response of the unknown system, and hence of the filter, is long enough, this is not an issue, since the computational cost of the conversion is much less than that of the time-domain algorithm.
The actual filtering of the signals requires little computational cost in the frequency domain. Investigating the so-called crossover point, the point at which the frequency-domain implementation becomes more efficient than the time-domain implementation, is important to establish where the frequency-domain approach becomes practical.
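The crossover point can be estimated with a back-of-the-envelope operation count (the cost constants below are rough assumptions, not measurements; real crossover lengths depend on the hardware and FFT implementation):

```python
import math

def time_domain_cost(N):
    """Approximate multiplies per output sample for a direct N-tap FIR."""
    return N

def freq_domain_cost(N):
    """Approximate multiplies per output sample for overlap-save filtering
    with FFT length M = 2N: one FFT, one spectral product, one inverse FFT
    per block of N new output samples (rough radix-2 cost model)."""
    M = 2 * N
    fft_mults = (M / 2) * math.log2(M)
    per_block = 2 * fft_mults + 4 * M     # FFT + IFFT + complex products
    return per_block / N

def crossover_point():
    """Smallest power-of-two filter length at which the frequency-domain
    implementation needs fewer multiplies than the time-domain one."""
    N = 2
    while freq_domain_cost(N) >= time_domain_cost(N):
        N *= 2
    return N
```

Because the time-domain cost grows linearly in N while the frequency-domain cost grows only logarithmically per sample, the frequency-domain implementation always wins for sufficiently long impulse responses.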

    Adaptive DCTNet for Audio Signal Classification

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of an adaptive DCTNet (A-DCTNet) for audio signal feature extraction. The A-DCTNet applies the idea of the constant-Q transform, with the center frequencies of its filterbanks geometrically spaced. The A-DCTNet is adaptive to different acoustic scales, and it can better capture low-frequency acoustic information, to which human auditory perception is sensitive, than features such as Mel-frequency spectral coefficients (MFSC). We use features extracted by the A-DCTNet as input for classifiers. Experimental results show that the A-DCTNet and recurrent neural networks (RNN) achieve state-of-the-art performance in bird song classification rate, and improve artist identification accuracy on music data. They demonstrate A-DCTNet's applicability to signal processing problems. Comment: International Conference of Acoustic and Speech Signal Processing (ICASSP). New Orleans, United States, March, 201
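Geometric spacing of the center frequencies can be sketched as follows (a hypothetical helper; the frequency range and bins-per-octave values are arbitrary illustrations, not parameters from the paper):

```python
import numpy as np

def constant_q_center_freqs(f_min, f_max, bins_per_octave):
    """Geometrically spaced center frequencies for a constant-Q filterbank:
    each bin sits a fixed ratio 2**(1/bins_per_octave) above the last."""
    n_bins = int(np.floor(bins_per_octave * np.log2(f_max / f_min))) + 1
    return f_min * 2.0 ** (np.arange(n_bins) / bins_per_octave)

# 12 bins per octave gives semitone spacing, dense at low frequencies.
freqs = constant_q_center_freqs(32.70, 4000.0, bins_per_octave=12)
```

Unlike the linear spacing of an ordinary DFT filterbank, this places proportionally more bins at low frequencies, which is what lets the A-DCTNet resolve low-frequency content more finely.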