719 research outputs found

    A study on adaptive filtering for noise and echo cancellation.

    Get PDF
    The objective of this thesis is to investigate the adaptive filtering technique in its application to noise and echo cancellation. As a relatively new area in Digital Signal Processing (DSP), adaptive filters have gained a lot of popularity in the past several decades due to the advantages that they can deal with time-varying digital systems and they do not require a priori knowledge of the statistics of the information to be processed. Adaptive filters have been successfully applied in a great many areas such as communications, speech processing, image processing, and noise/echo cancellation. Since Bernard Widrow and his colleagues introduced adaptive filters in the 1960s, many researchers have been working on noise/echo cancellation by using adaptive filters with different algorithms. Among these algorithms, the normalized least mean square (NLMS) algorithm provides an efficient and robust approach, in which the model parameters are obtained on the basis of the mean square error (MSE). The choice of a structure for the adaptive filter also plays an important role in the performance of the algorithm as a whole. For this purpose, two different filter structures, the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter, have been studied. The adaptive processes with the two filter structures and the aforementioned algorithm have been implemented and simulated using Matlab. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .J53. Source: Masters Abstracts International, Volume: 44-01, page: 0472. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
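
    As a rough illustration of the NLMS-adapted FIR canceller the abstract describes, here is a minimal Python/NumPy sketch (the thesis simulations were carried out in Matlab). The filter length, step size mu, regularisation constant eps, and the synthetic primary/reference signals are assumptions made for the example, not values from the thesis.

        import numpy as np

        def nlms_cancel(d, x, num_taps=32, mu=0.5, eps=1e-6):
            """NLMS adaptive noise canceller.

            d : primary input (signal + noise), x : reference noise input.
            Returns the error signal e (the enhanced signal) and the final weights.
            """
            w = np.zeros(num_taps)
            e = np.zeros(len(d))
            for n in range(num_taps, len(d)):
                u = x[n - num_taps:n][::-1]           # most recent reference samples
                y = w @ u                             # filter output (noise estimate)
                e[n] = d[n] - y                       # error = enhanced signal
                w += (mu / (eps + u @ u)) * e[n] * u  # normalized LMS update
            return e, w

        # Illustrative use: a sinusoid buried in noise that also reaches the reference path.
        rng = np.random.default_rng(0)
        n = np.arange(4000)
        clean = np.sin(2 * np.pi * 0.01 * n)
        noise = rng.standard_normal(n.size)
        d = clean + np.convolve(noise, [0.6, 0.3, 0.1], mode="same")  # primary: signal + filtered noise
        enhanced, _ = nlms_cancel(d, noise)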

    On issues of equalization with the decorrelation algorithm : fast converging structures and finite-precision

    Get PDF
    To increase the rate of convergence of the blind, adaptive, decision feedback equalizer based on the decorrelation criterion, structures have been proposed which dramatically increase the complexity of the equalizer. The complexity of an algorithm has a direct bearing on the cost of implementing the algorithm in either hardware or software. In this thesis, more computationally efficient structures, based on the fast transversal filter and lattice algorithms, are proposed for the decorrelation algorithm which maintain the high rate of convergence of the more complex algorithms. Furthermore, the performance of the decorrelation algorithm in a finite-precision environment will be studied and compared to the widely used LMS algorithm
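
    The finite-precision comparison can be pictured with a small sketch in which an ordinary LMS transversal filter stores its weights on a fixed-point grid after every update. The quantiser, word length, and signal model below are illustrative assumptions, not the fixed-point model analysed in the thesis.

        import numpy as np

        def quantize(v, bits=8, full_scale=1.0):
            """Round to a signed fixed-point grid with the given word length."""
            step = full_scale / 2 ** (bits - 1)
            return np.clip(np.round(v / step) * step, -full_scale, full_scale - step)

        def lms_finite_precision(d, x, num_taps=16, mu=0.01, bits=8):
            """LMS with weights stored in finite precision after every update."""
            w = np.zeros(num_taps)
            e = np.zeros(len(d))
            for n in range(num_taps, len(d)):
                u = x[n - num_taps:n][::-1]
                e[n] = d[n] - w @ u
                w = quantize(w + mu * e[n] * u, bits)   # quantize the updated weights
            return e, w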

    Cognitive Radar Detection in Nonstationary Environments and Target Tracking

    Get PDF
    Target detection and tracking are the most fundamental and important problems in a wide variety of defense and civilian radar systems. In recent years, to cope with complex environments and stealthy targets, the concept of cognitive radars has been proposed to integrate intelligent modules into conventional radar systems. To achieve better performance, cognitive radars are designed to sense, learn from, and adapt to environments. In this dissertation, we introduce cognitive radars for target detection in nonstationary environments and cognitive radar networks for target tracking.
    For target detection, many algorithms in the literature assume a stationary environment (clutter). However, in practical scenarios, changes in the nonstationary environment can perturb the parameters of the clutter distribution or even alter the clutter distribution family, which can greatly deteriorate the target detection capability. To avoid such potential performance degradation, cognitive radar systems are envisioned which can rapidly recognize the nonstationarity, accurately learn the new characteristics of the environment, and adaptively update the detector. To achieve this cognition, we propose a unifying framework that integrates three functions: (i) change-point detection of clutter distributions by using a data-driven cumulative sum (CUSUM) algorithm and its extended version, (ii) learning/identification of the clutter distribution by using kernel density estimation (KDE) methods and similarity measures, and (iii) adaptive target detection by automatically modifying the likelihood-ratio test and the corresponding detection threshold. We also conduct extensive numerical experiments to show the merits of the proposed method compared to a nonadaptive case, an adaptive matched filter (AMF) method, and the clairvoyant case.
    For target tracking, with remarkable advances in sensor techniques and deployable platforms, a sensing system has the freedom to select a subset of available radars, plan their trajectories, and transmit designed waveforms. Accordingly, we propose a general framework for single target tracking in cognitive networks of radars, including joint consideration of waveform design, path planning, and radar selection. We formulate the tracking procedure using the theories of dynamic graphical models (DGM) and recursive Bayesian state estimation (RBSE). This procedure includes two iterative steps: (i) solving a combinatorial optimization problem to select the optimal subset of radars, waveforms, and locations for the next tracking instant, and (ii) acquiring the recursive Bayesian state estimation to accurately track the target. Further, we use an illustrative example to introduce a specific scenario in 2-D space. Simulation results based on this scenario demonstrate that the proposed framework can accurately track the target under the management of a network of radars
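
    A bare-bones sketch of the change-point detection step: a classical CUSUM on the log-likelihood ratio between assumed pre- and post-change clutter densities. The dissertation's version is data-driven, so the Rayleigh/Weibull densities and the threshold used here are purely illustrative assumptions.

        import numpy as np
        from scipy.stats import rayleigh, weibull_min

        def cusum_change_point(samples, logpdf_pre, logpdf_post, threshold=10.0):
            """Classic CUSUM: flag a change when the cumulative log-likelihood
            ratio statistic exceeds the threshold. Returns the alarm index or None."""
            g = 0.0
            for k, z in enumerate(samples):
                g = max(0.0, g + logpdf_post(z) - logpdf_pre(z))
                if g > threshold:
                    return k
            return None

        # Illustrative clutter: Rayleigh amplitude that switches to a heavier-tailed Weibull.
        rng = np.random.default_rng(1)
        pre = rayleigh.rvs(scale=1.0, size=500, random_state=rng)
        post = weibull_min.rvs(c=0.8, scale=1.5, size=500, random_state=rng)
        data = np.concatenate([pre, post])
        alarm = cusum_change_point(data,
                                   lambda z: rayleigh.logpdf(z, scale=1.0),
                                   lambda z: weibull_min.logpdf(z, c=0.8, scale=1.5))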

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    Full text link
    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages. 1 of USQCD whitepapers

    Adaptive techniques for signal enhancement in the human electroencephalogram

    Get PDF
    This thesis describes an investigation of adaptive noise cancelling applied to human brain evoked potentials (EPs), with particular emphasis on visually evoked responses. The chief morphological features and signal properties of EPs are described. Consideration is given to the amplitude and spectral properties of the underlying spontaneous electroencephalogram, and the importance of noise reduction techniques in EP studies is emphasised. A number of methods of enhancing EP waveforms are reviewed in the light of the known limitations of coherent signal averaging. These are shown to be generally inadequate for enhancing individual EP responses. The theory of adaptive filters is reviewed with particular reference to adaptive transversal filters using the Widrow-Hoff algorithm. The theory of adaptive noise cancelling using correlated reference sources is presented, and new work is described which relates canceller performance to the magnitude-squared coherence function of the input signals. A novel filter structure, the gated adaptive filter, is presented and shown to yield improved cancellation without signal distortion when applied to repetitive transient signals in stationary noise under the condition of fast adaptation. The signal processing software available is shown to be inadequate, and a comprehensive Fortran program developed for use on a PDP-11 computer is described. The properties of human visual evoked potentials and the EEG are investigated in two normal adults using a montage of 7 occipital electrodes. Signal enhancement of EPs is shown to be possible by adaptive noise cancelling, and improvements in signal-to-noise ratio in the range 2-10 dB are predicted. A discussion of filter strategies is presented, and a detailed investigation of adaptive noise cancelling is performed using a range of typical EP data. Assessment of the results confirms the proposal that a substantial improvement in single EP response recognition is achieved by this technique
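
    The thesis relates canceller performance to the magnitude-squared coherence between the primary and reference channels; the sketch below only shows how that coherence can be estimated numerically. The synthetic two-channel data, sampling rate, and Welch parameters are illustrative assumptions, not recordings or settings from the thesis.

        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(2)
        fs = 256.0                                   # assumed EEG sampling rate, Hz
        t = np.arange(0, 60, 1 / fs)
        common = rng.standard_normal(t.size)         # interference seen by both channels
        primary = np.sin(2 * np.pi * 10 * t) + common + 0.3 * rng.standard_normal(t.size)
        reference = 0.8 * common + 0.3 * rng.standard_normal(t.size)

        # Magnitude-squared coherence: values near 1 mark frequencies at which a
        # reference-based canceller can remove most of the interference power.
        f, msc = coherence(primary, reference, fs=fs, nperseg=512)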

    The application of genetic algorithms to the adaptation of IIR filters

    Get PDF
    The adaptation of an IIR filter is a very difficult problem due to its non-quadratic performance surface and potential instability. Conventional adaptive IIR algorithms suffer from potential instability problems and a high cost for stability monitoring. Therefore, there is much interest in adaptive IIR filters based on alternative algorithms. Genetic algorithms are a family of search algorithms based on natural selection and genetics. They have been successfully used in many different areas. Genetic algorithms applied to the adaptation of IIR filters are studied in this thesis, and the results show that the genetic algorithm approach has a number of advantages over conventional gradient algorithms, particularly for the adaptation of high order adaptive IIR filters, IIR filters with poles close to the unit circle, and IIR filters with multi-modal error surfaces. Conventional gradient algorithms have difficulty solving these problems. Coefficient results are presented for various orders of IIR filters in this thesis. In the computer simulations presented in this thesis, the direct, cascade, parallel and lattice form IIR filter structures have been used and compared. The lattice form IIR filter structure shows its superiority over the cascade and parallel form IIR filter structures in terms of its mean square error convergence performance
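
    A minimal sketch of genetic adaptation of a second-order IIR filter in a system-identification setting, in the spirit of the approach described above. The population size, selection and mutation rules, pole-radius stability check, and the use of scipy.signal.lfilter are illustrative assumptions, not the configurations evaluated in the thesis.

        import numpy as np
        from scipy.signal import lfilter

        rng = np.random.default_rng(3)

        # Unknown second-order IIR system to be identified (illustrative).
        b_true, a_true = [0.5, -0.2], [1.0, -0.8, 0.15]
        x = rng.standard_normal(2000)
        d = lfilter(b_true, a_true, x)

        def fitness(c):
            """Negative MSE; unstable candidates (poles outside the unit circle) are penalized."""
            b, a = [c[0], c[1]], [1.0, c[2], c[3]]
            if np.any(np.abs(np.roots(a)) >= 1.0):
                return -np.inf
            y = lfilter(b, a, x)
            return -np.mean((d - y) ** 2)

        pop = rng.uniform(-0.9, 0.9, size=(40, 4))          # population of coefficient vectors
        for _ in range(200):
            scores = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(scores)[-20:]]          # truncation selection
            mates = parents[rng.integers(0, 20, size=(20, 2))]
            mask = rng.random((20, 4)) < 0.5                 # uniform crossover
            children = np.where(mask, mates[:, 0], mates[:, 1])
            children += 0.02 * rng.standard_normal(children.shape)   # Gaussian mutation
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(c) for c in pop])]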

    Hyper-parameter tuning and feature extraction for asynchronous action detection from sub-thalamic nucleus local field potentials

    Get PDF
    Introduction: Decoding brain states from subcortical local field potentials (LFPs) indicative of activities such as voluntary movement, tremor, or sleep stages, holds significant potential in treating neurodegenerative disorders and offers new paradigms in brain-computer interface (BCI). Identified states can serve as control signals in coupled human-machine systems, e.g., to regulate deep brain stimulation (DBS) therapy or control prosthetic limbs. However, the behavior, performance, and efficiency of LFP decoders depend on an array of design and calibration settings encapsulated into a single set of hyper-parameters. Although methods exist to tune hyper-parameters automatically, decoders are typically found through exhaustive trial-and-error, manual search, and intuitive experience.
    Methods: This study introduces a Bayesian optimization (BO) approach to hyper-parameter tuning, applicable through feature extraction, channel selection, classification, and stage transition stages of the entire decoding pipeline. The optimization method is compared with five real-time feature extraction methods paired with four classifiers to decode voluntary movement asynchronously based on LFPs recorded with DBS electrodes implanted in the subthalamic nucleus of Parkinson’s disease patients.
    Results: Detection performance, measured as the geometric mean between classifier specificity and sensitivity, is automatically optimized. BO demonstrates improved decoding performance from initial parameter setting across all methods. The best decoders achieve a maximum performance of 0.74 ± 0.06 (mean ± SD across all participants) sensitivity-specificity geometric mean. In addition, parameter relevance is determined using the BO surrogate models.
    Discussion: Hyper-parameters tend to be sub-optimally fixed across different users rather than individually adjusted or even specifically set for a decoding task. The relevance of each parameter to the optimization problem and comparisons between algorithms can also be difficult to track with the evolution of the decoding problem. We believe that the proposed decoding pipeline and BO approach is a promising solution to such challenges surrounding hyper-parameter tuning and that the study’s findings can inform future design iterations of neural decoders for adaptive DBS and BCI
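
    A compact sketch of the general idea of Bayesian-optimised hyper-parameter tuning against a sensitivity/specificity geometric-mean objective. The synthetic data, the SVC classifier and its two hyper-parameters, and the use of scikit-optimize's gp_minimize are stand-in assumptions, not the study's LFP decoding pipeline.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import confusion_matrix
        from sklearn.svm import SVC
        from skopt import gp_minimize
        from skopt.space import Real

        # Stand-in data: in the study these would be features extracted from STN LFPs.
        X, y = make_classification(n_samples=400, n_features=20, weights=[0.7, 0.3], random_state=0)

        def neg_gmean(params):
            """Objective: negative geometric mean of sensitivity and specificity."""
            log_c, log_gamma = params
            clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
            pred = cross_val_predict(clf, X, y, cv=5)
            tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
            sens, spec = tp / (tp + fn), tn / (tn + fp)
            return -np.sqrt(sens * spec)

        space = [Real(-2, 3, name="log10_C"), Real(-4, 0, name="log10_gamma")]
        result = gp_minimize(neg_gmean, space, n_calls=30, random_state=0)
        best_gmean = -result.fun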

    Convergence of adaptive morphological filters in the context of Markov chains

    Get PDF
    A typical parameterized r-opening *r is a filter defined as a union of openings by a collection of compact, convex structuring elements, each of which is governed by a parameter vector r. It reduces to a single parameter r-opening filter by a set of structuring elements when r is a scalar sizing parameter. The parameter vector is adjusted by a set of adaptation rules according to whether the reconstruction Ar derived from r correctly or incorrectly passes the signal and noise grains sampled from the image. Applied to the signal-union-noise model, the optimization problem is to find the vector r that minimizes the Mean-Absolute-Error between the filtered and ideal image processes. The adaptive r-opening filter fits into the framework of Markov processes, the adaptive parameter being the state of the process. For a single parameter r-opening filter, we proved that there exists a stationary distribution governing the parameter in the steady state, and convergence is characterized in terms of the steady-state distribution. Key filter properties such as parameter mean, parameter variance, and expected error in the steady state are characterized via the stationary distribution. Steady-state behavior is compared to the optimal solution for the uniform model, for which it is possible to derive a closed-form solution for the optimal filter. We also developed the Markov adaptation system for multiparameter opening filters and provided numerical solutions to some special cases. For multiparameter r-opening filters, various adaptive models derived from various assumptions on the form of the filter have been studied. Although the state-probability increment equations can be derived from the appropriate Chapman-Kolmogorov equations, the closed-form representation of steady-state distributions is mathematically problematic due to the support geometry of the boundary states and their transitions. Therefore, numerical methods are employed to approximate the steady-state probability distributions. The technique developed for conventional opening filters is also applied to bandpass opening filters. In the present thesis study, the concept of signal and noise pass sets plays a central role throughout the adaptive filter analysis. The pass set reduces to the granulometric measure (or {&r}-measure) of the signal and noise grain. Optimization and adaptation are characterized in terms of the distribution of the granulometric measures for single parameter filters, or in terms of the multivariate distribution of the signal and noise pass sets. By introducing these concepts, this thesis study also provides some optimal opening filter error equations. It has been shown in the case of the uniform distribution of a single sizing parameter that there is strong agreement between the adaptive filter and the optimal filter based on analytic error minimization. This agreement has also been demonstrated for various r-opening filters. Furthermore, the probabilistic interpretation has a close connection to traditional linear adaptive filter theory. The method has been applied to the classical grain separation (clutter removal) problem. *See content for correct numerical representation
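
    The single-parameter case can be pictured as a Markov chain on an integer sizing parameter r, with the empirical state occupancy approximating the stationary distribution discussed above. The grain-size distributions, the reduction of the opening to a comparison against the granulometric measure, and the step-up/step-down adaptation rule below are illustrative assumptions, not the thesis's exact model.

        import numpy as np

        rng = np.random.default_rng(4)

        # Granulometric measures (sizes) of signal and noise grains; the opening with
        # sizing parameter r is taken to pass a grain iff its measure is at least r.
        signal_size = lambda: rng.integers(6, 12)    # signal grains tend to be large
        noise_size = lambda: rng.integers(1, 8)      # noise grains tend to be small

        r, r_max = 5, 15
        visits = np.zeros(r_max + 1)                 # empirical state occupancy of the chain
        for _ in range(100_000):
            if rng.random() < 0.5:
                # Sampled a signal grain: if the filter erases it, shrink r.
                if signal_size() < r:
                    r = max(r - 1, 0)
            else:
                # Sampled a noise grain: if the filter passes it, grow r.
                if noise_size() >= r:
                    r = min(r + 1, r_max)
            visits[r] += 1

        stationary = visits / visits.sum()           # estimate of the steady-state distribution
        mean_r = np.dot(np.arange(r_max + 1), stationary)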

    Evolving Optimal IIR and Adaptive Filters

    Get PDF
    In this thesis, current digital filter design techniques are critically reviewed and problems associated with computational cost, complexity, frequency response and speed of convergence identified. Based on this, a globally optimal, fine-tuned and efficient evolutionary hybrid technique has been developed to automate and optimise infinite impulse response (IIR) and adaptive filter design. The proposed hybrid design approach employs an evolutionary algorithm (EA) as a global search tool and a least mean square (LMS) algorithm, whenever appropriate, as a fine-tuner. This permits optimal and real-time tracking of time varying changes in nonstationary environments as widely encountered in telecommunications. In the development, various improvements on existing algorithms are made, including those on components of EAs, the LMS algorithm and the filter structures. The aims are to be able to evolve direct form IIR structures using simple stability monitoring techniques, to improve local fine-tuning performance and to avoid premature convergence. To evolve the complex phenotype chromosomes that are needed by complex IIR filters, a novel method of crossover operation is developed. This is a variation of the standard uniform crossover in which the split points are considered to combine uniquely as indivisible floating-point complex valued genes. The split-point crossover operation produces more new members than the standard crossover operation, and hence provides a faster rate of convergence and avoids premature convergence. The EAs have been particularly designed for small population sizes and, to reduce premature convergence, a new operator is designed to introduce new members into the population during evolution. Two techniques are investigated in the design of linear adaptive IIR digital filters, namely, the pole design method and the coefficient design method. The pole design method provides filter stability throughout the genetic search without requiring a variety of stability monitoring techniques. The coefficient design method uses simple stability guaranteeing techniques, which also improve the rate of convergence of the EAs. With the hybrid technique, complex-coefficient filters have been designed successfully and globally optimal and adaptive filters have been achieved. The developed methodologies and designs are verified using higher order complex IIR systems and, for adaptation, inverse system modelling that is synonymous with channel equalising filters operating in multipath environments. Here adaptive complex parameters make it possible to equalise amplitude and phase distortions of the received signals. Various stability-ensuring techniques are investigated extensively and their convergence performances are compared with the proposed method. The proposed hybrid, global and fine design technique is applied to solve adaptive channel equalisation and noise cancellation problems commonly existing in telecommunications
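
    A small sketch in the spirit of the split-point crossover described above, in which each complex-valued coefficient is treated as an indivisible gene so that the real and imaginary parts of a coefficient are never split between children. The chromosome length and values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        def complex_gene_crossover(parent_a, parent_b):
            """Uniform crossover in which each complex coefficient is swapped whole."""
            mask = rng.random(parent_a.shape) < 0.5
            child_1 = np.where(mask, parent_a, parent_b)
            child_2 = np.where(mask, parent_b, parent_a)
            return child_1, child_2

        # Two chromosomes of complex IIR coefficients (illustrative values).
        a = rng.standard_normal(6) + 1j * rng.standard_normal(6)
        b = rng.standard_normal(6) + 1j * rng.standard_normal(6)
        c1, c2 = complex_gene_crossover(a, b)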