    Analysis and Evaluation of the Family of Sign Adaptive Algorithms

    In this thesis, four novel sign adaptive algorithms proposed by the author are analyzed and evaluated for floating-point arithmetic operations: Sign Regressor Least Mean Fourth (SRLMF), Sign Regressor Least Mean Mixed-Norm (SRLMMN), Normalized Sign Regressor Least Mean Fourth (NSRLMF), and Normalized Sign Regressor Least Mean Mixed-Norm (NSRLMMN). The performance of the latter three algorithms is analyzed and evaluated for real-valued data only, while the performance of the SRLMF algorithm is analyzed and evaluated for both real- and complex-valued data. Additionally, four sign adaptive algorithms proposed by other researchers are analyzed and evaluated for floating-point arithmetic operations: Sign Regressor Least Mean Square (SRLMS), Sign-Sign Least Mean Square (SSLMS), Normalized Sign-Error Least Mean Square (NSLMS), and Normalized Sign Regressor Least Mean Square (NSRLMS). The performance of the latter three algorithms is analyzed and evaluated for both real- and complex-valued data, while the performance of the SRLMS algorithm is analyzed and evaluated for complex-valued data only. The framework employed in this thesis relies on the energy conservation approach, which is applied uniformly to evaluate the performance of all eight sign adaptive algorithms; in other words, energy conservation is the common theme that runs throughout the treatment. Selected results for the four novel algorithms (SRLMF, SRLMMN, NSRLMF, and NSRLMMN) are as follows. The convergence performance of the SRLMF and SRLMMN algorithms for real-valued data is shown to be similar to that of the Least Mean Fourth (LMF) and Least Mean Mixed-Norm (LMMN) algorithms, respectively. The NSRLMF and NSRLMMN algorithms, by contrast, exhibit a compromised convergence performance for real-valued data compared to the Normalized Least Mean Fourth (NLMF) and Normalized Least Mean Mixed-Norm (NLMMN) algorithms, respectively. Some misconceptions among biomedical signal processing researchers concerning the implementation of adaptive noise cancelers using the Sign-Error Least Mean Fourth (SLMF), Sign-Sign Least Mean Fourth (SSLMF), and their variant algorithms are also dispelled. Finally, three of the novel algorithms (SRLMF, SRLMMN, and NSRLMF) have been successfully employed by the author and other researchers in applications ranging from power quality improvement in distribution systems to the removal of multiple artifacts from physiological signals such as the ElectroCardioGram (ECG) and the ElectroEncephaloGram (EEG).
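
    For reference, a minimal sketch of the sign-regressor updates named above, assuming the standard LMF error term (e cubed) and the standard delta-weighted LMMN mix of the LMS and LMF terms, with the sign function applied to the input regressor; the step size mu, mixing parameter delta, and the toy system are illustrative, not values from the thesis.

```python
import numpy as np

def srlmf_update(w, x, d, mu):
    """One SRLMF step: LMF error term e**3, sign() applied to the regressor."""
    e = d - w @ x
    return w + mu * np.sign(x) * e**3, e

def srlmmn_update(w, x, d, mu, delta):
    """One SRLMMN step: delta-weighted mix of the LMS (e) and LMF (e**3) terms."""
    e = d - w @ x
    return w + mu * np.sign(x) * (delta * e + (1 - delta) * e**3), e

# Illustrative system-identification loop (hypothetical data)
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])              # unknown system (made up)
w = np.zeros(3)
for _ in range(5000):
    x = rng.normal(size=3)
    d = h @ x + 0.01 * rng.normal()         # noisy desired signal
    w, e = srlmf_update(w, x, d, mu=0.005)
```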

    Effect of Input Correlation on (Normalized) Adaptive Filters

    Investigations on efficient adaptation algorithms

    Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent University, 1995. Thesis (Master's), Bilkent University, 1995. Includes bibliographical references (leaves 71-75). Efficient adaptation algorithms intended to improve the performance of the LMS and RLS algorithms are introduced. It is shown that nonlinear transformation of the input and desired signals by a soft-limiter improves the convergence speed of the LMS algorithm at no cost, with only a small bias in the optimal filter coefficients. The new algorithm can also be used to filter α-stable non-Gaussian processes, for which conventional adaptive algorithms are useless. In a second approach, a prewhitening filter is used to increase the convergence speed of the LMS algorithm. It is shown that prewhitening does not change the relation between the input and the desired signals, provided that the relation is a linear one; a low-order adaptive prewhitening filter can provide a significant speed-up in convergence. Finally, adaptive filtering algorithms running on coarsely quantized signals are proposed to decrease the number of multiplications in the LMS and RLS algorithms. Although they require significantly fewer computations, their performances are comparable to those of the conventional LMS and RLS algorithms. Belge, Murat. M.S.
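
    A minimal sketch of the soft-limiter idea, assuming a tanh-style nonlinearity (the abstract does not specify the exact limiter); both the input and the desired signals pass through the same limiter before a plain LMS loop, which tames heavy-tailed samples that would otherwise stall adaptation.

```python
import numpy as np

def softlimit(s, a=1.0):
    # Soft-limiter nonlinearity; tanh is one common choice (illustrative).
    return np.tanh(a * s)

def lms_softlimited(x, d, n_taps, mu):
    """LMS run on soft-limited input and desired signals.

    Limiting both signals suppresses heavy-tailed (e.g. alpha-stable)
    outliers while preserving a linear input/desired relation up to a
    small bias in the optimal coefficients.
    """
    w = np.zeros(n_taps)
    xs, ds = softlimit(x), softlimit(d)
    for n in range(n_taps - 1, len(xs)):
        u = xs[n - n_taps + 1:n + 1][::-1]   # tapped delay line
        e = ds[n] - w @ u
        w += mu * e * u
    return w
```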

    Adaptive filtering algorithms for quaternion-valued signals

    Advances in sensor technology have made possible the recording of three- and four-dimensional signals, which afford a better representation of our actual three-dimensional world than the "flat view" one- and two-dimensional approaches. Although it is straightforward to model such signals as real-valued vectors, many applications require unambiguous modeling of orientation and rotation, where the division algebra of quaternions provides crucial advantages over real-valued vector approaches. The focus of this thesis is on the use of recent advances in quaternion-valued signal processing, such as quaternion augmented statistics, widely-linear modeling, and the HR-calculus, in order to develop practical adaptive signal processing algorithms in the quaternion domain which deal with the notions of phase and frequency in a compact and physically meaningful way. To this end, first a real-time tracker of quaternion impropriety is developed, which allows for choosing between strictly linear and widely-linear quaternion-valued signal processing algorithms in real time, in order to reduce computational complexity where appropriate. This is followed by the strictly linear and widely-linear quaternion least mean phase algorithms, developed for phase-only estimation in the quaternion domain and accompanied by both quantitative performance assessment and physical interpretation of operations. Next, the practical application of state-space modeling of three-phase power signals in smart grid management and control systems is considered, and a robust complex-valued state-space model for frequency estimation in three-phase systems is presented. Its advantages over other available estimators are demonstrated both analytically and through simulations. The concept is then expanded to the quaternion setting in order to make possible the simultaneous estimation of the system frequency and its voltage phasors. Furthermore, a distributed quaternion Kalman filtering algorithm is developed for frequency estimation over power distribution networks and collaborative target tracking. Finally, the statistics of stable quaternion-valued random variables, which include quaternion-valued Gaussian random variables as a special case, are investigated in order to develop a framework for the modeling and processing of heavy-tailed quaternion-valued signals.
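
    As an illustration of the widely-linear modeling mentioned above, a sketch of the three quaternion involutions (q^i, q^j, q^k, defined as -eta q eta for eta in {i, j, k}) and a single-tap widely-linear model built from them; the Hamilton-product implementation and the left placement of the coefficients are assumptions of this sketch, not the thesis's notation.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def involution(q, axis):
    """q^eta = -eta q eta for eta in {i, j, k}; these involutions underpin
    quaternion augmented (widely-linear) statistics."""
    eta = np.zeros(4)
    eta["ijk".index(axis) + 1] = 1.0
    return -qmul(qmul(eta, q), eta)

def widely_linear(q, h, g, u, v):
    """Single-tap widely-linear model y = h q + g q^i + u q^j + v q^k.
    A strictly linear model would use only the h q term."""
    return (qmul(h, q) + qmul(g, involution(q, "i"))
            + qmul(u, involution(q, "j")) + qmul(v, involution(q, "k")))
```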

    Estimation and tracking of rapidly time-varying broadband acoustic communication channels

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2006. This thesis develops methods for estimating wideband shallow-water acoustic communication channels. The very shallow water wideband channel has three distinct features: large dimension caused by extensive delay spread; a limited number of degrees of freedom (DOF) due to resolvable paths and inter-path correlations; and rapid fluctuations induced by scattering from the moving sea surface. Traditional least-squares (LS) estimation techniques often fail to reconcile the rapid fluctuations with the large dimensionality, and subspace-based approaches with DOF reduction are confronted with an unstable subspace structure subject to significant changes over a short period of time. Based on state-space channel modeling, the first part of this thesis develops algorithms that jointly estimate the channel as well as its dynamics. Algorithms based on the Extended Kalman Filter (EKF) and the Expectation Maximization (EM) approach, respectively, are developed. Analysis shows conceptual parallels, including an identical second-order innovation form shared by the EKF modification and the suboptimal EM, and the shared issue of parameter identifiability due to channel structure, reflected as parameter unobservability in the EKF and insufficient excitation in EM. Modifications of both algorithms, including a two-model-based EKF and a subspace EM algorithm that selectively track dominant taps and reduce prediction error, are proposed to overcome the identifiability issue. The second part of the thesis develops algorithms that explicitly find a sparse estimate of the delay-Doppler spread function. The study contributes to a better understanding of how channel physical constraints bear on algorithm design and potential performance improvement, and it may also be generalized to other applications where dimensionality and variability collide. Financial support for this thesis research was provided by the Office of Naval Research and the WHOI Academic Program Office.
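
    A minimal sketch of state-space channel tracking in the spirit described above, assuming a first-order Gauss-Markov (AR(1)) model for the tap vector and a standard Kalman filter with fixed, hand-chosen parameters; the thesis's EKF/EM algorithms, which jointly estimate the channel and its dynamics, go well beyond this fixed-parameter version.

```python
import numpy as np

def kalman_channel_tracker(X, d, alpha=0.999, q=1e-4, r=1e-2):
    """Track a time-varying FIR channel h[n] under AR(1) dynamics:
    h[n] = alpha * h[n-1] + w[n],  d[n] = x[n]^T h[n] + v[n].

    X: (N, L) matrix of input regressors; d: (N,) observed outputs.
    q, r: assumed process- and measurement-noise variances (illustrative).
    """
    N, L = X.shape
    h = np.zeros(L)                       # channel estimate
    P = np.eye(L)                         # estimate covariance
    est = np.zeros((N, L))
    for n in range(N):
        # Predict step under the AR(1) model
        h = alpha * h
        P = alpha**2 * P + q * np.eye(L)
        # Update with the scalar observation d[n]
        x = X[n]
        k = P @ x / (x @ P @ x + r)       # Kalman gain
        h = h + k * (d[n] - x @ h)
        P = P - np.outer(k, x) @ P
        est[n] = h
    return est
```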

    Design of Neural Network Filters

    The subject of this licentiate thesis is the design of neural network filters. Filters based on neural networks can be seen as extensions of the classical linear adaptive filter, aimed at modeling nonlinear relationships. The main emphasis is placed on a neural network implementation of the non-recursive, nonlinear adaptive model with additive noise. The aim is to clarify the phases involved in designing neural network architectures for various "black-box" modeling tasks such as system identification, inverse modeling, and time-series prediction. The main contributions include the formulation of a neural-network-based canonical filter representation, which forms the basis for an architecture classification scheme; essentially, this amounts to a distinction between global and local models. This allows a number of known neural network architectures to be classified and, furthermore, opens the possibility of developing entirely new structures. In this context, a review of several well-known architectures is given, with particular emphasis on the treatment of the multi-layer perceptron neural network.
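
    As an illustration of the nonlinear adaptive filter described above, a sketch of a tapped delay line feeding a one-hidden-layer multi-layer perceptron, trained by stochastic gradient descent for one-step-ahead time-series prediction; the layer sizes, tanh activation, and step size are illustrative choices, not the thesis's design.

```python
import numpy as np

def train_mlp_filter(x, n_taps=8, n_hidden=16, mu=0.01, epochs=20, seed=0):
    """Nonlinear adaptive filter: delay-line regressor -> one-hidden-layer MLP,
    trained by SGD on the one-step-ahead squared prediction error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (n_hidden, n_taps))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]       # past samples as the regressor
            z = np.tanh(W1 @ u + b1)        # hidden layer
            y = W2 @ z + b2                 # prediction of x[n]
            e = x[n] - y                    # prediction error
            # Gradient of 0.5*e**2, backpropagated through the network
            gz = -e * W2 * (1 - z**2)
            W2 -= mu * (-e * z)
            b2 -= mu * (-e)
            W1 -= mu * np.outer(gz, u)
            b1 -= mu * gz
    return W1, b1, W2, b2

# Example: fit a noisy sine series (hypothetical data)
t = np.arange(2000)
series = np.sin(0.05 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
params = train_mlp_filter(series)
```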

    Finite precision deep learning with theoretical guarantees

    Recent successes of deep learning have been achieved at the expense of very high computational and parameter complexity. Today, deployment of both inference and training of deep neural networks (DNNs) is predominantly in the cloud. A recent alternative trend is to deploy DNNs onto untethered, resource-constrained platforms at the edge. To realize on-device intelligence, the gap between algorithmic requirements and available resources needs to be closed. One popular way of doing so is via implementation in finite precision. While ad hoc trial-and-error techniques in finite precision deep learning abound, theoretical guarantees on network accuracy are elusive. The work presented in this dissertation builds a theoretical framework for the implementation of deep learning in finite precision. For inference, we theoretically analyze the worst-case accuracy drop in the presence of weight and activation quantization, and we derive an optimal clipping criterion (OCC) to minimize the precision of dot-product outputs; for implementations using in-memory computing, OCC lowers ADC precision requirements. We analyze fixed-point training and present a methodology for implementing quantized back-propagation with close-to-minimal per-tensor precision. Finally, we study accumulator precision for reduced-precision floating-point training using variance analysis techniques.

    We first introduce our work on fixed-point inference with accuracy guarantees. Theoretical bounds on the mismatch between limited- and full-precision networks are derived. Proper precision assignments can be readily obtained from these bounds, and weight-activation as well as per-layer precision trade-offs are derived. Applied to a variety of networks and datasets, the presented analysis is found to be tight to within 2 bits. Furthermore, it is shown that a minimum-precision network can have up to ~3.5× lower hardware complexity than a binarized network at iso-accuracy. In general, a minimum-precision network can reduce complexity by up to ~10× compared to a full-precision baseline while maintaining accuracy. Per-layer precision analysis indicates that the precision requirements of common networks vary from 2 bits to 10 bits to guarantee an accuracy close to the floating-point baseline.

    Then, we study DNN implementation using in-memory computing (IMC), where we propose OCC to minimize the column ADC precision. The signal-to-quantization-noise ratio (SQNR) of OCC is shown to be within 0.8 dB of the well-known optimal Lloyd-Max quantizer. OCC improves the SQNR of the commonly employed full-range quantizer by 14 dB, which translates to a 3-bit reduction in ADC precision. The input-serial weight-parallel (ISWP) IMC architecture is also studied: using bit-slicing techniques, significant energy savings can be achieved with minimal accuracy loss. Indeed, we prove that a dot-product can be realized with a single memory access while suffering no more than a 2 dB SQNR drop. Combining the proposed OCC and ISWP noise analysis with our DNN precision analysis, we demonstrate a ~6× reduction in energy consumption in DNN implementation at iso-accuracy.

    Furthermore, we study the quantization of the back-propagation training algorithm. We propose a systematic methodology to obtain close-to-minimal per-layer precision requirements that guarantee statistical similarity between fixed-point and floating-point training. The challenges of quantization noise, inter-layer and intra-layer precision trade-offs, dynamic range, and stability are jointly addressed. Applied to several benchmarks, fixed-point training is demonstrated to achieve high fidelity to the baseline, with an accuracy drop no greater than 0.56%. The derived precision assignment is shown to be within 1 bit per tensor of the minimum. The methodology is found to reduce the representational, computational, and communication costs of training by up to 6×, 8×, and 4×, respectively, compared to the baseline and related works.

    Finally, we address the problem of reduced-precision floating-point training; in particular, we study accumulation precision requirements. We present the variance retention ratio (VRR), an analytical metric measuring the suitability of accumulation mantissa precision. The analysis expands on concepts employed in variance engineering for weight initialization. An analytical expression for the VRR is derived and used to determine accumulation bit-widths for precise tailoring of computation hardware. The VRR also quantifies the benefits of effective summation-reduction techniques such as chunked accumulation and sparsification. Experimentally, the validity and tightness of our analysis are verified across multiple deep learning benchmarks.
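
    To make the clipping/precision trade-off concrete, a sketch of a symmetric uniform quantizer with a saturation level, plus an SQNR measurement; this is a generic clipped quantizer for illustration, not the optimal clipping criterion (OCC) derived in the dissertation.

```python
import numpy as np

def quantize_clipped(x, n_bits, clip):
    """Symmetric uniform quantizer with clipping: saturate at +/-clip, then
    round onto 2**n_bits levels. A smaller clip gives finer resolution but
    more clipping distortion; that tension is what an optimal clipping
    criterion balances."""
    step = 2 * clip / (2**n_bits - 1)
    xq = np.clip(x, -clip, clip)
    return np.round(xq / step) * step

def sqnr_db(x, xq):
    """Signal-to-quantization-noise ratio in dB."""
    return 10 * np.log10(np.sum(x**2) / np.sum((x - xq)**2))

# Example: sweep the clipping level for Gaussian-distributed values
x = np.random.default_rng(0).normal(0, 1, 100_000)
for clip in (1.0, 2.0, 4.0, 8.0):
    print(clip, round(sqnr_db(x, quantize_clipped(x, 4, clip)), 2))
```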