
    On feed-through terms in the lms algorithm

    The well-known least mean squares (LMS) algorithm is studied as a control system. When applied in a noise canceller, a block diagram approach is used to show that the step size has two upper limits. One is the conventional limit beyond which instability results. The second limit shows that if the step size is chosen too large, then feed-through terms consisting of signal times noise will produce an additive term at the noise canceller output. This second limit is smaller than the first, and exceeding it causes distortion at the noise canceller output.
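The canceller structure this abstract analyzes can be sketched in a few lines. The following is a minimal simulation (all signals, filter lengths, and the step size are illustrative choices, not taken from the paper): a reference noise passes through an unknown path into the primary input, and an LMS filter adapts so that the error output recovers the clean signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_taps = 5000, 8
mu = 0.02  # step size; must stay below ~2/(n_taps * reference power) for stability

# Primary input: signal of interest plus noise correlated with the reference.
signal = np.sin(2 * np.pi * 0.05 * np.arange(n_samples))
reference = rng.standard_normal(n_samples)                  # noise reference input
h = np.array([0.8, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # hypothetical noise path
noise = np.convolve(reference, h)[:n_samples]
primary = signal + noise

w = np.zeros(n_taps)
output = np.zeros(n_samples)
for n in range(n_taps, n_samples):
    x = reference[n - n_taps + 1:n + 1][::-1]  # tap-delay-line vector
    e = primary[n] - w @ x                     # canceller output (the error signal)
    w += mu * e * x                            # LMS weight update
    output[n] = e

# After convergence the canceller output should track the clean signal.
print(np.mean((output[-1000:] - signal[-1000:]) ** 2))
```

Note that the error here contains the signal itself, which is exactly the signal-times-noise feed-through mechanism the abstract describes: a larger `mu` amplifies that term at the output even while the loop remains stable.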

    The trapezoidal method of steepest-descent and its application to adaptive filtering

    The method of steepest-descent is revisited in continuous time. It is shown that the continuous-time version is a vector differential equation whose solution is found by integration. Since numerical integration has many forms, we show an alternative to the conventional solution by using a trapezoidal integration rule. This in turn gives a slightly modified least mean squares (LMS) algorithm.
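One explicit reading of such a trapezoidal update is to average the current and previous instantaneous gradient estimates rather than use only the current one, which yields an LMS-like recursion. The sketch below, on a hypothetical system-identification task (the channel, step size, and noise level are all assumed for illustration, not taken from the paper), shows that two-point-averaged variant converging to the unknown response:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_taps, mu = 4000, 4, 0.02
x = rng.standard_normal(N)
h = np.array([1.0, 0.5, -0.3, 0.1])              # unknown system to identify
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(n_taps)
g_prev = np.zeros(n_taps)                        # previous gradient estimate
for n in range(n_taps, N):
    xn = x[n - n_taps + 1:n + 1][::-1]
    e = d[n] - w @ xn
    g = e * xn                                   # instantaneous (negative) gradient
    w += mu * 0.5 * (g + g_prev)                 # trapezoidal two-point average
    g_prev = g

print(w)  # should approach h after convergence
```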

    Optical Magnetometer Employing Adaptive Noise Cancellation for Unshielded Magnetocardiography

    This paper demonstrates the concept of an optical magnetometer for magnetocardiography. The magnetometer employs a standard least mean squares (LMS) algorithm for heart magnetic field measurement in an unshielded environment. Experimental results show that the algorithm can extract a weak heart signal from much stronger magnetic noise, detect the P, QRS, and T heart features, and completely suppress the common power-line noise component at 50 Hz.
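The 50 Hz suppression can be illustrated with a classic two-weight LMS canceller that uses quadrature sinusoids at the mains frequency as its reference, forming an adaptive notch. This is a generic textbook sketch, not the paper's actual processing chain; the sampling rate, amplitudes, and "heart" waveform are stand-ins:

```python
import numpy as np

fs, f0, mu = 1000.0, 50.0, 0.01
t = np.arange(0, 5, 1 / fs)
heart = 0.1 * np.sin(2 * np.pi * 1.2 * t)        # stand-in for the weak cardiac signal
mains = 2.0 * np.sin(2 * np.pi * f0 * t + 0.7)   # strong 50 Hz interference
primary = heart + mains

# Two-weight LMS canceller with quadrature references at 50 Hz.
ref = np.stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
w = np.zeros(2)
out = np.zeros_like(t)
for n in range(len(t)):
    xn = ref[:, n]
    e = primary[n] - w @ xn                      # error output = cleaned signal
    w += 2 * mu * e * xn
    out[n] = e

print(np.mean((out[-1000:] - heart[-1000:]) ** 2))
```

Because the reference spans only the 50 Hz subspace, the notch removes the interferer while the low-frequency cardiac component passes through the error output essentially untouched.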

    An Adaptive Block-Based Eigenvector Equalization for Time-Varying Multipath Fading Channels

    In this paper, we present an adaptive Block-Based EigenVector Algorithm (BBEVA) for blind equalization of time-varying multipath fading channels. In addition, we assess the performance of the new algorithm for different configurations and compare the results with the least mean squares (LMS) algorithm. The new algorithm is evaluated in terms of intersymbol interference (ISI) suppression, mean squared error (MSE), and by examining the signal constellation at the output of the equalizer. Simulation results show that the BBEVA performs better than the non-blind LMS algorithm.
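The non-blind LMS baseline and the ISI metric used in such comparisons can be sketched as follows. This is a generic training-based LMS equalizer on a hypothetical static BPSK channel (the channel taps, filter length, and delay are assumptions for illustration, not the paper's time-varying fading setup), with ISI measured on the combined channel-plus-equalizer response:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_taps, mu = 5000, 11, 0.01
symbols = rng.choice([-1.0, 1.0], size=N)            # BPSK training symbols
channel = np.array([0.3, 1.0, 0.4])                  # hypothetical multipath channel
received = np.convolve(symbols, channel)[:N] + 0.01 * rng.standard_normal(N)

delay = 5                                            # equalizer decision delay
w = np.zeros(n_taps)
for n in range(n_taps, N):
    xn = received[n - n_taps + 1:n + 1][::-1]
    e = symbols[n - delay] - w @ xn                  # training error (non-blind)
    w += mu * e * xn

# ISI of combined response c: (sum of |c_k|^2 minus the peak) over the peak.
c = np.convolve(channel, w)
isi = (np.sum(c ** 2) - np.max(c ** 2)) / np.max(c ** 2)
print(isi)
```

A blind algorithm such as BBEVA targets the same residual-ISI figure but without access to the `symbols[n - delay]` training sequence used above.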

    On-chip compensation of device-mismatch effects in analog VLSI neural networks

    Device mismatch in VLSI degrades the accuracy of analog arithmetic circuits and lowers the learning performance of large-scale neural networks implemented in this technology. We show compact, low-power on-chip calibration techniques that compensate for device mismatch. Our techniques enable large-scale analog VLSI neural networks with learning performance on the order of 10 bits. We demonstrate our techniques on a 64-synapse linear perceptron trained with the least mean squares (LMS) algorithm and fabricated in a 0.35 µm CMOS process.
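The effect being compensated can be caricatured in software. In the sketch below (a behavioral toy model, not the paper's circuit: the offset magnitudes, learning rate, and the assumption that calibration has measured each synapse's offset exactly are all illustrative), per-synapse offsets corrupt each analog LMS increment; the uncalibrated weights converge to a biased solution, while subtracting the measured offsets restores accurate learning:

```python
import numpy as np

rng = np.random.default_rng(3)
n_syn, N, mu = 64, 20000, 0.002
w_true = rng.standard_normal(n_syn)

# Model device mismatch as a per-synapse offset added to every analog update.
mismatch = 0.05 * rng.standard_normal(n_syn)
calibration = mismatch.copy()   # assume calibration has measured the offsets

w_raw = np.zeros(n_syn)
w_cal = np.zeros(n_syn)
for _ in range(N):
    x = rng.standard_normal(n_syn)
    # Mismatched analog LMS update: the offset corrupts each increment.
    w_raw += mu * (w_true @ x - w_raw @ x) * x + mu * mismatch
    # Calibrated update: the measured offset is subtracted on-chip.
    w_cal += mu * (w_true @ x - w_cal @ x) * x + mu * mismatch - mu * calibration

err_raw = np.max(np.abs(w_raw - w_true))
err_cal = np.max(np.abs(w_cal - w_true))
print(err_raw, err_cal)
```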

    Low-complexity RLS algorithms using dichotomous coordinate descent iterations

    In this paper, we derive low-complexity recursive least squares (RLS) adaptive filtering algorithms. We express the RLS problem in terms of auxiliary normal equations with respect to increments of the filter weights and apply this approach to the exponentially weighted and sliding window cases to derive new RLS techniques. For solving the auxiliary equations, line search methods are used. We first consider conjugate gradient iterations with a complexity of O(N^2) operations per sample, where N is the number of filter weights. To reduce the complexity and make the algorithms more suitable for finite-precision implementation, we propose a new dichotomous coordinate descent (DCD) algorithm and apply it to the auxiliary equations. This results in a transversal RLS adaptive filter with complexity as low as 3N multiplications per sample, which is only slightly higher than the complexity of the least mean squares (LMS) algorithm (2N multiplications). Simulations are used to compare the performance of the proposed algorithms against the classical RLS and known advanced adaptive algorithms. Fixed-point FPGA implementation of the proposed DCD-based RLS algorithm is also discussed and results of such implementation are presented.
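The core DCD idea is to solve normal equations R w = b using only coordinate-wise updates with power-of-two step sizes, so a hardware implementation needs additions and bit shifts rather than multiplications and divisions. The sketch below is one common formulation under assumed parameters (number of bit levels, initial amplitude, update budget); it is a floating-point illustration of the iteration, not the paper's fixed-point design:

```python
import numpy as np

def dcd_solve(R, b, n_bits=15, amplitude=2.0, max_updates=None):
    """Dichotomous coordinate descent for R w = b, with R symmetric
    positive definite. Step sizes are successively halved powers of two."""
    N = len(b)
    if max_updates is None:
        max_updates = 64 * N        # assumed update budget
    w = np.zeros(N)
    r = b.copy()                    # residual b - R w, updated incrementally
    d = amplitude                   # current power-of-two step size
    updates = 0
    for _ in range(n_bits):
        settled = False
        while not settled:          # sweep coordinates until none qualifies
            settled = True
            for k in range(N):
                if abs(r[k]) > (d / 2) * R[k, k]:
                    s = np.sign(r[k])
                    w[k] += s * d           # shift-and-add weight update
                    r -= s * d * R[:, k]    # keep residual consistent
                    updates += 1
                    settled = False
                    if updates >= max_updates:
                        return w
        d /= 2                      # move to the next bit of precision
    return w

# Compare against a direct solve on a small normal-equations system.
rng = np.random.default_rng(4)
A = rng.standard_normal((200, 8))
R = A.T @ A / 200
b = R @ np.array([0.9, -0.5, 0.3, 0.0, 1.1, -0.2, 0.4, 0.1])
print(dcd_solve(R, b))
```

In the RLS setting described above, the auxiliary normal equations for the weight increments play the role of `R w = b` here, and the bounded update budget is what caps the per-sample multiplication count.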