An affine combination of two LMS adaptive filters - Transient mean-square analysis
This paper studies the statistical behavior of an affine combination of the outputs of two LMS adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor is restricted to the interval [0, 1]. The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the MSE. First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSD of either filter.
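The optimal unrealizable affine combiner described above can be sketched in a few lines of NumPy. Everything below is an illustrative toy setup, not the paper's experimental configuration: the channel, step sizes, and noise level are hypothetical, and the combiner uses knowledge of the true channel, which is exactly what makes it unrealizable in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: unknown 8-tap channel, white Gaussian input,
# small additive observation noise.
M, N = 8, 3000
w_true = rng.standard_normal(M)
X = rng.standard_normal((N, M))
d = X @ w_true + 0.01 * rng.standard_normal(N)

# Two LMS filters adapting on the same data: fast and slow step sizes.
mu1, mu2 = 0.05, 0.005
w1 = np.zeros(M)
w2 = np.zeros(M)
for n in range(N):
    x = X[n]
    w1 += mu1 * (d[n] - x @ w1) * x
    w2 += mu2 * (d[n] - x @ w2) * x

# Optimal unrealizable affine combiner: the lambda minimizing the MSD of
# lam * w1 + (1 - lam) * w2 requires knowing w_true, hence "unrealizable".
a, b = w1 - w_true, w2 - w_true
lam = -(b @ (a - b)) / ((a - b) @ (a - b))
w_comb = lam * w1 + (1 - lam) * w2   # lam is NOT restricted to [0, 1]

def msd(w):
    # Mean-square deviation from the true channel.
    return np.sum((w - w_true) ** 2)
```

Because the optimum is taken over all affine coefficients, including lam = 0 and lam = 1, the combined MSD can never exceed that of either component filter.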
Stochastic analysis of an error power ratio scheme applied to the affine combination of two LMS adaptive filters
The affine combination of two adaptive filters that simultaneously adapt on the same inputs has been actively investigated. In these structures, the filter outputs are linearly combined to yield a performance that is better than that of either filter. Various decision rules can be used to determine the time-varying parameter for combining the filter outputs. A recently proposed scheme based on the ratio of error powers of the two filters has been shown by simulation to achieve nearly optimum performance. The purpose of this paper is to present a first analysis of the statistical behavior of this error power scheme for white Gaussian inputs. Expressions are derived for the mean behavior of the combination parameter and for the adaptive weight mean-square deviation. Monte Carlo simulations show good to excellent agreement with the theoretical predictions.
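A minimal sketch of the error-power idea, assuming a toy identification problem: smoothed estimates of each filter's error power set the time-varying combination parameter. The smoothing factor and the specific ratio rule below are illustrative stand-ins, not necessarily the exact scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: unknown 8-tap channel, white Gaussian input.
M, N = 8, 5000
w_true = rng.standard_normal(M)
X = rng.standard_normal((N, M))
d = X @ w_true + 0.01 * rng.standard_normal(N)

mu1, mu2 = 0.05, 0.005   # fast and slow LMS step sizes
w1 = np.zeros(M)
w2 = np.zeros(M)
p1 = p2 = 1e-6           # smoothed error powers of the two filters
beta = 0.99              # smoothing factor (illustrative choice)
lam = 0.5

for n in range(N):
    x = X[n]
    e1 = d[n] - x @ w1
    e2 = d[n] - x @ w2
    w1 += mu1 * e1 * x
    w2 += mu2 * e2 * x
    # Error-power ratio rule: weight each filter by the other's error
    # power, so the filter currently doing better dominates the mix.
    p1 = beta * p1 + (1 - beta) * e1 ** 2
    p2 = beta * p2 + (1 - beta) * e2 ** 2
    lam = p2 / (p1 + p2)

w_comb = lam * w1 + (1 - lam) * w2   # combined weight vector
```

Since both error powers are strictly positive, this particular rule keeps the combination parameter inside (0, 1), i.e., it realizes a convex rather than a general affine combination.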
A stochastic behavior analysis of stochastic restricted-gradient descent algorithm in reproducing kernel Hilbert spaces
This paper presents a stochastic behavior analysis of a kernel-based stochastic restricted-gradient descent method. The restricted gradient gives a steepest ascent direction within the so-called dictionary subspace. The analysis provides the transient and steady-state performance under the mean-squared-error criterion. It also includes stability conditions in the mean and mean-square sense. The present study is based on the analysis of the kernel normalized least mean square (KNLMS) algorithm initially proposed by Chen et al. Simulation results validate the analysis.
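For orientation, a kernel normalized LMS update of the kind analyzed here can be sketched as follows. The dictionary is a fixed grid for simplicity (practical dictionaries are built adaptively, e.g. with a coherence criterion), and the grid, bandwidth, step size, and test system are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed dictionary of Gaussian kernel centers on a grid (illustrative;
# adaptive dictionary construction is the usual practice).
grid = np.linspace(-1.0, 1.0, 10)
D = np.array([[a, b] for a in grid for b in grid])   # 100 centers
bw = 0.5

def kvec(u):
    # Gaussian kernel evaluations of input u against the dictionary.
    return np.exp(-np.sum((D - u) ** 2, axis=1) / (2 * bw ** 2))

# Hypothetical nonlinear system to identify.
N = 2000
U = rng.uniform(-1.0, 1.0, size=(N, 2))
d = np.sin(3.0 * U[:, 0]) + 0.5 * U[:, 1] + 0.01 * rng.standard_normal(N)

mu, eps = 0.5, 1e-2
alpha = np.zeros(len(D))   # expansion coefficients over the dictionary
errors = []
for n in range(N):
    k = kvec(U[n])
    e = d[n] - alpha @ k
    errors.append(e)
    # Normalized update: the step is scaled by the kernel vector energy,
    # which is what distinguishes KNLMS from plain KLMS.
    alpha += mu * e * k / (eps + k @ k)
```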
Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm
The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite-order nonlinearity model. A Gaussian KLMS has two design parameters: the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models of the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and steady state. This allows the explicit analytical determination of stability limits and makes it possible to choose the algorithm parameters a priori so as to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
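As a concrete reference point, a bare-bones Gaussian KLMS loop looks like the following. The test system, kernel bandwidth, and step size are illustrative choices, and the dictionary here simply grows by one center per sample; practical implementations cap the model order, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(x, centers, bw):
    # Gaussian kernel between input x and each stored center.
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * bw ** 2))

# Hypothetical nonlinear system to identify.
N = 1500
U = rng.uniform(-1.0, 1.0, size=(N, 2))
d = np.sin(3.0 * U[:, 0]) + 0.5 * U[:, 1] + 0.01 * rng.standard_normal(N)

mu, bw = 0.2, 0.5      # step size and Gaussian kernel bandwidth
centers = [U[0]]       # dictionary of kernel centers (grows each sample)
alphas = [mu * d[0]]   # kernel expansion coefficients
errors = []

for n in range(1, N):
    k = gauss_kernel(U[n], np.asarray(centers), bw)
    e = d[n] - float(np.dot(alphas, k))   # prediction error
    errors.append(e)
    centers.append(U[n])                  # KLMS adds one center per step
    alphas.append(mu * e)
```

The two design parameters the abstract highlights appear directly as `mu` and `bw`; the analytical models it derives are what would let a designer pick them a priori instead of by trial and error.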
The ROMES method for statistical modeling of reduced-order-model error
This work presents a technique for statistically modeling errors introduced
by reduced-order models. The method employs Gaussian-process regression to
construct a mapping from a small number of computationally inexpensive `error
indicators' to a distribution over the true error. The variance of this
distribution can be interpreted as the (epistemic) uncertainty introduced by
the reduced-order model. To model normed errors, the method employs existing
rigorous error bounds and residual norms as indicators; numerical experiments
show that the method leads to a near-optimal expected effectivity in contrast
to typical error bounds. To model errors in general outputs, the method uses
dual-weighted residuals---which are amenable to uncertainty control---as
indicators. Experiments illustrate that correcting the reduced-order-model
output with this surrogate can improve prediction accuracy by an order of
magnitude; this contrasts with existing `multifidelity correction' approaches,
which often fail for reduced-order models and suffer from the curse of
dimensionality. The proposed error surrogates also lead to a notion of
`probabilistic rigor', i.e., the surrogate bounds the error with specified
probability
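The core regression step can be sketched with a hand-rolled Gaussian process, using synthetic stand-ins for the quantities involved: a scalar error indicator (e.g. a residual norm) and the true reduced-order-model error it should predict. The near-linear relation between them and all hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: indicator rho (e.g. a residual norm) and the
# true ROM error delta it should predict (hypothetical relation).
n_train = 40
rho = rng.uniform(0.1, 2.0, n_train)
delta = 0.8 * rho + 0.05 * rng.standard_normal(n_train)

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between indicator values.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Gaussian-process regression conditioned on the (rho, delta) pairs.
sigma_n = 0.05
K = rbf(rho, rho) + sigma_n ** 2 * np.eye(n_train)
L = np.linalg.cholesky(K)
coef = np.linalg.solve(L.T, np.linalg.solve(L, delta))

# Predictive distribution over the true error at a new indicator value.
rho_star = np.array([1.0])
k_star = rbf(rho_star, rho)
mean = k_star @ coef                      # predicted error
v = np.linalg.solve(L, k_star.T)
var = rbf(rho_star, rho_star) - v.T @ v   # epistemic variance
```

The predictive mean plays the role of the error correction, and the predictive variance is the epistemic uncertainty the abstract describes; a quantile of this distribution is what yields a bound that holds with specified probability.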