5,064 research outputs found
Fixed-point error analysis of stochastic gradient adaptive lattice filters
Journal Article. Abstract: This paper presents a theoretical analysis of the stochastic gradient adaptive lattice filter used as a linear, one-step predictor, when the effects of finite-precision arithmetic are taken into account. Only the fixed-point implementation is considered here. Both the unnormalized and normalized adaptation algorithms are analyzed. Expressions are developed for the steady-state mean-squared values of the accumulated numerical errors in the computation of the reflection coefficients and the prediction errors of different orders. The results show that the dominant term in the expressions for the mean-squared values of the numerical errors is inversely proportional to the convergence parameter. Furthermore, they indicate that the quantization errors associated with the reflection coefficients are more critical than those associated with representing the prediction error sequences. Another interesting result is that signals with high correlation among samples produce larger numerical errors in the adaptive lattice filter than signals with low correlation. We present several simulation examples that show close agreement with the theoretical results. We also present comparisons between the numerical behavior of the lattice and transversal stochastic gradient adaptive filters. The numerical results support the general belief that gradient adaptive lattice filters have better numerical properties than their transversal counterparts, even though the lattice filters can conceivably produce larger numerical errors than the transversal filters under some circumstances.
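As a concrete illustration of the recursion this paper analyzes, here is a minimal floating-point sketch of one stage of a power-normalized stochastic gradient adaptive lattice predictor; the function name, step size, and normalization are our own illustrative choices, not the authors' exact formulation (their analysis concerns the fixed-point version of such a recursion).

```python
import numpy as np

def gal_stage(x, mu=0.05, lam=0.99):
    """One stage of a power-normalized stochastic gradient adaptive lattice
    predictor (illustrative sketch).  Returns the forward prediction error
    sequence and the trajectory of the reflection coefficient k."""
    k = 0.0
    p = 1.0                        # running input-power estimate
    b_prev = 0.0                   # delayed backward error b_0(n-1)
    f_out = np.empty_like(x)
    k_hist = np.empty_like(x)
    for n, s in enumerate(x):      # at stage 1: f_0(n) = b_0(n) = x(n)
        f = s - k * b_prev         # f_1(n) = f_0(n) - k * b_0(n-1)
        b = b_prev - k * s         # b_1(n) = b_0(n-1) - k * f_0(n)
        p = lam * p + (1 - lam) * (s * s + b_prev * b_prev)
        k += (mu / p) * (f * b_prev + b * s)   # stochastic gradient step
        k = min(max(k, -1.0), 1.0)             # keep the stage stable
        f_out[n] = f
        k_hist[n] = k
        b_prev = s                 # b_0(n) becomes b_0(n-1) next sample
    return f_out, k_hist
```

On an AR(1) input the reflection coefficient converges toward r(1)/r(0) and the forward prediction error carries less power than the input; the paper quantifies how quantizing k and the error sequences perturbs exactly this kind of recursion.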
Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks
Bilateral filters are in widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and, in particular, derive a gradient descent algorithm so that the filter parameters can be learned from data. This derivation makes it possible to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate its use in applications where a single filter application is desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for processing high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the use of general forms of filters.
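For context, here is a minimal brute-force sketch of the fixed Gaussian bilateral filter that the paper generalizes (1-D, with hypothetical parameter values); the paper's contribution is learning such filters on the permutohedral lattice rather than hand-picking the kernels as done below.

```python
import numpy as np

def bilateral_1d(x, sigma_s=2.0, sigma_r=0.3, radius=6):
    """Brute-force 1-D bilateral filter with hand-chosen Gaussian spatial
    and range kernels: the fixed parametric baseline that learned, sparse
    high-dimensional filters replace."""
    y = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        d = np.arange(lo, hi) - i                           # spatial offsets
        spatial = np.exp(-d ** 2 / (2 * sigma_s ** 2))
        rangew = np.exp(-(x[lo:hi] - x[i]) ** 2 / (2 * sigma_r ** 2))
        w = spatial * rangew
        y[i] = np.sum(w * x[lo:hi]) / np.sum(w)             # normalized average
    return y
```

On a noisy step signal this smooths the flat regions while the range kernel keeps the edge sharp, which is the edge-preserving behaviour the abstract refers to.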
On line power spectra identification and whitening for the noise in interferometric gravitational wave detectors
In this paper we address both the problem of identifying the noise Power Spectral Density of interferometric detectors by parametric techniques and the problem of whitening the sequence of data. We concentrate the study on a Power Spectral Density like that of the Italian-French detector VIRGO, and we show that with a reasonable finite number of parameters we succeed in modeling a spectrum like the theoretical one of VIRGO, reproducing all its features. We also propose the use of adaptive techniques to identify and whiten the data of interferometric detectors on line. We analyze the behavior of the adaptive techniques in the stochastic gradient and least squares frameworks. Comment: 28 pages, 21 figures, uses iopart.cls; accepted for publication in Classical and Quantum Gravity
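A minimal sketch of the stochastic gradient side of this idea: a normalized-LMS one-step predictor whose prediction error is an approximately whitened version of the coloured input. The function name, filter order, and step size are illustrative assumptions, not the authors' exact adaptive scheme.

```python
import numpy as np

def nlms_whiten(x, order=4, mu=0.1):
    """On-line whitening by adaptive linear prediction: a normalized-LMS
    predictor tracks the coloured input, and its prediction error e[n]
    is an approximately white sequence."""
    w = np.zeros(order)          # predictor coefficients
    buf = np.zeros(order)        # most recent past samples
    e = np.empty_like(x)
    for n, s in enumerate(x):
        pred = w @ buf           # one-step prediction of s from the past
        e[n] = s - pred          # whitened output = prediction error
        w += (mu / (buf @ buf + 1e-8)) * e[n] * buf   # NLMS update
        buf = np.r_[s, buf[:-1]]
    return e
```

Fed with a strongly correlated AR(1) sequence, the residual's lag-1 autocorrelation collapses toward zero once the predictor has adapted, which is the whitening effect exploited for detector data.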
Underdetermined-order recursive least-squares adaptive filtering: The concept and algorithms
Published version
On adaptive filter structure and performance
SIGLE. Available from British Library Document Supply Centre - DSC:D75686/87 / BLDSC - British Library Document Supply Centre, GB, United Kingdom
Learning algorithms for adaptive digital filtering
In this thesis, we consider the problem of parameter optimisation in adaptive digital filtering. Adaptive digital filtering can be accomplished using both Finite Impulse Response (FIR) filters and Infinite Impulse Response (IIR) filters. Adaptive FIR filtering algorithms are well established. However, the potential computational advantages of IIR filters have led to an increase in research on adaptive IIR filtering algorithms. These algorithms are studied in detail in this thesis, and the limitations of current adaptive IIR filtering algorithms are identified. New approaches to adaptive IIR filtering using intelligent learning algorithms are proposed, including Stochastic Learning Automata, Evolutionary Algorithms and Annealing Algorithms. Each of these techniques is applied to the filtering problem, and simulation results are presented showing the performance of the algorithms for adaptive IIR filtering. The relative merits and demerits of the different schemes are discussed. Two practical applications of adaptive IIR filtering are simulated, and results of using the new adaptive strategies are presented. In addition to these new approaches, two new hybrid schemes are proposed based on concepts from genetic algorithms and annealing. It is shown, with the help of simulation studies, that these hybrid schemes provide superior performance to the exclusive use of any one scheme.
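As a toy illustration of the annealing idea applied to adaptive IIR filtering (whose error surface can be multimodal, which motivates global search), the sketch below fits a first-order IIR model by a simulated annealing random search. All names, the cooling schedule and the step sizes are our own assumptions, not the thesis's algorithms.

```python
import numpy as np

def iir_out(b, a, x):
    """First-order IIR model: y[n] = b*x[n] + a*y[n-1] (stable for |a| < 1)."""
    y = np.empty_like(x)
    prev = 0.0
    for n, s in enumerate(x):
        prev = b * s + a * prev
        y[n] = prev
    return y

def anneal_iir(x, d, iters=3000, t0=1.0, seed=0):
    """Toy simulated annealing search over (b, a) to match a desired output d:
    random proposals, always accept improvements, occasionally accept worse
    moves with probability exp(-delta/t) under a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, 0.0])
    cost = np.mean((d - iir_out(*theta, x)) ** 2)
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-3                # linear cooling
        cand = theta + (0.05 + 0.2 * t) * rng.standard_normal(2)
        cand[1] = np.clip(cand[1], -0.95, 0.95)        # keep the pole stable
        c = np.mean((d - iir_out(*cand, x)) ** 2)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            theta, cost = cand, c
    return theta, cost
```

The stability clip on the feedback coefficient mirrors a practical concern the thesis raises for adaptive IIR filters: the search must be confined to stable models.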
Low-complexity RLS algorithms using dichotomous coordinate descent iterations
In this paper, we derive low-complexity recursive least squares (RLS) adaptive filtering algorithms. We express the RLS problem in terms of auxiliary normal equations with respect to increments of the filter weights and apply this approach to the exponentially weighted and sliding window cases to derive new RLS techniques. For solving the auxiliary equations, line search methods are used. We first consider conjugate gradient iterations with a complexity of O(N^2) operations per sample, where N is the number of filter weights. To reduce the complexity and make the algorithms more suitable for finite precision implementation, we propose a new dichotomous coordinate descent (DCD) algorithm and apply it to the auxiliary equations. This results in a transversal RLS adaptive filter with complexity as low as 3N multiplications per sample, which is only slightly higher than the complexity of the least mean squares (LMS) algorithm (2N multiplications). Simulations are used to compare the performance of the proposed algorithms against the classical RLS and known advanced adaptive algorithms. A fixed-point FPGA implementation of the proposed DCD-based RLS algorithm is also discussed, and results of this implementation are presented.
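A sketch of a dichotomous coordinate descent solver for normal equations R w = p helps make the complexity claim concrete: steps are powers of two, so each coordinate update needs only additions and bit shifts in a fixed-point implementation. This follows the general DCD idea named in the abstract; the variable names, parameter defaults and stopping rule below are our simplifications.

```python
import numpy as np

def dcd_solve(R, p, h=1.0, n_bits=15, max_updates=1000):
    """Dichotomous coordinate descent for R w = p (R symmetric positive
    definite).  The step d halves through n_bits power-of-two levels; a
    coordinate is updated by +/- d whenever that reduces the quadratic
    cost, which needs only the residual r = p - R w."""
    n = len(p)
    w = np.zeros(n)
    r = p.astype(float).copy()       # residual of the normal equations
    d = h                            # assumes |w_k| < h for the solution
    updates = 0
    for _ in range(n_bits):
        d /= 2.0
        clean_pass = False
        while not clean_pass and updates < max_updates:
            clean_pass = True
            for k in range(n):
                if abs(r[k]) > (d / 2.0) * R[k, k]:   # update pays off
                    s = 1.0 if r[k] > 0 else -1.0
                    w[k] += s * d                     # shift-and-add step
                    r -= s * d * R[:, k]              # maintain residual
                    clean_pass = False
                    updates += 1
    return w
```

Because the weight increments are powers of two, no general multiplier is needed per update, which is what makes the DCD iteration attractive for the fixed-point FPGA implementation the paper discusses.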