A New Recursive Least-Squares Method with Multiple Forgetting Schemes
We propose a recursive least-squares method with multiple forgetting schemes
to track time-varying model parameters that change at different rates. Our
approach hinges on the reformulation of the classic recursive least squares
with forgetting as a regularized least-squares problem. A simulation
study shows the effectiveness of the proposed method.
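As background, the classic single-forgetting RLS recursion that the paper generalizes can be sketched as follows. This is an illustrative NumPy sketch of standard exponentially-weighted RLS; the function and variable names are assumptions, and the multiple-forgetting scheme itself is not reproduced here.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One classic RLS update with a single forgetting factor lam (0 < lam <= 1)."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam      # inverse-correlation update
    return theta, P
```

With lam < 1, older samples are discounted geometrically, which is what lets the filter track time-varying parameters; the paper's contribution is to allow a different discount rate per parameter group.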
Stochastic Gradient versus Recursive Least Squares Learning
In this paper we perform an in-depth investigation of the relative merits of two adaptive learning algorithms with constant gain, Recursive Least Squares (RLS) and Stochastic Gradient (SG), using the Phelps model of monetary policy as a testing ground. The behavior of the two learning algorithms is very different. RLS is characterized by a very small region of attraction of the Self-Confirming Equilibrium (SCE) under the mean, or averaged, dynamics, and by "escapes", or large-distance movements of perceived model parameters from their SCE values. On the other hand, the SCE is stable under the SG mean dynamics in a large region. However, the actual behavior of the SG learning algorithm is divergent for a wide range of constant gain parameters, including those that could be justified as economically meaningful. We explain the discrepancy by looking into the structure of eigenvalues and eigenvectors of the mean dynamics map under SG learning. As a result of our paper, we express a warning regarding the behavior of constant gain learning algorithms in real time: if many eigenvalues of the mean dynamics map are close to the unit circle, the stochastic recursive algorithm which describes the actual dynamics under learning might exhibit divergent behavior despite convergent mean dynamics.
Keywords: constant gain adaptive learning, E-stability, recursive least squares, stochastic gradient learning
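For readers unfamiliar with the two update rules, a minimal sketch of constant-gain SG and constant-gain RLS on a generic linear regression may help. This is not the Phelps model; the gain `gamma`, the function names, and the scalar-output setup are illustrative assumptions.

```python
import numpy as np

def sg_step(theta, x, y, gamma=0.05):
    """Constant-gain stochastic gradient: move along the instantaneous gradient."""
    return theta + gamma * x * (y - x @ theta)

def cg_rls_step(theta, R, x, y, gamma=0.05):
    """Constant-gain RLS: track the regressor second-moment matrix R
    and precondition the gradient step with its inverse."""
    R = R + gamma * (np.outer(x, x) - R)
    theta = theta + gamma * np.linalg.solve(R, x) * (y - x @ theta)
    return theta, R
```

The preconditioning by R is the only difference between the two rules, yet it reshapes the eigenvalues of the mean dynamics map, which is exactly the structure the paper analyzes to explain the divergent real-time behavior.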
Distributed Recursive Least-Squares: Stability and Performance Analysis
The recursive least-squares (RLS) algorithm has well-documented merits for
reducing complexity and storage requirements, when it comes to online
estimation of stationary signals as well as for tracking slowly-varying
nonstationary processes. In this paper, a distributed recursive least-squares
(D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless
sensor networks. Distributed iterations are obtained by minimizing a separable
reformulation of the exponentially-weighted least-squares cost, using the
alternating-minimization algorithm. Sensors carry out reduced-complexity tasks
locally, and exchange messages with one-hop neighbors to consent on the
network-wide estimates adaptively. A steady-state mean-square error (MSE)
performance analysis of D-RLS is conducted, by studying a stochastically-driven
`averaged' system that approximates the D-RLS dynamics asymptotically in time.
For sensor observations that are linearly related to the time-invariant
parameter vector sought, the simplifying independence setting assumptions
facilitate deriving accurate closed-form expressions for the MSE steady-state
values. The problems of mean- and MSE-sense stability of D-RLS are also
investigated, and easily-checkable sufficient conditions are derived under
which a steady-state is attained. Without resorting to diminishing step-sizes
which compromise the tracking ability of D-RLS, stability ensures that per
sensor estimates hover inside a ball of finite radius centered at the true
parameter vector, with high-probability, even when inter-sensor communication
links are noisy. Interestingly, computer simulations demonstrate that the
theoretical findings are accurate also in the pragmatic settings whereby
sensors acquire temporally-correlated data.
Comment: 30 pages, 4 figures, submitted to IEEE Transactions on Signal Processing
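The local-update-plus-neighbor-exchange structure described above can be sketched schematically. The following is a simplified diffusion-style stand-in (local RLS step followed by a consensus combination with one-hop neighbors), not the exact alternating-minimization D-RLS recursions of the paper; the combination matrix `A` and all names are assumptions.

```python
import numpy as np

def drls_like_step(thetas, Ps, X, y, A, lam=0.99):
    """Schematic distributed step: each sensor i runs a local RLS update on
    its own observation (X[i], y[i]), then estimates are combined via a
    row-stochastic matrix A (A[i, j] > 0 only for neighbors j of sensor i)."""
    for i in range(thetas.shape[0]):
        x = X[i]
        Px = Ps[i] @ x
        k = Px / (lam + x @ Px)
        thetas[i] = thetas[i] + k * (y[i] - x @ thetas[i])
        Ps[i] = (Ps[i] - np.outer(k, Px)) / lam
    return A @ thetas, Ps  # consensus (combination) step
```

Under the paper's assumptions (observations linear in a common time-invariant parameter), each sensor's estimate settles near the network-wide solution; the MSE analysis quantifies how far, given noisy links and the constant effective step size.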
Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes
The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practically interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
Zero attracting recursive least squares algorithms
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by imposing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the magnitudes of the associated parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
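The adaptively weighted l2-norm approximation of the l1 penalty can be illustrated with a batch (non-recursive) sketch: the penalty sum_i theta_i^2 / |theta_i| equals ||theta||_1 at the current estimate, and re-solving with refreshed weights drives small coefficients toward zero. The function name, the penalty weight `mu`, and the iteration count are assumptions; this is not the ZA-RLS recursion itself.

```python
import numpy as np

def za_ls(X, y, mu=0.01, iters=20, eps=1e-8):
    """Sparse LS via an adaptively weighted l2 approximation of the l1 penalty.
    Each pass solves (X'X + W) theta = X'y with W = diag(mu / |theta_i|),
    so coefficients near zero receive a very large penalty weight."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        W = np.diag(mu / (np.abs(theta) + eps))  # weights = inverse magnitudes
        theta = np.linalg.solve(X.T @ X + W, X.T @ y)
    return theta
```

The closed form of each pass is what makes a recursive, per-sample variant possible in the paper's setting.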
Block row recursive least squares migration
Recursive estimation of large systems of equations in the context of least-squares fitting is common practice in different fields of study. For example, recursive adaptive filtering is extensively used in signal processing and control applications. The necessity of solving least-squares problems via recursive algorithms stems from the need for fast real-time signal processing strategies. The computational cost of the least-squares algorithm can also limit the applicability of this technique in geophysical problems. In this paper, we consider a recursive least-squares solution for wave-equation least-squares migration with sliding windows involving several rank-K downdating and updating computations. This technique can be applied to dynamic and stationary processes. One can show that in the case of stationary processes, the spectrum of the preconditioned system is clustered around one and the method will converge superlinearly with probability one, if we use enough data in each windowed setup. Numerical experiments are reported in order to illustrate the effectiveness of the technique for least squares migration.
Comment: CSPG CSEG CWLS Convention
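The sliding-window updating/downdating idea can be sketched on the normal equations directly: adding the newest sample is a rank-1 update and dropping the oldest is a rank-1 downdate. This is a simplified sketch under that rank-1, normal-equations assumption; the paper works with rank-K computations and a preconditioned system, and all names here are illustrative.

```python
import numpy as np

def slide(R, b, x_new, y_new, x_old, y_old):
    """Slide the LS window one step: rank-1 update with the incoming sample
    (x_new, y_new) and rank-1 downdate of the outgoing sample (x_old, y_old),
    applied to the normal equations R = X'X, b = X'y over the window."""
    R = R + np.outer(x_new, x_new) - np.outer(x_old, x_old)
    b = b + y_new * x_new - y_old * x_old
    return R, b, np.linalg.solve(R, b)
```

Each slide costs O(p^2) for the rank-1 terms plus one solve, instead of refitting the whole window from scratch; the solution is algebraically identical to the batch fit over the current window.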
Completely Recursive Least Squares and Its Applications
The matrix-inversion-lemma based recursive least squares (RLS) approach is of a recursive form and free of matrix inversion, and has excellent performance regarding computation and memory in solving the classic least-squares (LS) problem. It is important to generalize RLS for generalized LS (GLS) problem. It is also of value to develop an efficient initialization for any RLS algorithm.
In Chapter 2, we develop a unified RLS procedure to solve the unconstrained/linear-equality (LE) constrained GLS. We also show that the LE constraint is in essence a set of special error-free observations and further consider the GLS with implicit LE constraint in observations (ILE-constrained GLS).
Chapter 3 treats the RLS initialization-related issues, including rank check, a convenient method to compute the involved matrix inverse/pseudoinverse, and resolution of underdetermined systems. Based on auxiliary-observations, the RLS recursion can start from the first real observation and possible LE constraints are also imposed recursively. The rank of the system is checked implicitly. If the rank is deficient, a set of refined non-redundant observations is determined alternatively.
In Chapter 4, based on [Li07], we show that the linear minimum mean square error (LMMSE) estimator, as well as the optimal Kalman filter (KF) considering various correlations, can be calculated by solving an equivalent GLS using the unified RLS.
In Chapters 5 & 6, an approach to joint state-and-parameter estimation (JSPE) in power systems monitored by synchrophasors is adopted, where the original nonlinear parameter problem is reformulated as two loosely-coupled linear subproblems: state tracking and parameter tracking. Chapter 5 deals with the state tracking which determines the voltages in JSPE, where the dynamic behavior of voltages under possible abrupt changes is studied. Chapter 6 focuses on the subproblem of parameter tracking in JSPE, where a new prediction model for parameters with moving means is introduced. Adaptive filters are developed for the above two subproblems, respectively, and both filters are based on the optimal KF accounting for various correlations. Simulations indicate that the proposed approach yields accurate parameter estimates and improves the accuracy of the state estimation, compared with existing methods.
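The bridge between GLS and an ordinary LS problem that RLS can then solve rests on the standard whitening identity: with noise covariance Sigma = L L', pre-multiplying regressors and observations by L^{-1} turns GLS into OLS. The sketch below shows that identity only; it is not the unified RLS procedure of Chapter 2, and the function name is an assumption.

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalized least squares via whitening: solve the OLS problem on
    L^{-1} X and L^{-1} y, where L is the Cholesky factor of Sigma."""
    L = np.linalg.cholesky(Sigma)
    Xw = np.linalg.solve(L, X)  # whitened regressors
    yw = np.linalg.solve(L, y)  # whitened observations
    theta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return theta
```

Because the whitened problem is an ordinary LS problem, any RLS recursion (including the unified one developed in the thesis) can process it sample by sample.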