Distributed Recursive Least-Squares: Stability and Performance Analysis
The recursive least-squares (RLS) algorithm has well-documented merits for
reducing complexity and storage requirements, when it comes to online
estimation of stationary signals as well as for tracking slowly-varying
nonstationary processes. In this paper, a distributed recursive least-squares
(D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless
sensor networks. Distributed iterations are obtained by minimizing a separable
reformulation of the exponentially-weighted least-squares cost, using the
alternating-minimization algorithm. Sensors carry out reduced-complexity tasks
locally, and exchange messages with one-hop neighbors to consent on the
network-wide estimates adaptively. A steady-state mean-square error (MSE)
performance analysis of D-RLS is conducted, by studying a stochastically-driven
`averaged' system that approximates the D-RLS dynamics asymptotically in time.
For sensor observations that are linearly related to the time-invariant
parameter vector sought, the simplifying independence setting assumptions
facilitate deriving accurate closed-form expressions for the MSE steady-state
values. The problems of mean- and MSE-sense stability of D-RLS are also
investigated, and easily-checkable sufficient conditions are derived under
which a steady-state is attained. Without resorting to diminishing step-sizes
which compromise the tracking ability of D-RLS, stability ensures that
per-sensor estimates hover inside a ball of finite radius centered at the true
parameter vector with high probability, even when inter-sensor communication
links are noisy. Interestingly, computer simulations demonstrate that the
theoretical findings are accurate also in the pragmatic settings whereby
sensors acquire temporally-correlated data.
Comment: 30 pages, 4 figures, submitted to IEEE Transactions on Signal Processing.
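For reference, the single-sensor exponentially-weighted RLS recursion that D-RLS builds on can be sketched as follows. This is a minimal illustration of the classic algorithm, not the distributed one; all names, dimensions, and parameter values are illustrative.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.98):
    """One exponentially-weighted RLS update with forgetting factor lam.

    theta : current parameter estimate, shape (p,)
    P     : inverse-correlation matrix, shape (p, p)
    x     : regressor vector, shape (p,)
    y     : scalar observation, assumed y = x @ theta_true + noise
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - x @ theta                # a priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Px)) / lam  # rank-1 downdate, then rescale
    return theta, P

# Usage: estimate a fixed 3-dim parameter from noisy linear observations.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(500):
    x = rng.standard_normal(3)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
```

With lam < 1 the effective data window is about 1/(1 - lam) samples, which is what gives RLS its tracking ability for slowly-varying parameters.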
Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes
The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practically interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
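One widely used safeguard against round-off propagation in the covariance-form recursion is to re-symmetrize the inverse-correlation matrix at every step, since asymmetry introduced by finite-precision arithmetic tends to accumulate. The sketch below shows only this simple remedy; square-root and UD-factorized schemes are the more thorough alternatives studied in the round-off literature, and the paper's four specific schemes are not reproduced here.

```python
import numpy as np

def rls_step_sym(theta, P, x, y, lam=0.99):
    """One RLS update with explicit re-symmetrization of P, a common
    guard against round-off-induced asymmetry drift (illustrative)."""
    Px = P @ x
    k = Px / (lam + x @ Px)
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam
    P = 0.5 * (P + P.T)  # force exact symmetry each step
    return theta, P

# Usage on a toy 2-parameter problem.
rng = np.random.default_rng(1)
theta_true = np.array([0.3, -1.2])
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for _ in range(1000):
    x = rng.standard_normal(2)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step_sym(theta, P, x, y)
```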
A New Recursive Least-Squares Method with Multiple Forgetting Schemes
We propose a recursive least-squares method with multiple forgetting schemes
to track time-varying model parameters which change with different rates. Our
approach hinges on the reformulation of the classic recursive least-squares
with forgetting scheme as a regularized least squares problem. A simulation
study shows the effectiveness of the proposed method.
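A minimal sketch of per-parameter forgetting: apply a diagonal forgetting matrix as an information decay on P before the standard update, so each parameter direction is forgotten at its own rate. This diagonal form is one common variant and not necessarily the exact scheme proposed in the paper; when all factors are equal it reduces to ordinary exponentially-weighted RLS.

```python
import numpy as np

def rls_multi_forget(theta, P, x, y, lams):
    """RLS step with a separate forgetting factor per parameter (sketch).

    lams : sequence of per-parameter forgetting factors in (0, 1].
    """
    L = np.diag(1.0 / np.sqrt(np.asarray(lams)))
    P = L @ P @ L                     # per-parameter information decay
    Px = P @ x
    k = Px / (1.0 + x @ Px)           # gain after pre-scaling P
    theta = theta + k * (y - x @ theta)
    P = P - np.outer(k, Px)
    return theta, P

# Usage: parameter 0 is forgotten quickly, parameter 1 slowly.
rng = np.random.default_rng(5)
theta_true = np.array([2.0, -0.7])
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(800):
    x = rng.standard_normal(2)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_multi_forget(theta, P, x, y, lams=(0.9, 0.999))
```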
Stochastic Gradient versus Recursive Least Squares Learning
In this paper we perform an in-depth investigation of the relative merits of two adaptive learning algorithms with constant gain, Recursive Least Squares (RLS) and Stochastic Gradient (SG), using the Phelps model of monetary policy as a testing ground. The behavior of the two learning algorithms is very different. RLS is characterized by a very small region of attraction of the Self-Confirming Equilibrium (SCE) under the mean, or averaged, dynamics, and by "escapes", or large-distance movements of perceived model parameters away from their SCE values. On the other hand, the SCE is stable under the SG mean dynamics in a large region. However, the actual behavior of the SG learning algorithm is divergent for a wide range of constant gain parameters, including those that could be justified as economically meaningful. We explain the discrepancy by looking into the structure of eigenvalues and eigenvectors of the mean dynamics map under SG learning. As a result of our paper, we express a warning regarding the behavior of constant-gain learning algorithms in real time: if many eigenvalues of the mean dynamics map are close to the unit circle, the stochastic recursive algorithm which describes the actual dynamics under learning might exhibit divergent behavior despite convergent mean dynamics.
Keywords: constant gain adaptive learning, E-stability, recursive least squares, stochastic gradient learning
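The two competing recursions can be written compactly. The sketch below uses a toy linear regression rather than the Phelps model; the constant-gain RLS form, in which the moment matrix is tracked with the same gain, is the one standard in the adaptive-learning literature, and all names and gain values are illustrative.

```python
import numpy as np

def sg_step(theta, x, y, gain):
    """Constant-gain stochastic gradient (SG) update."""
    return theta + gain * x * (y - x @ theta)

def rls_cg_step(theta, R, x, y, gain):
    """Constant-gain RLS: track the second-moment matrix R with the
    same constant gain, then precondition the gradient by R^{-1}."""
    R = R + gain * (np.outer(x, x) - R)
    theta = theta + gain * np.linalg.solve(R, x * (y - x @ theta))
    return theta, R

# Usage on a toy linear model with i.i.d. regressors.
rng = np.random.default_rng(2)
theta_true = np.array([0.5, 1.5])
th_sg = np.zeros(2)
th_rls, R = np.zeros(2), np.eye(2)
for _ in range(4000):
    x = rng.standard_normal(2)
    y = x @ theta_true + 0.1 * rng.standard_normal()
    th_sg = sg_step(th_sg, x, y, gain=0.02)
    th_rls, R = rls_cg_step(th_rls, R, x, y, gain=0.02)
```

On this well-conditioned toy problem both recursions settle near the truth; the divergence phenomena discussed above arise from the eigenstructure of the mean dynamics in the economic model, not from the generic linear-regression case.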
Zero attracting recursive least squares algorithms
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set as the inverse of the associated absolute parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
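The zero-attracting idea, approximating the l1 penalty by an adaptively weighted l2 term with weights set from the current estimates, can be sketched as a standard RLS step followed by an elementwise shrinkage. This is an illustration of the weighted-l2 mechanism only, not the exact ZA-RLS-I/II recursions; the regularization weight gamma and floor eps are illustrative choices.

```python
import numpy as np

def za_rls_step(theta, P, x, y, lam=0.98, gamma=5e-4, eps=1e-8):
    """RLS update followed by an adaptively weighted l2 shrinkage that
    approximates an l1 penalty (sketch of the zero-attracting idea)."""
    Px = P @ x
    k = Px / (lam + x @ Px)
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam
    # Weighted-l2 shrink: weight_i = 1 / (|theta_i| + eps), so small
    # coefficients are pulled toward zero much harder than large ones.
    theta = theta / (1.0 + gamma / (np.abs(theta) + eps))
    return theta, P

# Usage: sparse channel with two active taps out of five.
rng = np.random.default_rng(4)
theta_true = np.array([1.0, 0.0, 0.0, -0.5, 0.0])
theta, P = np.zeros(5), 100.0 * np.eye(5)
for _ in range(1000):
    x = rng.standard_normal(5)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = za_rls_step(theta, P, x, y)
```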
Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is proposed, in which the agents
update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. In order to benchmark the performance of the proposed
distributed estimator with that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented which illustrate and
verify the analytical findings.
Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016, Accepted: September 2016. To appear in IEEE Transactions on Signal and Information Processing over Networks: Special Issue on Inference and Learning over Networks.
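For the special case of linear observation functions, the consensus + innovations update structure can be sketched as below. This is a toy illustration only: the paper treats nonlinear observation models with carefully designed time-varying weight sequences, whereas here three agents on a path graph each see one scalar projection of the unknown, so no agent is locally observable but the network is globally observable. The graph, gains, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([1.0, -1.0])
H = [np.array([1.0, 0.0]),          # agent 0 sees component 0
     np.array([0.0, 1.0]),          # agent 1 sees component 1
     np.array([1.0, 1.0])]          # agent 2 sees the sum
nbrs = {0: [1], 1: [0, 2], 2: [1]}  # path graph 0 - 1 - 2
est = [np.zeros(2) for _ in range(3)]

for t in range(5000):
    alpha = 1.0 / (t + 1) ** 0.7    # decaying innovation gain
    beta = 0.3                      # consensus weight
    new = []
    for i in range(3):
        # consensus: pull toward neighbors' current estimates
        cons = sum(est[j] - est[i] for j in nbrs[i])
        # innovation: locally sensed noisy observation of H_i @ theta
        y = H[i] @ theta_true + 0.05 * rng.standard_normal()
        innov = H[i] * (y - H[i] @ est[i])
        new.append(est[i] + beta * cons + alpha * innov)
    est = new
```

The decaying innovation gain (summable squares, non-summable sum) is what yields consistency in this setting; a constant gain would instead trade asymptotic accuracy for tracking ability.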
On the stability of recursive least squares in the Gauss-Markov model
This exercise provides all eigenvalues and eigenvectors of the autoregressive matrix found in classical recursive least-squares theory.
Keywords: Linear Regression Model, Recursive Least Squares
Recursive least squares for online dynamic identification on gas turbine engines
Online identification for a gas turbine engine is vital for health monitoring and control decisions, because the engine electronic control system uses the identified model to analyze performance for optimization of fuel consumption, response to pilot commands, and engine life protection. Since a gas turbine engine is a complex system operating under varying working conditions, it behaves nonlinearly through different power transition levels and at different operating points. An adaptive approach is therefore required to capture the dynamics of its performance.
