
Stochastic Gradient versus Recursive Least Squares Learning

Abstract

In this paper we perform an in-depth investigation of the relative merits of two adaptive learning algorithms with constant gain, Recursive Least Squares (RLS) and Stochastic Gradient (SG), using the Phelps model of monetary policy as a testing ground. The behavior of the two learning algorithms is very different. RLS is characterized by a very small region of attraction of the Self-Confirming Equilibrium (SCE) under the mean, or averaged, dynamics, and by "escapes", or large-distance movements of the perceived model parameters away from their SCE values. On the other hand, the SCE is stable under the SG mean dynamics in a large region. However, the actual behavior of the SG learning algorithm is divergent for a wide range of constant gain parameters, including those that could be justified as economically meaningful. We explain this discrepancy by examining the structure of the eigenvalues and eigenvectors of the mean dynamics map under SG learning. As a result, we express a warning regarding the real-time behavior of constant gain learning algorithms: if many eigenvalues of the mean dynamics map are close to the unit circle, the stochastic recursive algorithm that describes the actual dynamics under learning might exhibit divergent behavior despite convergent mean dynamics.

Keywords: constant gain adaptive learning, E-stability, recursive least squares, stochastic gradient learning
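The two constant-gain recursions contrasted in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the data-generating process, variable names, gain value, and simulation horizon below are all assumptions chosen only to show the form of the SG update (a fixed-gain gradient step) versus the RLS update (the same step preconditioned by a recursively estimated second-moment matrix R).

```python
import numpy as np

def sg_step(phi, x, y, gain):
    # Constant-gain Stochastic Gradient: fixed-size step along the
    # instantaneous gradient of the squared forecast error.
    return phi + gain * x * (y - x @ phi)

def rls_step(phi, R, x, y, gain):
    # Constant-gain Recursive Least Squares: same error term, but the
    # step is preconditioned by R, a geometrically discounted estimate
    # of the regressor second-moment matrix E[x x'].
    R_new = R + gain * (np.outer(x, x) - R)
    phi_new = phi + gain * np.linalg.solve(R_new, x * (y - x @ phi))
    return phi_new, R_new

# Hypothetical linear data-generating process: y = b_true' x + noise.
rng = np.random.default_rng(0)
b_true = np.array([1.0, -0.5])
gain = 0.02                      # illustrative constant gain
phi_sg = np.zeros(2)
phi_rls = np.zeros(2)
R = np.eye(2)
for _ in range(20000):
    x = rng.normal(size=2)
    y = b_true @ x + 0.1 * rng.normal()
    phi_sg = sg_step(phi_sg, x, y, gain)
    phi_rls, R = rls_step(phi_rls, R, x, y, gain)
```

In this well-conditioned toy setting both recursions hover near the true coefficients; the paper's point is that in the Phelps model, where the relevant eigenvalues of the mean dynamics map lie close to the unit circle, the actual SG path can diverge even though its mean dynamics converge.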
