
    Assessing GMM Estimates of the Federal Reserve Reaction Function

    Estimating a forward-looking monetary policy rule by the Generalized Method of Moments (GMM) has become a popular approach since the influential paper by Clarida, Gali, and Gertler (1998). However, an abundant econometric literature underscores the unappealing small-sample properties of GMM estimators. Focusing on the Federal Reserve reaction function, we assess GMM estimates in the context of monetary policy rules. First, we show that three usual alternative GMM estimators yield substantially different results. Then, we compare the GMM estimates with two Maximum-Likelihood (ML) estimates obtained using a small model of the economy. We use Monte-Carlo simulations to investigate the empirical results. We find that the GMM estimates are biased in small samples, inducing an overestimate of the inflation parameter. The two-step GMM estimates are found to be rather close to the ML estimates. By contrast, iterative and continuous-updating GMM procedures produce more biased and more dispersed estimators.
    Keywords: forward-looking model, monetary policy reaction function, GMM estimator, FIML estimator, small-sample properties of an estimator.
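    The two-step procedure mentioned above can be sketched for a generic linear instrumental-variable model (not the paper's forward-looking policy rule; the function name and data layout are illustrative assumptions):

```python
import numpy as np

def two_step_gmm(y, X, Z):
    """Two-step GMM for the linear moment condition E[Z'(y - X b)] = 0.

    Step 1 uses an identity weighting matrix; step 2 re-weights the
    moments by the inverse of their estimated covariance, which is the
    asymptotically efficient choice.
    """
    n = len(y)
    A, b = Z.T @ X, Z.T @ y
    # Step 1: identity weighting matrix.
    beta1 = np.linalg.solve(A.T @ A, A.T @ b)
    # Step 2: optimal weight built from first-step residuals.
    u = y - X @ beta1
    Zu = Z * u[:, None]
    W = np.linalg.inv(Zu.T @ Zu / n)  # inverse estimated moment covariance
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

    An iterated GMM estimator repeats step 2 until the weighting matrix converges, which is one of the alternative procedures the abstract compares.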

    GMM with Many Moment Conditions

    This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models covering moderate time spans and with correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent LIML estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.
    Keywords: epiconvergence, GMM, irrelevant instruments, IV, large numbers of instruments, LIML estimation, panel models, pseudo true value, signal, signal variability, weak instrumentation.
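    The failure of convergence to the true parameter can be illustrated with a small simulation, assuming a single endogenous regressor and identity-weighted GMM (which reduces to 2SLS here): with one relevant instrument the estimate is close to the truth, while padding the instrument set with many irrelevant columns pulls it toward the inconsistent least-squares fit. All names and the data-generating design below are illustrative assumptions, not the paper's:

```python
import numpy as np

def tsls(y, x, Z):
    """Identity-weighted GMM for one endogenous regressor, i.e. 2SLS:
    project x on the instrument space, then regress y on the projection."""
    x_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x)
    return (x_hat @ y) / (x_hat @ x)

def simulate_bias(n_irrelevant, n=200, reps=200, seed=1):
    """Average estimation error of 2SLS when the single relevant
    instrument is padded with `n_irrelevant` pure-noise instruments."""
    rng = np.random.default_rng(seed)
    err = 0.0
    for _ in range(reps):
        z = rng.standard_normal(n)          # relevant instrument
        u = rng.standard_normal(n)          # structural error
        x = z + u + rng.standard_normal(n)  # endogenous regressor
        y = 1.0 * x + u                     # true coefficient = 1
        Z = np.column_stack([z] + [rng.standard_normal(n)
                                   for _ in range(n_irrelevant)])
        err += tsls(y, x, Z) - 1.0
    return err / reps
```

    As the number of irrelevant instruments approaches the sample size, the projection matrix approaches the identity and the estimate drifts toward the OLS probability limit, matching the "converges, but not to the true parameter" phenomenon described above.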

    Symmetrically normalized instrumental-variable estimation using panel data

    In this paper we discuss the estimation of panel data models with sequential moment restrictions using symmetrically normalized GMM estimators. These estimators are asymptotically equivalent to standard GMM but are invariant to normalization and tend to have a smaller finite sample bias. They also have a very different behaviour compared to standard GMM when the instruments are poor. We study the properties of SN-GMM estimators in relation to GMM, minimum distance and pseudo maximum likelihood estimators for various versions of the AR(1) model with individual effects by means of simulations. The emphasis is not in assessing the value of enforcing particular restrictions in the model; rather, we wish to evaluate the effects in small samples of using alternative estimating criteria that produce asymptotically equivalent estimators for fixed T and large N. Finally, as an empirical illustration, we estimate by SN-GMM employment and wage equations using panels of UK and Spanish firms.
    Keywords: panel data; instrumental variables; symmetric normalization; autoregressive models; employment equations.

    Sliced Wasserstein Distance for Learning Gaussian Mixture Models

    Gaussian mixture models (GMM) are powerful parametric tools with many applications in machine learning and computer vision. Expectation maximization (EM) is the most popular algorithm for estimating the GMM parameters. However, EM guarantees only convergence to a stationary point of the log-likelihood function, which could be arbitrarily worse than the optimal solution. Inspired by the relationship between the negative log-likelihood function and the Kullback-Leibler (KL) divergence, we propose an alternative formulation for estimating the GMM parameters using the sliced Wasserstein distance, which gives rise to a new algorithm. Specifically, we propose minimizing the sliced Wasserstein distance between the mixture model and the data distribution with respect to the GMM parameters. In contrast to the KL-divergence, the energy landscape for the sliced Wasserstein distance is better behaved and therefore more suitable for a stochastic gradient descent scheme to obtain the optimal GMM parameters. We show that our formulation results in parameter estimates that are more robust to random initializations and demonstrate that it can estimate high-dimensional data distributions more faithfully than the EM algorithm.
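    The core quantity in this formulation can be sketched directly: the sliced Wasserstein distance reduces a d-dimensional optimal-transport problem to one-dimensional problems along random projections, and the 1-D 2-Wasserstein distance between equal-size samples is just the mean squared difference of their sorted projections. A minimal numpy sketch of the distance between two sample sets follows (the paper minimizes this over GMM parameters via stochastic gradient descent; the function name and defaults are assumptions):

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=64, seed=0):
    """Monte-Carlo sliced 2-Wasserstein distance between two equal-size
    empirical samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(X.shape[1])
        theta /= np.linalg.norm(theta)  # random unit direction
        # 1-D W2 between projections: match sorted order statistics.
        total += np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2)
    return np.sqrt(total / n_proj)
```

    In the fitting loop described by the abstract, X would be samples drawn from the current mixture model and Y a minibatch of data, with gradients taken through the mixture's sampling reparameterization.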