
    M3 money demand and excess liquidity in the euro area

    Recent empirical studies have found evidence of unstable long-run money demand functions when recent data are used. If the link between money balances and the macroeconomy is fragile, the rationale for monetary aggregates in the ECB strategy is open to doubt. In contrast, we present a "stable" long-run money demand relationship for M3 for the period 1983-2006. To obtain this result, the short-run homogeneity restriction between money and prices is relaxed and a break in the income elasticity of money demand after 2001 is taken into account. Measures of excess liquidity do not indicate significant inflation pressures. The final publication is available at Springer via http://dx.doi.org/10.1007/s11127-010-9679-5. This publication was produced as part of the FINESS project, funded by the European Commission through the 7th Framework Programme under contract no. 217266 (http://www.finess-web.eu/).

    M3 Money Demand and Excess Liquidity in the Euro Area

    Money growth in the euro area has exceeded its target since 2001. Likewise, recent empirical studies have not found evidence in favour of a stable long-run money demand function; the equation appears increasingly unstable as more recent data are used. If the link between money balances and the macroeconomy is fragile, the rationale for monetary aggregates in the ECB strategy is open to doubt. In contrast to the bulk of the literature, we identify a stable long-run money demand relationship for M3 with reasonable long-run behaviour. This finding is robust across different (ML and S2S) estimation methods. To obtain the result, the short-run homogeneity restriction between money and prices is relaxed. In addition, a rise in the income elasticity after 2001 is taken into account; the break might be linked to the introduction of euro coins and banknotes. The monetary overhang and the real money gap do not indicate significant inflation pressures. The corresponding error correction model survives a battery of specification tests.
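    The two-step logic behind such an error correction model — a long-run cointegrating relation between money and income plus a short-run adjustment equation — can be sketched on synthetic data. All series, the income elasticity of 1.4, and the other parameter values below are illustrative assumptions, not the paper's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 200

    # Synthetic data: log real income y_t as a random walk with drift,
    # log real money m_t cointegrated with it (assumed elasticity 1.4).
    y = np.cumsum(rng.normal(0.005, 0.01, T))
    m = 1.4 * y + rng.normal(0, 0.02, T)   # stationary deviation -> cointegration

    # Step 1: estimate the long-run relation m_t = a + b*y_t by OLS.
    X = np.column_stack([np.ones(T), y])
    beta, *_ = np.linalg.lstsq(X, m, rcond=None)
    ecm_resid = m - X @ beta               # deviation from the long run ("overhang")

    # Step 2: short-run error-correction regression
    # dm_t = c + g*ecm_{t-1} + f*dy_t + u_t; g < 0 pulls money back to the long run.
    dm, dy = np.diff(m), np.diff(y)
    Z = np.column_stack([np.ones(T - 1), ecm_resid[:-1], dy])
    coef, *_ = np.linalg.lstsq(Z, dm, rcond=None)

    print(f"long-run income elasticity: {beta[1]:.2f}")
    print(f"error-correction coefficient: {coef[1]:.2f}")  # negative expected
    ```

    A significantly negative error-correction coefficient is what makes the long-run relation "stable" in the sense the abstract uses: deviations of money from its long-run level are gradually corrected rather than persisting.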

    A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data

    Background: Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high-dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees, which are based on orthogonal splits in feature space. Results: We propose to combine the best of both approaches, and evaluated the joint use of a feature selection based on recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, feature selection using the Gini feature importance combined with regularized classification by discriminant partial least squares regression performed as well as or better than filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, or the direct application of the regularized classifiers on the full set of features. Conclusion: The Gini importance of the random forest provided a superior means of measuring feature relevance on spectral data, but, on an optimal subset of features, the regularized classifiers may be preferable to the random forest classifier, in spite of their limitation to modelling linear dependencies only. A feature selection based on Gini importance may therefore precede a regularized linear classification to identify this optimal subset of features, earning the double benefit of dimensionality reduction and elimination of noise from the classification task.
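    A minimal sketch of the recursive elimination step, assuming scikit-learn's RandomForestClassifier as the source of Gini importances; the synthetic data, the halving schedule, and the stopping size are illustrative choices, not the study's actual protocol:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for spectral data: many features, few informative.
    X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                               n_redundant=20, random_state=0)
    active = np.arange(X.shape[1])

    # Recursive feature elimination driven by the forest's Gini importance:
    # each round, refit and drop the less important half of surviving features.
    while active.size > 10:
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(X[:, active], y)
        order = np.argsort(rf.feature_importances_)   # ascending importance
        active = active[order[active.size // 2:]]     # keep the better half

    print("selected features:", np.sort(active))
    ```

    In the combined scheme the abstract describes, the surviving subset would then be handed to a regularized classifier such as discriminant partial least squares, rather than classified by the forest itself.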