
    Testing interval forecasts: a GMM-based approach

    This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and High Density Regions, potentially discontinuous and/or asymmetric. Using a simple J-statistic, based on the moments defined by the orthonormal polynomials associated with the Binomial distribution, this new approach presents several advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test has good small-sample properties. These results are corroborated by an empirical application on the S&P 500 and Nikkei stock market indexes. It confirms that using this GMM test has major consequences for the ex-post evaluation of interval forecasts produced by linear versus nonlinear models.
    Keywords: Interval forecasts, High Density Region, GMM.
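
    The sketch below illustrates the moment-based logic in the simplest possible case: a single moment condition, hits minus the nominal coverage, tested with a GMM-style J-statistic. The paper's actual test uses a vector of orthonormal-polynomial moments so that unconditional coverage, independence and conditional coverage can be tested separately; the function name and the reduction to one moment are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy import stats

        def j_stat_unconditional(hits, coverage):
            """GMM-style J-statistic for the unconditional coverage of an interval forecast.

            hits     : 0/1 array, 1 if the realisation fell inside the forecast interval
            coverage : nominal coverage rate of the interval (e.g. 0.95)

            Under correct coverage E[hits] = coverage, so the moment g_t = hits_t - coverage
            has mean zero and variance coverage*(1 - coverage); the statistic
            J = T * gbar^2 / Var(g) is asymptotically chi-square(1).
            """
            hits = np.asarray(hits, dtype=float)
            T = hits.size
            gbar = hits.mean() - coverage
            J = T * gbar ** 2 / (coverage * (1.0 - coverage))
            return J, stats.chi2.sf(J, df=1)

        # Example: a 95% interval forecast evaluated over 500 days
        rng = np.random.default_rng(0)
        hits = rng.binomial(1, 0.95, size=500)
        print(j_stat_unconditional(hits, 0.95))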

    Multivariate Dynamic Probit Models: An Application to Financial Crises Mutation

    In this paper we propose a multivariate dynamic probit model. Our model can be viewed as a non-linear VAR model for the latent variables associated with correlated binary time series. To estimate it, we implement an exact maximum-likelihood approach, thereby providing a solution to the problem generally encountered in the formulation of multivariate probit models. Our framework allows us to capture dynamics and causality in several ways. Furthermore, we propose an impulse-response analysis for such models. Finally, an empirical application to three financial crises is proposed.
    Keywords: Non-linear VAR, Multivariate dynamic probit models, Exact maximum likelihood, Impulse-response function, Financial crises
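
    As a rough illustration of the data-generating process such a model describes, the simulation below draws two correlated binary series whose latent indices follow a VAR-type recursion in the lagged outcomes. The dynamic specification (lagged binary outcomes in the latent index) and the parameter values are assumptions made for the sketch; the paper's exact specification and its exact maximum-likelihood estimator are not reproduced here.

        import numpy as np

        def simulate_bivariate_dynamic_probit(T, c, A, rho, seed=0):
            """Simulate two correlated binary series from a dynamic probit.

            Latent index : ystar_t = c + A @ y_{t-1} + eps_t, eps_t ~ N(0, Sigma)
            Observation  : y_t = 1{ystar_t > 0}  (element-wise)

            c   : (2,) intercepts
            A   : (2, 2) effects of the lagged binary outcomes (own and cross)
            rho : correlation of the latent shocks
            """
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
            y = np.zeros((T, 2))
            y_prev = np.zeros(2)
            for t in range(T):
                ystar = c + A @ y_prev + L @ rng.standard_normal(2)
                y[t] = (ystar > 0).astype(float)
                y_prev = y[t]
            return y

        # Example: persistent crises in both series, with spillover from the first to the second
        y = simulate_bivariate_dynamic_probit(T=300, c=np.array([-1.5, -1.5]),
                                              A=np.array([[2.0, 0.0], [1.0, 2.0]]), rho=0.4)
        print(y.mean(axis=0))   # unconditional crisis frequencies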

    Trade margins and exchange rate regimes: new evidence from a panel VAR

    This paper studies how trade margins respond to output and terms-of-trade shocks under different exchange rate regimes within a panel of 23 OECD economies over the period 1988-2011. Using a panel VAR model, we confirm the predictions of entry models about the behaviour of export margins over the cycle. In addition, we find marked differences depending on the exchange rate regime. We document that fixed exchange rates have a positive effect on the extensive margin of trade in response to external shocks, while flexible exchange rates have a pro-trade effect in response to output shocks. Our results imply that, as long as extensive margins are a sizeable portion of trade and external shocks are a major source of business cycle variability, the stabilization advantage of flexible exchange rates may be smaller than previously thought.
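
    A bare-bones sketch of the kind of estimation step involved: a panel VAR(1) fitted equation by equation with unit fixed effects removed by the within transformation. This ignores the small-T bias of dynamic fixed-effects estimation and the identification of shocks needed for impulse responses, so it is only an illustration of the setup, not the paper's estimator.

        import numpy as np

        def panel_var1_within(Y):
            """Estimate a panel VAR(1), y_{i,t} = mu_i + B y_{i,t-1} + u_{i,t}, by within OLS.

            Y : array of shape (N, T, K) -- N countries, T periods, K variables
                (e.g. output, terms of trade, extensive margin, intensive margin).
            Returns the (K, K) coefficient matrix B (row k = equation for variable k).
            Note: within OLS is biased for small T (Nickell bias); shown only for illustration.
            """
            X_rows, Z_rows = [], []
            for i in range(Y.shape[0]):
                x = Y[i, :-1, :]                        # regressors: lagged values
                z = Y[i, 1:, :]                         # dependent variables
                X_rows.append(x - x.mean(axis=0))       # demeaning removes the fixed effect mu_i
                Z_rows.append(z - z.mean(axis=0))
            X, Z = np.vstack(X_rows), np.vstack(Z_rows)
            B = np.linalg.lstsq(X, Z, rcond=None)[0]
            return B.T

        # Example with placeholder simulated data: 23 countries, 24 years, 4 variables
        rng = np.random.default_rng(1)
        Y = rng.standard_normal((23, 24, 4))
        print(panel_var1_within(Y).round(2))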

    Which are the SIFIs? : a Component Expected Shortfall (CES) approach to systemic risk

    This paper proposes a component approach to systemic risk which allows one to decompose the risk of the aggregate financial system (measured by its Expected Shortfall, ES) while accounting for firm characteristics. Developed by analogy with the Component Value-at-Risk concept, our new systemic risk measure, called Component ES (CES), presents several advantages. It is a hybrid measure that combines the Too Interconnected To Fail and Too Big To Fail logics. CES relies only on publicly available daily data and encompasses the popular Marginal ES measure. CES can be used to assess the contribution of a firm to systemic risk at a given date, but also to forecast its contribution over a certain period. The empirical application verifies the ability of CES to identify the most systemically risky firms during the 2007-2009 financial crisis. We show that our measure identifies the institutions labeled as SIFIs by the Financial Stability Board.
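
    The decomposition logic can be illustrated with a purely empirical (historical) version of the measure: each firm's contribution is its weight times its average return on the days when the aggregate system is in its tail, so the contributions sum to the aggregate ES. This unconditional sketch is an assumption made for illustration; the paper's measure is built from conditional (forecast) moments rather than the raw historical tail average.

        import numpy as np

        def component_es(firm_returns, weights, alpha=0.05):
            """Empirical Component ES: each firm's share of the aggregate Expected Shortfall.

            firm_returns : (T, N) array of firm returns
            weights      : (N,) market-value weights summing to one
            alpha        : tail probability defining the aggregate ES

            The aggregate ES is the average market return on the alpha-tail days;
            CES_i = w_i * E[r_i | market in its alpha tail], so sum_i CES_i = ES.
            """
            firm_returns = np.asarray(firm_returns, float)
            weights = np.asarray(weights, float)
            market = firm_returns @ weights
            tail = market <= np.quantile(market, alpha)   # days when the system is in distress
            mes = firm_returns[tail].mean(axis=0)         # Marginal ES of each firm
            ces = weights * mes                           # Component ES
            return ces, ces / ces.sum()                   # levels and percentage contributions

        # Example: three firms with equal weights and different volatilities
        rng = np.random.default_rng(2)
        r = rng.multivariate_normal([0.0, 0.0, 0.0],
                                    [[1.0, 0.6, 0.3], [0.6, 1.5, 0.4], [0.3, 0.4, 2.0]],
                                    size=1000) / 100
        print(component_es(r, np.array([1 / 3, 1 / 3, 1 / 3]))[1].round(3))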

    Testing for Granger Non-causality in Heterogeneous Panels

    This paper proposes a very simple test of Granger (1969) non-causality for heterogeneous panel data models. Our test statistic is based on the individual Wald statistics of Granger non-causality averaged across the cross-section units. First, this statistic is shown to converge sequentially to a standard normal distribution. Second, the semi-asymptotic distribution of the average statistic is characterized for a fixed T sample. A standardized statistic based on an approximation of the moments of the Wald statistics is therefore proposed. Third, Monte Carlo experiments show that our standardized panel statistics have very good small-sample properties, even in the presence of cross-sectional dependence.
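
    A compact sketch of the average-Wald idea, using only the large-T standardisation Zbar = sqrt(N/(2K)) * (Wbar - K): one small regression and one Wald statistic per unit, then an average and a normal approximation. The fixed-T version with the approximated moments mentioned in the abstract is not reproduced, and the helper below is an illustrative implementation rather than the authors' code.

        import numpy as np
        from scipy import stats

        def panel_granger_zbar(y, x, K=1):
            """Average-Wald panel test of 'x does not Granger-cause y' (asymptotic version).

            y, x : (N, T) arrays of the two stationary series for N units
            K    : number of lags in each individual regression
            """
            N, T = y.shape
            wald = np.empty(N)
            for i in range(N):
                Yi = y[i, K:]
                Xi = np.column_stack([np.ones(T - K)]
                                     + [y[i, K - j - 1:T - j - 1] for j in range(K)]
                                     + [x[i, K - j - 1:T - j - 1] for j in range(K)])
                beta = np.linalg.lstsq(Xi, Yi, rcond=None)[0]
                u = Yi - Xi @ beta
                sigma2 = u @ u / (T - K - Xi.shape[1])
                V = sigma2 * np.linalg.inv(Xi.T @ Xi)
                b_x, V_x = beta[1 + K:], V[1 + K:, 1 + K:]   # coefficients on the lags of x
                wald[i] = b_x @ np.linalg.solve(V_x, b_x)    # individual Wald statistic
            zbar = np.sqrt(N / (2.0 * K)) * (wald.mean() - K)
            return zbar, 2 * stats.norm.sf(abs(zbar))        # two-sided p-value

        # Example: x Granger-causes y in half of 20 units
        rng = np.random.default_rng(3)
        x, y = rng.standard_normal((20, 50)), rng.standard_normal((20, 50))
        y[:10, 1:] += 0.5 * x[:10, :-1]
        print(panel_granger_zbar(y, x, K=1))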

    Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests

    In this paper we propose a new backtesting tool that examines the quality of Value-at-Risk (VaR) forecasts. To date, the most prominent regression-based backtest, proposed by Engle and Manganelli (2004), relies on a linear model. However, in view of the dichotomous character of the series of violations, a non-linear model seems more appropriate. We therefore propose a new backtest (denoted DB) based on a dynamic binary regression model. Our discrete-choice model, e.g. Probit or Logit, links the sequence of violations to a set of explanatory variables including, in particular, the lagged VaR and the lagged violations. It allows us to separately test the unconditional coverage, independence and conditional coverage hypotheses, and it is easy to implement. Monte Carlo experiments show that the DB test exhibits good small-sample properties in realistic sample settings (5% coverage rate with estimation risk). Finally, an application to a portfolio composed of three assets included in the CAC40 market index is proposed.
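
    The sketch below shows the dynamic-binary idea in its simplest form: a logit of the violation indicator on a constant, the lagged violation and the lagged VaR, with a likelihood-ratio test that the two slope coefficients are jointly zero. The single lag, the logit link and the LR formulation are simplifying assumptions for illustration; the paper's DB statistics for the separate unconditional coverage, independence and conditional coverage hypotheses are not reproduced.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        def db_backtest(hits, var_forecast):
            """Simplified dynamic-binary (logit) backtest of a VaR violation sequence.

            hits         : 0/1 array, 1 when the loss exceeded the VaR forecast
            var_forecast : the corresponding VaR forecasts
            """
            hits = np.asarray(hits, float)
            var_forecast = np.asarray(var_forecast, float)
            # Regressors at date t: constant, violation at t-1, VaR forecast at t-1
            X = sm.add_constant(np.column_stack([hits[:-1], var_forecast[:-1]]))
            full = sm.Logit(hits[1:], X).fit(disp=0)
            null = sm.Logit(hits[1:], np.ones((hits.size - 1, 1))).fit(disp=0)
            lr = 2.0 * (full.llf - null.llf)      # violations should not be predictable under H0
            return lr, stats.chi2.sf(lr, df=2)

        # Example: clustered violations that a correct 5% VaR should not produce
        rng = np.random.default_rng(4)
        T = 1000
        hits = np.zeros(T)
        for t in range(1, T):
            hits[t] = rng.random() < 0.03 + 0.5 * hits[t - 1]
        var_forecast = rng.normal(1.65, 0.10, T)
        print(db_backtest(hits, var_forecast))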

    Parameter Estimation with Out-of-Sample Objective

    We discuss parameter estimation in a situation where the objective is good out-of-sample performance. A discrepancy between the out-of-sample objective and the criterion used for in-sample estimation can seriously degrade performance. Using the same criterion for estimation and evaluation typically ensures that the estimator is consistent for the ideal parameter value; however, this approach need not be optimal. In this paper, we show that the optimal out-of-sample performance is achieved through maximum likelihood estimation (MLE), and that MLE can be vastly better than criterion-based estimation (CBE). This theoretical result is analogous to the well-known Cramér-Rao bound for in-sample estimation. A drawback of MLE is that it suffers from misspecification in two ways. First, the MLE (now a quasi-MLE) is inefficient under misspecification. Second, the MLE approach involves a transformation of likelihood parameters into criterion parameters that depends on the truth, so that misspecification can result in inconsistent estimation, causing MLE to be inferior to CBE. We illustrate the theoretical result in a setting with an asymmetric (linex) loss function, where CBE performs on par with MLE when the loss is close to symmetric, while MLE clearly dominates CBE when the loss is asymmetric. We also illustrate the theoretical result in an application to long-horizon forecasting.
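
    The linex comparison in the abstract can be mimicked with a small Monte Carlo exercise under the simplifying assumption of i.i.d. Gaussian data: MLE estimates the mean and variance and maps them into the loss-optimal forecast mu + a*sigma^2/2, while the criterion-based estimator minimises in-sample linex loss directly (which has a closed form). The setup, parameter values and function names are illustrative, not taken from the paper.

        import numpy as np

        def linex(e, a):
            """Linex loss: exp(a*e) - a*e - 1 (asymmetric whenever a != 0)."""
            return np.exp(a * e) - a * e - 1.0

        def forecast_mle(y, a):
            """Map Gaussian ML estimates into the linex-optimal forecast mu + a*sigma^2/2."""
            return y.mean() + a * y.var() / 2.0

        def forecast_cbe(y, a):
            """Criterion-based estimate: the forecast minimising in-sample linex loss."""
            return np.log(np.mean(np.exp(a * y))) / a

        # Monte Carlo comparison of average out-of-sample linex loss, i.i.d. N(0, 1) data
        rng = np.random.default_rng(5)
        a, n, reps = 1.0, 50, 5000
        loss_mle, loss_cbe = [], []
        for _ in range(reps):
            y_in, y_out = rng.standard_normal(n), rng.standard_normal(10_000)
            loss_mle.append(linex(y_out - forecast_mle(y_in, a), a).mean())
            loss_cbe.append(linex(y_out - forecast_cbe(y_in, a), a).mean())
        print("MLE:", np.mean(loss_mle), " CBE:", np.mean(loss_cbe))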