Econometrics
For more than a decade we have lived in a digitalized world in which many actions in human and economic life are monitored. This produces a continuous stream of new, rich and high-quality data in the form of panels, repeated cross-sections and long time series. These data resources are available to many researchers at low cost. This new era is fascinating for econometricians, who can address many open economic questions. To do so, new models are developed that call for elaborate estimation techniques. Fast personal computers play an integral part in making it possible to deal with this increased complexity.
Option pricing with asymmetric heteroskedastic normal mixture models
This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher-order moments of the risk-neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.
Keywords: asymmetric heteroskedastic models, finite mixture models, option pricing, out-of-sample prediction, statistical fit
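A minimal sketch of the mixture-likelihood idea behind such models, under simplifying assumptions: the sketch fits a static two-component normal mixture to returns by maximum likelihood, whereas the paper's models additionally give each component GARCH-type (heteroskedastic, asymmetric) dynamics. All names and starting values below are illustrative, not the paper's specification.

```python
# Sketch: maximum-likelihood estimation of a static two-component normal
# mixture for returns (the heteroskedastic dynamics of the paper are omitted).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, r):
    # theta = (logit of mixture weight, mu1, mu2, log sigma1, log sigma2)
    w = 1.0 / (1.0 + np.exp(-theta[0]))
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])
    dens = w * norm.pdf(r, mu1, s1) + (1.0 - w) * norm.pdf(r, mu2, s2)
    return -np.sum(np.log(dens + 1e-300))

rng = np.random.default_rng(0)
r = np.where(rng.random(2000) < 0.8,
             rng.normal(0.0005, 0.008, 2000),   # "calm" component
             rng.normal(-0.002, 0.025, 2000))   # "crisis" component
res = minimize(neg_loglik,
               x0=np.array([1.0, 0.0, 0.0, np.log(0.01), np.log(0.02)]),
               args=(r,), method="Nelder-Mead")
print(res.x)  # estimated (logit weight, means, log volatilities)
```

The asymmetric component allows the mixture to generate the negative skewness and fat tails that a single normal cannot, which is what drives the option-pricing improvements reported in the abstract.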
Semiparametric multivariate volatility models
Estimation of multivariate volatility models is usually carried out by quasi-maximum likelihood (QMLE), for which consistency and asymptotic normality have been proven under quite general conditions. However, there may be a substantial efficiency loss of QMLE if the true innovation distribution is not multinormal. We suggest a nonparametric estimation of the multivariate innovation distribution, based on consistent parameter estimates obtained by QMLE. We show that under standard regularity conditions the semiparametric efficiency bound can be attained. Without reparametrizing the conditional covariance matrix (which depends on the particular model used), adaptive estimation is not possible. However, in some cases the efficiency loss of semiparametric estimation with respect to full information maximum likelihood decreases as the dimension increases. In practice, one would like to restrict the class of possible density functions to avoid the curse of dimensionality. One way of doing so is to impose the constraint that the density belongs to the class of spherical distributions, for which we also derive the semiparametric efficiency bound and an estimator that attains this bound. A simulation experiment demonstrates the efficiency gain of the proposed estimator compared with QMLE.
Keywords: multivariate volatility, GARCH, semiparametric efficiency, adaptivity
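A minimal sketch of the two-step idea under stand-in assumptions: standardize returns with conditional covariance estimates from a first-step fit, then estimate the joint innovation density nonparametrically. Here a simple EWMA covariance recursion replaces the QMLE-fitted MGARCH model and a Gaussian kernel estimator replaces the paper's (spherically restricted) semiparametric density estimator; both substitutions are only for illustration.

```python
# Sketch: (1) standardize returns by a conditional covariance estimate,
# (2) estimate the density of the standardized innovations nonparametrically.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
T, N = 1000, 3
returns = rng.standard_t(df=6, size=(T, N)) * 0.01   # fat-tailed toy returns

lam, H = 0.94, np.cov(returns.T)                      # EWMA stands in for MGARCH-QMLE
innovations = np.empty_like(returns)
for t in range(T):
    L = np.linalg.cholesky(H)
    innovations[t] = np.linalg.solve(L, returns[t])   # z_t = H_t^{-1/2} r_t
    H = lam * H + (1 - lam) * np.outer(returns[t], returns[t])

kde = gaussian_kde(innovations.T)                     # nonparametric innovation density
print(kde(np.zeros((N, 1))))                          # density estimate at the origin
```

The second-step density estimate is what feeds the efficient (semiparametric) likelihood; restricting it to spherical densities, as the abstract suggests, keeps the estimation problem one-dimensional in the radius and avoids the curse of dimensionality.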
Asymptotic properties of the Bernstein density copula for dependent data
Copulas are extensively used for dependence modeling. In many cases the data do not reveal how the dependence can be modeled using a particular parametric copula. Nonparametric copulas do not share this problem since they are entirely data based. This paper proposes nonparametric estimation of the density copula for α-mixing data using Bernstein polynomials. We study the asymptotic properties of the Bernstein density copula, i.e., we provide the exact asymptotic bias and variance, and we establish the uniform strong consistency and the asymptotic normality.
Keywords: nonparametric estimation, copula, Bernstein polynomial, α-mixing, asymptotic properties, boundary bias
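A minimal sketch of a bivariate Bernstein density copula estimate, under simplifying assumptions: rank-transform the data into pseudo-observations, compute cell proportions on an m x m grid, and smooth them with Bernstein polynomial weights. The order m plays the role of the bandwidth; its value here is arbitrary, and the data are simulated.

```python
# Sketch: histogram-based Bernstein density copula estimator in two dimensions.
import numpy as np
from scipy.stats import binom, rankdata

def bernstein_copula_density(x, y, u, v, m=10):
    n = len(x)
    s = rankdata(x) / (n + 1)                      # pseudo-observations in (0, 1)
    t = rankdata(y) / (n + 1)
    # proportion of pairs in each cell [k/m, (k+1)/m) x [l/m, (l+1)/m)
    h, _, _ = np.histogram2d(s, t, bins=m, range=[[0, 1], [0, 1]])
    h /= n
    k = np.arange(m)
    pu = binom.pmf(k, m - 1, u)                    # Bernstein weights in u
    pv = binom.pmf(k, m - 1, v)                    # Bernstein weights in v
    return m * m * pu @ h @ pv                     # smoothed copula density at (u, v)

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
print(bernstein_copula_density(z[:, 0], z[:, 1], 0.5, 0.5, m=15))
```

Because the Bernstein weights are genuine densities on [0, 1], the estimate integrates to one and avoids the boundary bias of standard kernel copula estimators, which is one of the properties the keywords highlight.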
A nonparametric copula-based test for conditional independence with applications to Granger causality
This paper proposes a new nonparametric test for conditional independence, which is based on the comparison of Bernstein copula densities using the Hellinger distance. The test is easy to implement because it does not involve a weighting function in the test statistic, and it can be applied in general settings since there is no restriction on the dimension of the data. In fact, to apply the test, only a bandwidth is needed for the nonparametric copula. We prove that the test statistic is asymptotically pivotal under the null hypothesis, establish local power properties, and motivate the validity of the bootstrap technique that we use in finite sample settings. A simulation study illustrates the good size and power properties of the test. We illustrate the empirical relevance of our test by focusing on Granger causality using financial time series data to test for nonlinear leverage versus volatility feedback effects and to test for causality between stock returns and trading volume. In a third application, we investigate Granger causality between macroeconomic variables.
Keywords: nonparametric tests, conditional independence, Granger non-causality, Bernstein density copula, bootstrap, finance, volatility asymmetry, leverage effect, volatility feedback effect, macroeconomics
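A minimal sketch of the Hellinger-distance ingredient of such a test, under simplifying assumptions: it only computes the squared Hellinger distance, H^2 = (1/2) ∫ (√f - √g)^2, between two kernel density estimates on a grid. The actual test contrasts the Bernstein copula density estimated under the null of conditional independence with the unrestricted one and bootstraps the resulting statistic; none of that machinery is shown here.

```python
# Sketch: squared Hellinger distance between two nonparametric density estimates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)        # a dependent toy sample
grid = np.linspace(-4, 4, 401)
step = grid[1] - grid[0]

f = gaussian_kde(x)(grid)                  # density estimate under one model
g = gaussian_kde(y)(grid)                  # density estimate under the other
hellinger_sq = 0.5 * np.sum((np.sqrt(f) - np.sqrt(g)) ** 2) * step
print(hellinger_sq)                        # larger values signal a bigger discrepancy
```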
A comparison of forecasting procedures for macroeconomic series: the contribution of structural break models
This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
Keywords: forecasting, change-points, Markov switching, Bayesian inference
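A minimal sketch of the "simple rolling OLS" benchmark mentioned above, under toy assumptions: one-step-ahead AR(1) forecasts from an expanding window versus a short rolling window for a series with a single break in its mean. The break date, window length and AR(1) specification are illustrative choices, not those of the paper.

```python
# Sketch: expanding-window vs rolling-window AR(1) forecasts around a mean break.
import numpy as np

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 150),     # pre-break regime
                    rng.normal(2.0, 1.0, 150)])    # post-break regime

def ar1_forecast(window):
    # OLS of y_t on (1, y_{t-1}) within the window, then forecast one step ahead
    x, z = window[:-1], window[1:]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta[0] + beta[1] * window[-1]

errors_full, errors_roll = [], []
for t in range(100, len(y) - 1):
    errors_full.append(y[t + 1] - ar1_forecast(y[:t + 1]))        # expanding window
    errors_roll.append(y[t + 1] - ar1_forecast(y[t - 39:t + 1]))  # 40-obs rolling window
print(np.mean(np.square(errors_full)), np.mean(np.square(errors_roll)))
```

After the break, the rolling window discards pre-break observations and adapts faster, which is the intuition behind the finding that simple rolling forecasts are sometimes hard to beat.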
On the forecasting accuracy of multivariate GARCH models
This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy with a particular focus on relatively large scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performances are evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, i.e. the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently from the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns.
Keywords: variance matrix, forecasting, multivariate GARCH, loss function, model confidence set, superior predictive ability
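A minimal sketch of how a variance matrix forecast can be scored against a noisy proxy, under simplifying assumptions: it uses a single symmetric loss (the squared Frobenius norm of the forecast error matrix) and an outer-product-of-returns proxy for the true covariance. The paper instead uses four loss functions with different asymmetry properties and formal MCS/SPA comparisons across 125 models; this only shows the basic loss computation.

```python
# Sketch: Frobenius loss between a covariance forecast and a realized proxy.
import numpy as np

def frobenius_loss(H_forecast, Sigma_proxy):
    # Squared Frobenius norm of the forecast error matrix
    diff = H_forecast - Sigma_proxy
    return np.trace(diff @ diff.T)

rng = np.random.default_rng(5)
H = np.array([[0.04, 0.01],
              [0.01, 0.09]])                           # one-step-ahead forecast
r = rng.multivariate_normal([0, 0], H, size=1)         # realized return vector
proxy = r.T @ r                                        # outer-product proxy for Sigma_t
print(frobenius_loss(H, proxy))
```

Averaging such losses over the evaluation period, model by model, produces the loss series that the MCS and SPA procedures then compare to decide which forecasting models are statistically indistinguishable from the best one.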