
    On the Forecasting Accuracy of Multivariate GARCH Models

    This paper addresses the question of selecting multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the superior predictive ability (SPA) tests. Model performance is evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over- and under-prediction. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods (i.e., the dot-com bubble), the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently of the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns.
    Keywords: variance matrix, forecasting, multivariate GARCH, loss function, model confidence set, superior predictive ability
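    For readers unfamiliar with asymmetric forecast-evaluation losses, the sketch below (not the paper's code; the specific loss functions, toy matrices and scale factors are illustrative assumptions) contrasts a symmetric Frobenius loss with a Stein/QLIKE-type loss, which penalises under-prediction of the variance matrix relatively more than over-prediction.

```python
# Hedged sketch of two matrix loss functions commonly used to rank multivariate
# variance forecasts against a realized covariance proxy (illustrative only).
import numpy as np

def frobenius_loss(sigma_true, sigma_fcst):
    """Symmetric loss: squared Frobenius distance between proxy and forecast."""
    d = sigma_true - sigma_fcst
    return np.sum(d * d)

def stein_loss(sigma_true, sigma_fcst):
    """Asymmetric (QLIKE-type) loss: tr(S F^{-1}) - log det(S F^{-1}) - n."""
    n = sigma_true.shape[0]
    a = sigma_true @ np.linalg.inv(sigma_fcst)
    _, logdet = np.linalg.slogdet(a)
    return np.trace(a) - logdet - n

# Toy comparison: one forecast under-predicts, the other over-predicts
S = np.array([[1.0, 0.3], [0.3, 1.0]])   # realized covariance proxy (hypothetical)
under = 0.5 * S                           # under-prediction
over = 2.0 * S                            # over-prediction
print(frobenius_loss(S, under), frobenius_loss(S, over))  # symmetric loss: driven by absolute errors
print(stein_loss(S, under), stein_loss(S, over))          # asymmetric loss: under-prediction costs more
```
    Losses of this second kind are one way an evaluation can encode the asymmetry between over- and under-prediction discussed in the abstract.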

    Developments in the Analysis of Spatial Data

    Disregarding spatial dependence can invalidate methods for analyzing cross-sectional and panel data. We discuss ongoing work on developing methods that allow for, test for, or estimate spatial dependence. Much of the stress is on nonparametric and semiparametric methods.
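    As a concrete, hedged illustration of testing for spatial dependence (not taken from the paper; the weight matrix and data below are hypothetical), Moran's I is a standard statistic that is near zero in the absence of spatial dependence and positive when neighbouring observations cluster:

```python
# Illustrative sketch: Moran's I for spatial dependence, given a spatial weight matrix W.
import numpy as np

def morans_i(x, W):
    """Moran's I: close to 0 under no spatial dependence, positive under clustering."""
    z = x - x.mean()
    s0 = W.sum()
    n = len(x)
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy example: 4 units on a line with rook-contiguity weights (hypothetical data)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 1.2, 3.0, 3.1])   # similar values sit next to each other
print(morans_i(x, W))                 # a positive value suggests spatial dependence
```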

    Statistical computation with kernels

    Modern statistical inference has seen a tremendous increase in the size and complexity of models and datasets. As such, it has become reliant on advanced computational tools for implementation. A first canonical problem in this area is the numerical approximation of integrals of complex and expensive functions. Numerical integration is required for a variety of tasks, including prediction, model comparison and model choice. A second canonical problem is that of statistical inference for models with intractable likelihoods. These include models with intractable normalisation constants, or models which are so complex that their likelihood cannot be evaluated, but from which data can be generated. Examples include large graphical models, as well as many models in imaging or spatial statistics. This thesis proposes to tackle these two problems using tools from the kernel methods and Bayesian non-parametrics literature. First, we analyse a well-known algorithm for numerical integration called Bayesian quadrature, and provide consistency and contraction rates. The algorithm is then assessed on a variety of statistical inference problems, and extended in several directions in order to reduce its computational requirements. We then demonstrate how the combination of reproducing kernels with Stein's method can lead to computational tools which can be used with unnormalised densities, including numerical integration and approximation of probability measures. We conclude by studying two minimum distance estimators derived from kernel-based statistical divergences which can be used for unnormalised and generative models. In each instance, the tractability provided by reproducing kernels and their properties allows us to provide easily-implementable algorithms whose theoretical foundations can be studied in depth.
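    A minimal sketch of the Bayesian quadrature idea analysed in the thesis, under illustrative assumptions (Gaussian kernel, standard-normal integration measure, hand-picked lengthscale and nodes): the integral estimate is a weighted sum of function evaluations, with weights derived from kernel mean embeddings.

```python
# Hedged sketch of Bayesian quadrature: estimate the integral of f against N(0, 1)
# using a GP(0, k) prior on f; the weights are z^T K^{-1}, where z holds the
# closed-form kernel mean embeddings of N(0, 1) under the Gaussian kernel.
import numpy as np

def bayesian_quadrature(f, nodes, lengthscale=0.8, jitter=1e-8):
    """Estimate the integral of f(x) against N(x; 0, 1)."""
    X = nodes[:, None]
    K = np.exp(-0.5 * (X - X.T) ** 2 / lengthscale ** 2)
    # kernel mean embedding of N(0, 1) under the Gaussian kernel (closed form)
    z = np.sqrt(lengthscale**2 / (lengthscale**2 + 1.0)) * \
        np.exp(-nodes**2 / (2.0 * (lengthscale**2 + 1.0)))
    weights = np.linalg.solve(K + jitter * np.eye(len(nodes)), z)
    return weights @ f(nodes)

nodes = np.linspace(-3.0, 3.0, 15)
print(bayesian_quadrature(lambda x: x**2, nodes))   # true value is 1 (variance of N(0, 1))
```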

    Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint

    Inspired by ideas taken from the machine learning literature, new regularization techniques have recently been introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods even when the latter are equipped with an oracle for model order selection.
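    As a hedged sketch of the kernel-based approach discussed here (not the authors' code; the system, noise level and hyperparameters below are illustrative assumptions), the TC kernel K(i, j) = alpha^max(i, j) encodes both exponential decay (stability) and smoothness of the impulse response, and the regularized least-squares estimate has a closed form:

```python
# Hedged sketch of regularized FIR estimation with the TC kernel (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# True impulse response of a stable first-order system (hypothetical example)
n = 50
g_true = 0.8 ** np.arange(1, n + 1)

# Simulated input/output data: y(t) = sum_k g(k) u(t - k) + noise
T = 200
u = rng.standard_normal(T)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)] for t in range(T)])
y = Phi @ g_true + 0.1 * rng.standard_normal(T)

# TC kernel and closed-form regularized least-squares estimate
alpha, gamma = 0.8, 0.01
idx = np.arange(1, n + 1)
K = alpha ** np.maximum.outer(idx, idx)
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(T), y)

print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))  # relative estimation error
```
    In practice alpha and gamma would themselves be estimated, e.g. by marginal likelihood, which is where the Bayesian interpretation of regularization mentioned above comes in.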

    Functional-input metamodeling: an application to coastal flood early warning

    Floods affect more people than any other natural hazard; in the last decade of the 20th century alone, more than 1.5 billion people were affected. To mitigate the impact of this type of hazard, a significant scientific effort has been devoted to the development of computer codes that can be used as risk management tools. Available computer models now allow coastal flooding events to be modelled properly at a fairly high resolution. Unfortunately, their use remains prohibitive for early warning, with a simulation of a few hours of maritime dynamics taking several hours to days of processing time, even on multi-processor clusters. This thesis is part of the ANR RISCOPE project, which aims at addressing this limitation by means of surrogate modelling of the costly hydrodynamic computer codes. As a particular requirement of this application, the metamodel should be able to deal with functional inputs corresponding to time-varying maritime conditions. To this end, we focused on Gaussian process metamodels, originally developed for scalar inputs but now available also for functional inputs. The nature of the inputs gave rise to a number of questions about the proper way to represent them in the metamodel: (i) which functional inputs are worth keeping as predictors, (ii) which dimension reduction method (e.g., B-splines, PCA, PLS) is ideal, (iii) which is a suitable projection dimension, and (iv) which is a convenient distance for measuring similarities between functional input points within the kernel function. Some of these characteristics of the model - hereafter called structural parameters - and some others, such as the kernel family (e.g., Gaussian, Matérn 5/2), are often chosen arbitrarily a priori, or selected based on other studies. As we have shown through experiments, these decisions can have a strong impact on the prediction capability of the resulting metamodel. Thus, without losing sight of our final goal of contributing to the improvement of coastal flood early warning, we undertook the construction of an efficient methodology to set the structural parameters of the model. As a first solution, we proposed an exploration approach based on the Response Surface Methodology. It was used effectively to tune the metamodel for an analytic toy function, as well as for a simplified version of the code studied in RISCOPE. While relatively simple, the proposed methodology was able to find metamodel configurations of high prediction capability, with savings of up to 76.7% and 38.7% of the time spent by an exhaustive search approach in the analytic case and the coastal flooding case, respectively. The solution found by our methodology was optimal in most cases. We later developed a second prototype based on Ant Colony Optimization (ACO). This new approach is more powerful in terms of solution time and of the flexibility in the model features it can explore: it samples the solution space intelligently and progressively converges towards the optimal configuration. The collection of statistical tools used for metamodelling in this thesis motivated the development of the funGp R package, which is now available on GitHub and about to be submitted to CRAN. In an independent work, we studied the estimation of the covariance parameters of a transformed Gaussian process by Maximum Likelihood (ML) and Cross Validation. We showed that both estimators are consistent and asymptotically normal. In the case of ML, these results can be interpreted as a proof of robustness of Gaussian ML in the case of non-Gaussian processes.
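    To make the modelling setup concrete, here is a hedged sketch (plain NumPy, not the funGp package; the data, projection dimension and kernel settings are illustrative assumptions) of a Gaussian process metamodel whose functional input curves are first reduced by PCA and then compared through a Gaussian kernel on the projected coordinates:

```python
# Hedged sketch: functional-input GP metamodel via PCA projection (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 40 input curves (e.g., tide series over 24 time steps)
# and a scalar output depending on those curves.
n_train, n_time = 40, 24
curves = rng.standard_normal((n_train, n_time)).cumsum(axis=1)
y = curves.mean(axis=1) + 0.5 * curves.max(axis=1) + 0.05 * rng.standard_normal(n_train)

# Dimension reduction: keep the first p principal components of the curves
p = 3
mu = curves.mean(axis=0)
_, _, Vt = np.linalg.svd(curves - mu, full_matrices=False)
proj = (curves - mu) @ Vt[:p].T                    # n_train x p projection scores

def gauss_kernel(A, B, ell=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Zero-mean GP prediction at a new curve (small nugget for numerical stability)
K = gauss_kernel(proj, proj) + 1e-6 * np.eye(n_train)
alpha = np.linalg.solve(K, y - y.mean())

new_curve = rng.standard_normal(n_time).cumsum()
new_proj = ((new_curve - mu) @ Vt[:p].T)[None, :]
y_pred = y.mean() + gauss_kernel(new_proj, proj) @ alpha
print(y_pred[0])
```
    The structural parameters discussed in the abstract (which inputs to keep, the reduction method, the projection dimension p, the distance used in the kernel) correspond exactly to the hard-coded choices made in this toy example.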

    Cleaning large correlation matrices: tools from random matrix theory

    This review covers recent results concerning the estimation of large covariance matrices using tools from Random Matrix Theory (RMT). We introduce several RMT methods and analytical techniques, such as the Replica formalism and Free Probability, with an emphasis on the Marchenko-Pastur equation that provides information on the resolvent of multiplicatively corrupted noisy matrices. Special care is devoted to the statistics of the eigenvectors of the empirical correlation matrix, which turn out to be crucial for many applications. We show in particular how these results can be used to build consistent "Rotationally Invariant" estimators (RIE) for large correlation matrices when there is no prior on the structure of the underlying process. The last part of this review is dedicated to some real-world applications within financial markets as a case in point. We establish empirically the efficacy of the RIE framework, which is found to be superior in this case to all previously proposed methods. The case of additively (rather than multiplicatively) corrupted noisy matrices is also dealt with in a special Appendix. Several open problems and interesting technical developments are discussed throughout the paper.
    Comment: 165 pages; article submitted to Physics Reports
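    A hedged sketch of one of the simplest RMT-based cleaning schemes, eigenvalue clipping at the Marchenko-Pastur edge (this is not the RIE estimator advocated in the review; the pure-noise data and dimensions are illustrative assumptions):

```python
# Hedged sketch: clip eigenvalues of an empirical correlation matrix below the
# Marchenko-Pastur edge, treating them as noise (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

N, T = 100, 400                        # number of assets, number of observations
q = N / T
lam_plus = (1.0 + np.sqrt(q)) ** 2     # upper edge of the Marchenko-Pastur bulk

X = rng.standard_normal((T, N))        # pure-noise returns; real data would go here
E = np.corrcoef(X, rowvar=False)       # empirical correlation matrix

w, V = np.linalg.eigh(E)
noise = w < lam_plus
w_clean = w.copy()
w_clean[noise] = w[noise].mean()       # flatten the noise bulk, preserving the trace
C = V @ np.diag(w_clean) @ V.T

# Rescale to unit diagonal so the result is again a correlation matrix
d = np.sqrt(np.diag(C))
C = C / np.outer(d, d)

print(w.max(), lam_plus)               # largest empirical eigenvalue vs the MP edge
```
    The RIE framework discussed in the review refines this idea by shrinking each eigenvalue individually, using information about the overlap between empirical and true eigenvectors, rather than flattening the whole bulk.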

    Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    This paper deals with the estimation of a high-dimensional covariance matrix with a conditional sparsity structure and fast-diverging eigenvalues. By assuming a sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights into when factor analysis is approximately the same as principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real-data application on portfolio allocation is presented.
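    A hedged sketch of a POET-style estimator (the factor number, threshold level and simulated data below are illustrative assumptions, not the paper's choices): keep the leading principal components as the factor part and soft-threshold the principal orthogonal complement, leaving its diagonal untouched.

```python
# Hedged sketch of POET-style covariance estimation (illustrative only).
import numpy as np

def poet(X, K, tau):
    """X: T x p data matrix; K: number of factors; tau: soft-threshold for residuals."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    w, V = w[::-1], V[:, ::-1]                     # sort eigenvalues in descending order
    factor_part = (V[:, :K] * w[:K]) @ V[:, :K].T  # low-rank (factor) component
    R = S - factor_part                            # principal orthogonal complement
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))            # keep residual variances untouched
    return factor_part + R_thr

rng = np.random.default_rng(3)
T, p, K = 300, 50, 2
B = rng.standard_normal((p, K))                    # hypothetical factor loadings
F = rng.standard_normal((T, K))                    # latent factors
X = F @ B.T + rng.standard_normal((T, p))          # factor model with diagonal error covariance
Sigma_hat = poet(X, K=2, tau=0.1)
print(Sigma_hat.shape)
```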