
    Do We Really Need Both BEKK and DCC? A Tale of Two Covariance Models

    Large and very large portfolios of financial assets are routine for many individuals and organizations. The two most widely used models of conditional covariances and correlations are BEKK and DCC. BEKK suffers from the archetypal "curse of dimensionality", whereas DCC does not. This is a misleading interpretation of the suitability of the two models for use in practice. The primary purposes of the paper are to define targeting as an aid in estimating matrices associated with large numbers of financial assets; to analyze the similarities and dissimilarities between BEKK and DCC, both with and without targeting, on the basis of their structural derivation, the analytical forms of the sufficient conditions for the existence of moments, the sufficient conditions for consistency and asymptotic normality, and computational tractability for very large (that is, ultra high) numbers of financial assets; to present a consistent two-step estimation method for the DCC model; and to determine whether BEKK or DCC should be preferred in practical applications.
    Keywords: conditional correlations; conditional covariances; diagonal models; forecasting; generalized models; Hadamard models; scalar models; targeting.
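
    For concreteness (standard notation, not taken from the abstract itself): targeting replaces the intercept matrix in each recursion with a sample moment matrix, so that only the scalar dynamic parameters remain to be estimated. A minimal sketch of the scalar forms:

        % Scalar BEKK with covariance targeting, \bar{H} the sample covariance of \varepsilon_t
        H_t = (1 - a - b)\,\bar{H} + a\,\varepsilon_{t-1}\varepsilon_{t-1}' + b\,H_{t-1}

        % Scalar DCC with correlation targeting, \eta_t the standardized residuals,
        % \bar{Q} their sample second moment
        Q_t = (1 - \alpha - \beta)\,\bar{Q} + \alpha\,\eta_{t-1}\eta_{t-1}' + \beta\,Q_{t-1},
        \qquad R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}

    The diagonal, Hadamard, and generalized variants discussed in the paper replace the scalars a, b, \alpha, \beta with matrix-valued parameters.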

    Do We Really Need Both BEKK and DCC? A Tale of Two Multivariate GARCH Models

    The management and monitoring of very large portfolios of financial assets are routine for many individuals and organizations. The two most widely used models of conditional covariances and correlations in the class of multivariate GARCH models are BEKK and DCC. It is well known that BEKK suffers from the archetypal "curse of dimensionality", whereas DCC does not. It is argued in this paper that this is a misleading interpretation of the suitability of the two models for use in practice. The primary purpose of this paper is to analyze the similarities and dissimilarities between BEKK and DCC, both with and without targeting, on the basis of the structural derivation of the models, the availability of analytical forms for the sufficient conditions for existence of moments, sufficient conditions for consistency and asymptotic normality of the appropriate estimators, and computational tractability for ultra large numbers of financial assets. Based on theoretical considerations, the paper sheds light on how to discriminate between BEKK and DCC in practical applications.
    Keywords: forecasting; conditional correlations; conditional covariances; diagonal models; generalized models; Hadamard models; scalar models; targeting; asymptotic theory.
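
    To make the two-step idea concrete, here is a minimal numpy/scipy sketch of the second stage of scalar DCC estimation under a Gaussian quasi-likelihood. It assumes the first stage (fitting a univariate GARCH model to each asset and standardizing the returns) has already produced the T x N residual matrix eta; the names dcc_neg_loglik and fit_dcc are illustrative, not from the paper.

        import numpy as np
        from scipy.optimize import minimize

        def dcc_neg_loglik(params, eta):
            # Negative second-stage quasi-log-likelihood of scalar DCC.
            # eta: (T, N) standardized residuals from first-stage GARCH fits.
            a, b = params
            if a < 0 or b < 0 or a + b >= 1:      # positivity / stationarity region
                return np.inf
            T, _ = eta.shape
            Qbar = eta.T @ eta / T                # correlation targeting matrix
            Q = Qbar.copy()
            ll = 0.0
            for t in range(T):
                d = np.sqrt(np.diag(Q))
                R = Q / np.outer(d, d)            # rescale Q_t to a correlation matrix R_t
                e = eta[t]
                _, logdet = np.linalg.slogdet(R)
                ll -= 0.5 * (logdet + e @ np.linalg.solve(R, e) - e @ e)
                Q = (1 - a - b) * Qbar + a * np.outer(e, e) + b * Q   # update to Q_{t+1}
            return -ll

        def fit_dcc(eta):
            # Second-stage quasi-maximum likelihood estimate of (a, b).
            res = minimize(dcc_neg_loglik, x0=np.array([0.05, 0.90]),
                           args=(eta,), method="Nelder-Mead")
            return res.x

    Each time step costs one N x N rescaling, one linear solve, and two rank-one updates, and the first-stage univariate fits are independent across assets, which is why the two-step approach stays tractable as the number of assets grows.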

    "Do We Really Need Both BEKK and DCC? A Tale of Two Multivariate GARCH Models"

    Get PDF
    The management and monitoring of very large portfolios of financial assets are routine for many individuals and organizations. The two most widely used models of conditional covariances and correlations in the class of multivariate GARCH models are BEKK and DCC. It is well known that BEKK suffers from the archetypal "curse of dimensionality", whereas DCC does not. It is argued in this paper that this is a misleading interpretation of the suitability of the two models for use in practice. The primary purpose of this paper is to analyze the similarities and dissimilarities between BEKK and DCC, both with and without targeting, on the basis of the structural derivation of the models, the availability of analytical forms for the sufficient conditions for existence of moments, sufficient conditions for consistency and asymptotic normality of the appropriate estimators, and computational tractability for ultra large numbers of financial assets. Based on theoretical considerations, the paper sheds light on how to discriminate between BEKK and DCC in practical applications.

    Estimation in nonlinear time series models

    A general framework for analyzing estimates in nonlinear time series models is developed. General conditions for strong consistency and asymptotic normality are derived for both conditional least squares and maximum likelihood type estimates. Ergodic strictly stationary processes are studied in the first part of the paper and certain nonstationary processes in the last part. Examples are taken from most of the usual classes of nonlinear time series models.
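
    As a reminder of the setup (standard notation, assumed here rather than quoted from the abstract), the conditional least squares estimate minimizes the one-step-ahead prediction errors,

        \hat{\theta}_n = \arg\min_{\theta} \sum_{t=1}^{n}
            \bigl( X_t - \mathbb{E}_{\theta}[\, X_t \mid \mathcal{F}_{t-1} \,] \bigr)^2 ,

    and the consistency and asymptotic normality results give conditions under which \sqrt{n}\,(\hat{\theta}_n - \theta_0) converges in distribution to a normal limit.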

    Residual Component Analysis of Hyperspectral Images - Application to Joint Nonlinear Unmixing and Nonlinearity Detection

    This paper presents a nonlinear mixing model for joint hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are linear combinations of known pure spectral components corrupted by an additional nonlinear term affecting the endmembers and contaminated by additive Gaussian noise. A Markov random field is considered for nonlinearity detection based on the spatial structure of the nonlinear terms. The observed image is segmented into regions where nonlinear terms, if present, share similar statistical properties. A Bayesian algorithm is proposed to estimate the parameters involved in the model, yielding a joint nonlinear unmixing and nonlinearity detection algorithm. The performance of the proposed strategy is first evaluated on synthetic data. Simulations conducted with real data show the accuracy of the proposed unmixing and nonlinearity detection strategy for the analysis of hyperspectral images.
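
    In generic form (standard unmixing notation, assumed rather than quoted from the paper), such a residual-component model writes each observed pixel spectrum as

        y_p = M a_p + \phi_p + e_p, \qquad a_p \succeq 0,\ \mathbf{1}^{\top} a_p = 1,\ e_p \sim \mathcal{N}(0, \sigma^2 I),

    where M collects the known endmember spectra, a_p the abundances, and \phi_p the additive nonlinear term; detection then amounts to deciding, pixel by pixel, between \phi_p = 0 (linear mixing) and \phi_p \neq 0, with the Markov random field prior favouring spatially coherent decisions.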

    Realized volatility

    Realized volatility is a nonparametric ex-post estimate of the return variation. The most obvious realized volatility measure is the sum of finely sampled squared return realizations over a fixed time interval. In a frictionless market the estimate achieves consistency for the underlying quadratic return variation when returns are sampled at increasingly higher frequency. We begin with an account of how and why the procedure works in a simplified setting and then extend the discussion to a more general framework. Along the way we clarify how the realized volatility and quadratic return variation relate to the more commonly applied concept of conditional return variance. We then review a set of related and useful notions of return variation along with practical measurement issues (e.g., discretization error and microstructure noise) before briefly touching on the existing empirical applications.
    Keywords: stochastic analysis.
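
    The basic estimator is simple to state (standard notation, not quoted from the chapter): with intraday log-prices p_{t_0}, \dots, p_{t_n} observed over a fixed interval [0, T],

        \mathrm{RV}^{(n)} = \sum_{i=1}^{n} \bigl( p_{t_i} - p_{t_{i-1}} \bigr)^2
            \ \xrightarrow{\,p\,}\ [p]_T \quad \text{as } n \to \infty,

    where [p]_T is the quadratic variation; for a continuous semimartingale price this limit equals the integrated variance \int_0^T \sigma_s^2 \, ds, while microstructure noise breaks the approximation at the very highest sampling frequencies, which is what the measurement issues mentioned above address.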

    Nonparametric estimation of mean-squared prediction error in nested-error regression models

    Nested-error regression models are widely used for analyzing clustered data. For example, they are often applied to two-stage sample surveys, and in biology and econometrics. Prediction is usually the main goal of such analyses, and mean-squared prediction error is the main way in which prediction performance is measured. In this paper we suggest a new approach to estimating mean-squared prediction error. We introduce a matched-moment, double-bootstrap algorithm, enabling the notorious underestimation of the naive mean-squared error estimator to be substantially reduced. Our approach does not require specific assumptions about the distributions of errors. Additionally, it is simple and easy to apply. This is achieved through using Monte Carlo simulation to implicitly develop formulae which, in a more conventional approach, would be derived laboriously by mathematical arguments.
    Supported in part by NSF Grant SES-03-18184.
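
    For context (standard small-area notation, and a generic double-bootstrap bias correction rather than the paper's specific matched-moment construction), the nested-error regression model is

        y_{ij} = x_{ij}^{\top} \beta + u_i + e_{ij}, \qquad u_i \sim (0, \sigma_u^2),\ e_{ij} \sim (0, \sigma_e^2),

    and if \widehat{m}_1 denotes a first-level bootstrap estimate of the mean-squared prediction error and \widehat{m}_2 its second-level analogue, the classical double-bootstrap bias correction takes the form \widehat{m} = 2\,\widehat{m}_1 - \widehat{m}_2, trading extra simulation for the analytical bias corrections that would otherwise be derived by hand.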

    Bayesian nonlinear hyperspectral unmixing with spatial residual component analysis

    This paper presents a new Bayesian model and algorithm for nonlinear unmixing of hyperspectral images. The model proposed represents the pixel reflectances as linear combinations of the endmembers, corrupted by nonlinear (with respect to the endmembers) terms and additive Gaussian noise. Prior knowledge about the problem is embedded in a hierarchical model that describes the dependence structure between the model parameters and their constraints. In particular, a gamma Markov random field is used to model the joint distribution of the nonlinear terms, which are expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to compute the Bayesian estimates of interest and perform Bayesian inference. This algorithm is equipped with a stochastic optimisation adaptation mechanism that automatically adjusts the parameters of the gamma Markov random field by maximum marginal likelihood estimation. Finally, the proposed methodology is demonstrated through a series of experiments on synthetic and real data, with comparisons against competing state-of-the-art approaches.
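
    The adaptation mechanism mentioned at the end can be sketched generically (assumed notation, not quoted from the paper) as a stochastic approximation update run alongside the sampler,

        \theta_{k+1} = \theta_k + \gamma_k \, \widehat{\nabla_{\theta} \log p(\mathbf{y} \mid \theta_k)},

    where the marginal-likelihood gradient is estimated from the current MCMC samples of the latent variables and \gamma_k is a decreasing step size, so the gamma Markov random field hyperparameters move toward their maximum marginal likelihood values while the chain explores the posterior.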