2,501 research outputs found

    Realized volatility: a review

    This paper reviews the exciting and rapidly expanding literature on realized volatility. After presenting a general univariate framework for estimating realized volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized.
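The basic univariate object in this literature can be illustrated with a minimal sketch: realized variance as the sum of squared intraday log returns. The price path below is hypothetical, and this plain estimator ignores the microstructure-noise corrections the review surveys.

```python
import numpy as np

def realized_volatility(prices):
    """Daily realized variance: sum of squared intraday log returns."""
    log_returns = np.diff(np.log(prices))
    return np.sum(log_returns ** 2)

# Hypothetical intraday price observations for one trading day.
prices = np.array([100.0, 100.5, 99.8, 100.2, 100.9])
rv = realized_volatility(prices)
```

As sampling frequency increases, this sum converges to the integrated variance in the noise-free continuous-time setting; with microstructure noise, it diverges, which motivates the consistent estimators the review compares.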

    Vast volatility matrix estimation for high-frequency financial data

    High-frequency data observed on the prices of financial assets are commonly modeled by diffusion processes with micro-structure noise, and realized volatility-based methods are often used to estimate integrated volatility. For problems involving a large number of assets, the estimation objects we face are volatility matrices of large size. The existing volatility estimators work well for a small number of assets but perform poorly when the number of assets is very large. In fact, they are inconsistent when both the number, p, of the assets and the average sample size, n, of the price data on the p assets go to infinity. This paper proposes a new type of estimator for the integrated volatility matrix and establishes asymptotic theory for the proposed estimators in a framework that allows both n and p to approach infinity. The theory shows that the proposed estimators achieve high convergence rates under a sparsity assumption on the integrated volatility matrix. The numerical studies demonstrate that the proposed estimators perform well for large p and complex price and volatility models. The proposed method is applied to real high-frequency financial data.
    Comment: Published at http://dx.doi.org/10.1214/09-AOS730 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
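A common way to exploit a sparsity assumption like the one above is hard thresholding of small off-diagonal covariance entries. The sketch below is a generic illustration on synthetic daily returns, not the paper's estimator (which operates on noisy high-frequency prices); the threshold value is an arbitrary choice for the example.

```python
import numpy as np

def threshold_covariance(returns, tau):
    """Sample covariance with hard thresholding of small off-diagonal
    entries, enforcing sparsity in the estimated volatility matrix."""
    S = np.cov(returns, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    # Keep the diagonal (individual variances) intact.
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(0)
R = rng.normal(size=(250, 50))   # n = 250 days, p = 50 assets
Sigma_hat = threshold_covariance(R, tau=0.1)
```

Under sparsity, zeroing entries below a suitably chosen threshold removes many noisy small estimates at once, which is what allows consistency even when p grows with n.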

    Portfolio Construction with Hierarchical Momentum

    This paper presents a portfolio construction approach that combines hierarchical clustering of a large asset universe with stock price momentum. On the one hand, investing in high-momentum stocks stabilizes portfolio performance across economic regimes and enhances risk-adjusted returns. On the other hand, hierarchical clustering of a high-dimensional asset universe ensures sparse diversification and mitigates the problems of increased drawdowns and large turnovers typically present in momentum portfolios. Moreover, the proposed portfolio construction approach avoids covariance matrix inversion. An out-of-sample backtest on a non-survivorship-biased dataset of international stocks shows that hierarchical-momentum portfolios achieve substantially improved cumulative and risk-adjusted portfolio returns as well as decreased portfolio drawdowns compared to the model-free benchmarks, net of transaction costs. Furthermore, we demonstrate that the unique characteristics of the hierarchical-momentum portfolios arise due to both dimensionality reduction via clustering and momentum-based stock selection.
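A minimal sketch of the two ingredients, correlation-based hierarchical clustering plus momentum-based selection, might look as follows. The linkage method, the simple cumulative-return momentum signal, and the equal weighting across clusters are illustrative choices, not the paper's exact specification.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def hierarchical_momentum_weights(returns, n_clusters=3):
    """Cluster assets by correlation distance, then pick the
    highest-momentum asset in each cluster and equal-weight the picks.
    No covariance matrix inversion is needed."""
    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, 1.0))
    Z = linkage(squareform(dist, checks=False), method="ward")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    momentum = returns.sum(axis=0)            # cumulative return as momentum
    weights = np.zeros(returns.shape[1])
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        weights[idx[np.argmax(momentum[idx])]] = 1.0 / n_clusters
    return weights

rng = np.random.default_rng(1)
R = rng.normal(size=(120, 10))                # 120 days, 10 assets
w = hierarchical_momentum_weights(R, n_clusters=3)
```

Selecting one asset per correlation cluster is what gives the sparse diversification the abstract refers to: picks are spread across dissimilar groups rather than concentrated in one correlated block.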

    Dynamic portfolio optimization with inverse covariance clustering

    Market conditions change continuously. However, in portfolio investment strategies, it is hard to account for this intrinsic non-stationarity. In this paper, we propose to address this issue by using the Inverse Covariance Clustering (ICC) method to identify inherent market states and then integrate such states into a dynamic portfolio optimization process. Extensive experiments across three different markets, NASDAQ, FTSE and HS300, over a period of ten years, demonstrate the advantages of our proposed algorithm, termed Inverse Covariance Clustering-Portfolio Optimization (ICC-PO). The core of the ICC-PO methodology concerns the identification and clustering of market states from the analytics of past data and the forecasting of the future market state. It is therefore agnostic to the specific portfolio optimization method of choice. By applying the same portfolio optimization technique to an ICC temporal cluster, instead of the whole training period, we show that one can generate portfolios with substantially higher Sharpe Ratios, which are statistically more robust and resilient, with great reductions in the maximum loss in extreme situations. This is shown to be consistent across markets, periods, optimization methods and selections of portfolio assets.
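The core idea, fitting the chosen optimizer only on observations from the relevant market state rather than the full training window, can be sketched as follows. Minimum-variance weights are used purely as an example optimizer (ICC-PO is agnostic to this choice), and the state labels are assumed to be given by a prior clustering step.

```python
import numpy as np

def state_conditional_min_variance(returns, state_labels, target_state):
    """Minimum-variance weights estimated only on the observations
    assigned to one market state, instead of the whole training window."""
    R = returns[state_labels == target_state]
    Sigma = np.cov(R, rowvar=False)
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)   # unnormalized min-variance solution
    return w / w.sum()

rng = np.random.default_rng(2)
returns = rng.normal(size=(300, 5))          # 300 days, 5 assets
labels = rng.integers(0, 2, size=300)        # hypothetical two-state labeling
w = state_conditional_min_variance(returns, labels, target_state=0)
```

Conditioning the covariance estimate on a single state is what lets the optimizer reflect the forecast regime rather than an average over heterogeneous regimes.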

    Bayesian forecasting and scalable multivariate volatility analysis using simultaneous graphical dynamic models

    The recently introduced class of simultaneous graphical dynamic linear models (SGDLMs) defines an ability to scale on-line Bayesian analysis and forecasting to higher-dimensional time series. This paper advances the methodology of SGDLMs, developing and embedding a novel, adaptive method of simultaneous predictor selection in forward filtering for on-line learning and forecasting. The advances include developments in Bayesian computation for scalability and a case study exploring the resulting potential for improved short-term forecasting of large-scale volatility matrices. The case study concerns financial forecasting and portfolio optimization with a 400-dimensional series of daily stock prices. Analysis shows that the SGDLM forecasts volatilities and co-volatilities well, making it ideally suited to contributing to quantitative investment strategies that improve portfolio returns. We also identify performance metrics linked to the sequential Bayesian filtering analysis that turn out to define a leading indicator of increased financial market stress, comparable to but leading the standard St. Louis Fed Financial Stress Index (STLFSI) measure. Parallel computation using GPU implementations substantially advances the ability to fit and use these models.
    Comment: 28 pages, 9 figures, 7 tables
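SGDLMs couple many univariate dynamic linear models through simultaneous parental predictors. The forward-filtering recursion that underlies on-line learning in this family can be illustrated, for a single local-level series, with a standard Kalman-style update; this is a simplified univariate sketch, not the paper's multivariate model, and the variance settings are arbitrary.

```python
import numpy as np

def dlm_forward_filter(y, v=1.0, w=0.1):
    """Forward filtering for a local-level dynamic linear model:
    observation y_t = theta_t + noise(v), state theta_t evolves with
    innovation variance w. Returns the sequence of filtered means."""
    m, C = 0.0, 1e6              # diffuse prior on the level
    means = []
    for obs in y:
        R = C + w                # prior variance after state evolution
        Q = R + v                # one-step forecast variance
        A = R / Q                # adaptive (Kalman) gain
        m = m + A * (obs - m)    # update mean toward the new observation
        C = A * v                # posterior variance
        means.append(m)
    return np.array(means)

filtered = dlm_forward_filter(np.full(200, 5.0))
```

Each asset in an SGDLM runs a recursion of this flavor, with the cross-series structure entering through the simultaneous predictors; the GPU parallelism mentioned above exploits the fact that these per-series updates decouple.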

    Non Stationarity and Market Structure Dynamics in Financial Time Series

    This thesis is an investigation of the time-changing nature of financial markets. Financial markets are complex systems having an intrinsic structure defined by the interplay of several variables. The technological advancements of the ’digital age’ have exponentially increased the amount of data available to financial researchers and industry professionals over the last decade and, as a consequence, have highlighted the key role of interactions amongst variables. A critical characteristic of the financial system, however, is its time-changing nature: the multivariate structure of the system changes and evolves through time. This feature is critically relevant for classical statistical assumptions and has proven challenging to investigate and research. This thesis is devoted to the investigation of this property, providing evidence on the time-changing nature of the system, analysing the implications for traditional asset allocation practices and proposing a novel methodology to identify and predict ‘market states’. First, I analyse how classical model estimations are affected by time and what the consequent effects are on classical portfolio construction techniques. Focusing on elliptical models of daily returns, I present experiments on both in-sample and out-of-sample likelihood of individual observations and show that the system changes significantly through time. Larger estimation windows lead to stable likelihood in the long run, but at the cost of lower likelihood in the short term. A key implication of these findings is that the optimality of fit in finance needs to be defined in terms of the holding period. In this context, I also show that sparse models and information filtering substantially mitigate the effects of non-stationarity, avoiding the typical pitfalls of conventional portfolio optimization approaches.
Having assessed and documented the time-changing nature of the financial system, I propose a novel methodology to segment financial time series into market states, which we call ICC - Inverse Covariance Clustering. The ICC methodology allows one to study the evolution of the multivariate structure of the system by segmenting the time series based on their correlation structure. In the ICC framework, market states are identified by a reference sparse precision matrix and a vector of expectation values. In the estimation procedure, each multivariate observation is associated with a market state according to the minimisation of a penalized distance measure (e.g. likelihood, Mahalanobis distance). The procedure is made computationally very efficient and can be used with a large number of assets. Furthermore, the ICC methodology allows one to control for temporal consistency, making it of high practical relevance for trading systems. I present a set of experiments investigating the features of the discovered clusters and comparing them to standard clustering techniques. I show that the ICC methodology is successful at clustering different states of the markets in an unsupervised manner, outperforming standard baseline models. Further, I show that the procedure can be efficiently used to forecast out-of-sample future market states with significant prediction accuracy. Lastly, I test the significance of an increasing number of states used to model equity returns and how this parameter relates to the number of observations and the time consistency of the states. I present experiments to investigate a) the likelihood of the overall model as more states are spanned, and b) the relevance of additional regimes as measured by the number of observations clustered. I find that the number of “market states” that optimally defines the system increases with the time spanned and the number of observations considered.
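The state-assignment step described above can be sketched as follows, using a Mahalanobis distance plus a switching penalty for temporal consistency. This is a simplified illustration: the state means and precision matrices are assumed to be already estimated, the penalty scheme is schematic, and the alternating re-estimation loop of the full ICC procedure is omitted.

```python
import numpy as np

def assign_states(X, means, precisions, gamma=0.0, prev=None):
    """Assign each observation to the market state minimising the
    Mahalanobis distance, plus a penalty gamma for switching away from
    the previously assigned state (temporal consistency)."""
    labels = []
    last = prev
    for x in X:
        # Squared Mahalanobis distance to each state's reference.
        d = np.array([(x - mu) @ P @ (x - mu)
                      for mu, P in zip(means, precisions)])
        if last is not None and gamma > 0:
            d += gamma * (np.arange(len(means)) != last)
        last = int(np.argmin(d))
        labels.append(last)
    return np.array(labels)

# Two hypothetical states with well-separated means.
means = [np.zeros(2), np.full(2, 5.0)]
precisions = [np.eye(2), np.eye(2)]
X = np.array([[0.1, -0.2], [5.1, 4.9], [0.0, 0.3]])
labels = assign_states(X, means, precisions)
```

Raising gamma discourages rapid state switching, which is the "temporal consistency" control that makes the segmentation usable in trading systems.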