
    Volatility modeling and analysis via coupled Wishart process

    University of Technology, Sydney. Faculty of Engineering and Information Technology.
    Volatility is a measure of the price fluctuation of a specific financial instrument over time. It is an important factor that can greatly influence investors' decisions, and it concerns every participant in the stock market. High volatility implies great instability and tends to increase liquidity, whereas low volatility indicates low market activity. Hence research on volatility draws great attention and interest from researchers of different backgrounds, and incorporating methods from data mining and machine learning is essential to improving the quality of volatility analysis. There are two main types of models for volatility analysis: deterministic models and stochastic models. Deterministic models assume that the volatility at a particular time is a deterministic function of the past; the generalized autoregressive conditional heteroskedasticity (GARCH) model and its variations fall into this category. Stochastic volatility (SV) models assume that the volatility follows a certain random process. Recent literature has shown that stochastic models outperform deterministic models to some extent, and among them the Wishart process is a popular tool for modeling multivariate volatility. However, the stock market is closely connected with society and human behavior, which makes it difficult to model. Almost all existing models assume independence between the target objects: the prices, or the hidden covariance matrices behind them. This assumption works well for rough analysis or when the relationship between objects is weak; for more rigorous research, the coupling relationship must be taken into account. In this thesis, we present two kinds of coupled Wishart processes for modeling volatility, the homogeneous coupled Wishart process and the heterogeneous coupled Wishart process, and we develop corresponding algorithms based on these models. The homogeneous coupled Wishart process refers to a model whose target objects belong to the same category. A two-chain coupled Wishart process is introduced in this thesis; within such a model, the matrix in one chain is related not only to the past matrix from its own chain but also to those from its neighbors. After deriving its learning procedures, we test the model on synthetic data and then run experiments with real data from two markets: one chain represents the volatility of the U.S. stock market and the other that of the Hong Kong stock market. The second model is the heterogeneous coupled Wishart process. Unlike the homogeneous case, in this model the covariance matrices are coupled with vectors, scalars, or even a whole system; we aim to model how outside influences from other kinds of data affect the evolution of the covariance matrices. Owing to time limitations, we use a simplified setup to illustrate how the heterogeneous coupling works, construct the learning algorithm based on this setup, and test it on synthetic data. To conclude, we incorporate the notion of coupling into the analysis of volatility via Wishart processes, using machine learning techniques. Extensive experiments demonstrate the effectiveness of coupling in volatility analysis.
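    As a rough illustration of the two-chain idea, the sketch below simulates a discrete-time coupled Wishart process in which each chain's scale matrix mixes its own lagged covariance with its neighbor's. The transition form and the parameters nu, A, and B are assumptions for illustration, not the thesis's actual specification.

```python
# A minimal sketch of a two-chain coupled Wishart process, assuming a
# discrete-time Wishart autoregression: Sigma_t | past ~ W(nu, S_t) with
# S_t = (A Sigma_own A' + B Sigma_other B') / nu, so that the conditional
# mean mixes both chains. nu, A, B are illustrative choices.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, nu, T = 3, 10, 200          # dimension, degrees of freedom, horizon
A = 0.9 * np.eye(p)            # own-chain persistence (assumed)
B = 0.3 * np.eye(p)            # cross-chain coupling (assumed)

def step(own, other):
    """Draw Sigma_t given the lagged covariances of both chains."""
    S = (A @ own @ A.T + B @ other @ B.T) / nu   # coupled scale matrix
    return wishart.rvs(df=nu, scale=S, random_state=rng)

sig1, sig2 = np.eye(p), np.eye(p)     # initial covariances (e.g. US / HK)
path1, path2 = [sig1], [sig2]
for _ in range(T):
    # tuple assignment evaluates both steps from the old values,
    # so the two chains update simultaneously
    sig1, sig2 = step(sig1, sig2), step(sig2, sig1)
    path1.append(sig1); path2.append(sig2)
```

    Dividing the scale by nu makes the conditional mean of each chain exactly A Sigma_own A' + B Sigma_other B', which is one simple way to realize the "related not only to its own past but also to its neighbors'" structure described above.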

    An analytic multi-currency model with stochastic volatility and stochastic interest rates

    We introduce a tractable multi-currency model with stochastic volatility and correlated stochastic interest rates that takes into account the smile in the FX market and the evolution of yield curves. The pricing of vanilla options on FX rates can be performed efficiently through the FFT methodology thanks to the affine structure of the model. Our framework is also able to describe many non-trivial links between FX rates and interest rates: a second calibration exercise highlights the model's ability to fit FX implied volatilities simultaneously while remaining consistent with interest rate products.
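    To make the FFT methodology concrete, here is a minimal Carr-Madan pricing sketch for vanilla options, assuming only that the model supplies the characteristic function phi of the log FX rate under the domestic risk-neutral measure. A Black-Scholes characteristic function stands in for the paper's affine model, whose CF is not reproduced here; all parameter values are illustrative.

```python
# A minimal Carr-Madan FFT sketch: call prices across a log-strike grid
# from the characteristic function of log S_T. phi below is a
# Black-Scholes placeholder, NOT the paper's model.
import numpy as np

S0, r_d, r_f, sigma, T = 1.10, 0.02, 0.01, 0.10, 1.0
alpha, N, eta = 1.5, 2**12, 0.25           # damping, grid size, grid step

def phi(u):
    """Characteristic function of log S_T (Black-Scholes stand-in)."""
    mu = np.log(S0) + (r_d - r_f - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * T * u**2)

u = eta * np.arange(N)
# Fourier transform of the damped call price (Carr-Madan formula)
psi = np.exp(-r_d * T) * phi(u - 1j * (alpha + 1)) / (
    alpha**2 + alpha - u**2 + 1j * (2 * alpha + 1) * u)
lam = 2 * np.pi / (N * eta)                # log-strike spacing
b = N * lam / 2                            # log strikes span [-b, b)
weights = eta * np.where(np.arange(N) == 0, 0.5, 1.0)
x = np.fft.fft(np.exp(1j * u * b) * psi * weights)
k = -b + lam * np.arange(N)                # log-strike grid
calls = np.exp(-alpha * k) / np.pi * x.real
idx = np.argmin(np.abs(np.exp(k) - S0))    # pick the near-ATM strike
print(f"ATM call price ~ {calls[idx]:.4f}")
```

    Swapping in the affine model's characteristic function is the only change needed to price the whole strike grid in one FFT, which is what makes calibration to the FX smile fast.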

    Bayesian forecasting and scalable multivariate volatility analysis using simultaneous graphical dynamic models

    The recently introduced class of simultaneous graphical dynamic linear models (SGDLMs) enables scaling of on-line Bayesian analysis and forecasting to higher-dimensional time series. This paper advances the methodology of SGDLMs, developing and embedding a novel, adaptive method of simultaneous predictor selection in forward filtering for on-line learning and forecasting. The advances include developments in Bayesian computation for scalability and a case study exploring the resulting potential for improved short-term forecasting of large-scale volatility matrices. The case study concerns financial forecasting and portfolio optimization with a 400-dimensional series of daily stock prices. Analysis shows that the SGDLM forecasts volatilities and co-volatilities well, making it ideally suited to contributing to quantitative investment strategies to improve portfolio returns. We also identify performance metrics linked to the sequential Bayesian filtering analysis that turn out to define a leading indicator of increased financial market stresses, comparable to but leading the standard St. Louis Fed Financial Stress Index (STLFSI) measure. Parallel computation using GPU implementations substantially advances the ability to fit and use these models.
    Comment: 28 pages, 9 figures, 7 tables.
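    For intuition on the forward filtering that SGDLMs scale up, the sketch below runs the standard discount-factor forward-filtering recursion for a single univariate dynamic linear model, the per-series building block that SGDLMs run in parallel (decouple) before recoupling across series. The discount factor, variances, and simulated data are illustrative assumptions.

```python
# A minimal forward-filtering sketch for one local-level dynamic linear
# model with discount-factor state evolution; SGDLMs extend this with
# simultaneous-parent predictors and cross-series recoupling.
import numpy as np

rng = np.random.default_rng(1)
T, delta, v = 250, 0.98, 0.1          # horizon, discount factor, obs. variance
y = np.cumsum(rng.normal(0, 0.2, T)) + rng.normal(0, np.sqrt(v), T)

m, C = 0.0, 1.0                        # prior mean / variance of the level
for t in range(T):
    R = C / delta                      # discount-inflated prior variance
    f, Q = m, R + v                    # one-step forecast mean / variance
    e = y[t] - f                       # forecast error
    A = R / Q                          # adaptive coefficient (Kalman gain)
    m, C = m + A * e, R - A**2 * Q     # posterior update for the level
print(f"filtered level at time T: {m:.3f}")
```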

    Credit Risk Meets Random Matrices: Coping with Non-Stationary Asset Correlations

    We review recent progress in modeling credit risk for correlated assets. We start from the Merton model, in which default events and losses are derived from the asset values at maturity. To estimate the time development of the asset values, the stock prices are used, whose correlations have a strong impact on the loss distribution, particularly on its tails. These correlations are non-stationary, which also influences the tails. We account for the asset fluctuations by averaging over an ensemble of random matrices that models the actually observed set of measured correlation matrices. As a most welcome side effect, this approach drastically reduces the parameter dependence of the loss distribution, allowing us to obtain very explicit results which show quantitatively that the heavy tails prevail over diversification benefits even for small correlations. We calibrate our random matrix model with market data and show how it is capable of grasping different market situations. Furthermore, we present numerical simulations for concurrent portfolio risks, i.e., for the joint probability densities of losses for two portfolios. For the convenience of the reader, we give an introduction to the Wishart random matrix model.
    Comment: Review of a new random matrix approach to credit risk.
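    For a concrete handle on the ensemble average, the sketch below samples from a Wishart random matrix ensemble whose mean is a uniform-correlation matrix. The number of assets K, the correlation level c, and N (which controls how strongly individual ensemble members fluctuate around the mean) are illustrative assumptions.

```python
# A minimal sketch of a Wishart random matrix ensemble for fluctuating
# correlations: W = X X^T / N with E[W] = C0, a uniform-correlation
# matrix. Smaller N means stronger fluctuations of the ensemble members.
import numpy as np

rng = np.random.default_rng(2)
K, N, c = 50, 20, 0.3                            # assets, dof, mean corr.
C0 = (1 - c) * np.eye(K) + c * np.ones((K, K))   # average correlation matrix
L = np.linalg.cholesky(C0)

def draw():
    """One ensemble member: a Wishart matrix with mean C0."""
    X = L @ rng.normal(size=(K, N))
    return X @ X.T / N

samples = [draw() for _ in range(500)]
mean_offdiag = np.mean([W[0, 1] for W in samples])
print(f"ensemble mean of W[0,1] ~ {mean_offdiag:.3f} (target {c})")
```

    Averaging loss distributions over such an ensemble, rather than conditioning on one estimated correlation matrix, is what produces the heavy tails discussed in the abstract.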

    Locally adaptive factor processes for multivariate time series

    In modeling multivariate time series, it is important to allow time-varying smoothness in the mean and covariance process. In particular, there may be certain time intervals exhibiting rapid changes and others in which changes are slow. If such time-varying smoothness is not accounted for, one can obtain misleading inferences and predictions, with over-smoothing across erratic time intervals and under-smoothing across times exhibiting slow variation. This can lead to mis-calibration of predictive intervals, which can be substantially too narrow or too wide depending on the time. We propose a locally adaptive factor process for characterizing multivariate mean-covariance changes in continuous time, allowing locally varying smoothness in both the mean and the covariance matrix. This process is constructed utilizing latent dictionary functions evolving in time through nested Gaussian processes and linearly related to the observed data with a sparse mapping. Using a differential equation representation, we bypass usual computational bottlenecks in obtaining MCMC and online algorithms for approximate Bayesian inference. The performance is assessed in simulations and illustrated in a financial application.
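    One minimal way to see locally varying smoothness from nested Gaussian processes is sketched below: an outer GP draws a time-varying log length-scale, and the observed process is then drawn with a non-stationary (Gibbs) kernel built from it. The kernels and hyper-parameters are assumptions for illustration, not the paper's dictionary-function construction.

```python
# A minimal nested-GP sketch: the latent length-scale ell(t) is itself a
# GP draw, and the inner draw uses the Gibbs non-stationary kernel, so
# the sample path is rough where ell is small and smooth where it is large.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 300)

def se_cov(x, ls):
    """Squared-exponential covariance matrix on the grid x."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# outer GP: slowly evolving log length-scale
log_ell = rng.multivariate_normal(
    np.full(t.size, np.log(0.1)),
    0.5 * se_cov(t, 0.3) + 1e-8 * np.eye(t.size))
ell = np.exp(log_ell)

# inner GP: Gibbs kernel built from the latent length-scales
l2 = ell[:, None] ** 2 + ell[None, :] ** 2
d = t[:, None] - t[None, :]
K = np.sqrt(2 * np.outer(ell, ell) / l2) * np.exp(-(d ** 2) / l2)
f = rng.multivariate_normal(np.zeros(t.size), K + 1e-8 * np.eye(t.size))
```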

    The Gibbs Sampler with Particle Efficient Importance Sampling for State-Space Models

    We consider Particle Gibbs (PG) as a tool for Bayesian analysis of nonlinear non-Gaussian state-space models. PG is a Monte Carlo (MC) approximation of the standard Gibbs procedure which uses sequential MC (SMC) importance sampling inside the Gibbs procedure to update the latent and potentially high-dimensional state trajectories. We propose to combine PG with a generic and easily implementable SMC approach known as Particle Efficient Importance Sampling (PEIS). By using SMC importance sampling densities which are approximately fully globally adapted to the targeted density of the states, PEIS can substantially improve the mixing and the efficiency of the PG draws from the posterior of the states and the parameters relative to existing PG implementations. The efficiency gains achieved by PEIS are illustrated in PG applications to a univariate stochastic volatility model for asset returns, a non-Gaussian nonlinear local-level model for interest rates, and a multivariate stochastic volatility model for the realized covariance matrix of asset returns.
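    To make the SMC ingredient concrete, here is a minimal bootstrap particle filter for a univariate stochastic volatility model, the building block that PG embeds inside a Gibbs sweep and that PEIS improves by replacing the blind transition proposal with globally adapted importance densities. The model parameters are fixed by assumption rather than estimated.

```python
# A minimal bootstrap particle filter for the SV model
#   y_t = exp(h_t / 2) * eps_t,  h_t = mu + phi (h_{t-1} - mu) + sig * eta_t.
# The bootstrap proposal is the transition itself; PEIS would substitute
# a proposal adapted to the whole observation record.
import numpy as np

rng = np.random.default_rng(4)
mu, phi, sig = -1.0, 0.97, 0.15        # assumed SV parameters
T, M = 500, 1000                       # time steps, particles

# simulate data from the model
h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# filter: propagate from the transition, weight by the likelihood, resample
x = rng.normal(mu, sig / np.sqrt(1 - phi**2), M)   # stationary initial draw
loglik = 0.0
for t in range(T):
    x = mu + phi * (x - mu) + sig * rng.normal(size=M)
    logw = -0.5 * (np.log(2 * np.pi) + x + y[t]**2 * np.exp(-x))
    m = logw.max()
    w = np.exp(logw - m)
    loglik += m + np.log(w.mean())                 # log-likelihood increment
    x = rng.choice(x, size=M, p=w / w.sum())       # multinomial resampling
print(f"log-likelihood estimate: {loglik:.1f}")
```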

    Dynamics and sparsity in latent threshold factor models: A study in multivariate EEG signal processing

    We discuss Bayesian analysis of multivariate time series with dynamic factor models that exploit time-adaptive sparsity in model parametrizations via the latent threshold approach. One central focus is on the transfer responses of multiple interrelated series to underlying, dynamic latent factor processes. Structured priors on model hyper-parameters are key to the efficacy of dynamic latent thresholding, and MCMC-based computation enables model fitting and analysis. A detailed case study of electroencephalographic (EEG) data from experimental psychiatry highlights the use of latent threshold extensions of time-varying vector autoregressive and factor models. This study explores a class of dynamic transfer response factor models, extending prior Bayesian modeling of multiple EEG series and highlighting the practical utility of the latent thresholding concept in multivariate, non-stationary time series analysis.
    Comment: 27 pages, 13 figures, link to external web site for supplementary animated figure.
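    The latent thresholding mechanism itself is simple to sketch: a time-varying coefficient evolves as a latent AR(1) process and is shrunk exactly to zero whenever its magnitude falls below a threshold, inducing dynamic sparsity. The AR parameters and the threshold d below are illustrative assumptions.

```python
# A minimal sketch of latent thresholding: the effective coefficient
# b_t = beta_t * 1{|beta_t| >= d} switches itself in and out of the model
# over time as the latent process beta_t wanders across the threshold.
import numpy as np

rng = np.random.default_rng(5)
T, d = 400, 0.15                         # horizon, latent threshold
beta = np.zeros(T)
for t in range(1, T):                    # latent AR(1) coefficient process
    beta[t] = 0.99 * beta[t - 1] + 0.03 * rng.normal()
b = np.where(np.abs(beta) >= d, beta, 0.0)   # thresholded effective coeff.
print(f"fraction of time the coefficient is zeroed: {(b == 0).mean():.2f}")
```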