
    Designating market maker behaviour in Limit Order Book markets

    Financial exchanges provide incentives for limit order book (LOB) liquidity provision to certain market participants, termed designated market makers or designated sponsors. While quoting requirements typically enforce the activity of these participants for a certain portion of the day, we argue that liquidity demand throughout the trading day is far from uniformly distributed, and thus this liquidity provision may not be calibrated to the demand. We propose that quoting obligations also include requirements about the speed of liquidity replenishment, and we recommend use of the Threshold Exceedance Duration (TED) for this purpose. We present a comprehensive regression modelling approach using GLM and GAMLSS models to relate the TED to the state of the LOB and identify the regression structures that are best suited to modelling the TED. Such an approach can be used by exchanges to set target levels of liquidity replenishment for designated market makers.
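    The abstract does not spell out how a TED is computed or which regression structure is used; the sketch below is a minimal illustration, not the authors' specification. It measures exceedance durations as contiguous spells during which posted depth sits below a liquidity threshold and relates them to a single LOB covariate with a Gamma GLM; the column names, threshold choice and covariate are assumptions made for the demo.

```python
# Illustrative sketch only: Threshold Exceedance Durations (TED) on a synthetic
# depth series, related to LOB state with a Gamma GLM (a stand-in for the
# GLM/GAMLSS structures compared in the paper).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
lob = pd.DataFrame({
    "depth":  rng.gamma(shape=5.0, scale=200.0, size=n),  # posted volume at best quotes (hypothetical)
    "spread": rng.gamma(shape=2.0, scale=0.5,  size=n),   # bid-ask spread in ticks (hypothetical)
})

threshold = lob["depth"].quantile(0.10)          # liquidity level that triggers an exceedance
below = (lob["depth"] < threshold).to_numpy()

# A TED is the length (here, in observations) of each contiguous spell below the threshold.
spells, starts = [], []
i = 0
while i < n:
    if below[i]:
        j = i
        while j < n and below[j]:
            j += 1
        spells.append(j - i)
        starts.append(i)
        i = j
    else:
        i += 1

ted = pd.DataFrame({
    "duration": spells,
    "spread_at_start": lob["spread"].to_numpy()[starts],  # LOB state when the depletion begins
})

# Gamma GLM with a log link: one simple member of the model family the paper compares.
X = sm.add_constant(ted[["spread_at_start"]])
model = sm.GLM(ted["duration"], X, family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```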

    Range-Based Estimation of Stochastic Volatility Models or Exchange Rate Dynamics are More Interesting Than You Think

    We propose using the price range, a recently-neglected volatility proxy with a long history in finance, in the estimation of stochastic volatility models. We show both theoretically and empirically that the log range is approximately Gaussian, in sharp contrast to popular volatility proxies, such as log absolute or squared returns. Hence Gaussian quasi-maximum likelihood estimation based on the range is not only simple, but also highly efficient. We illustrate and enrich our theoretical results with a Monte Carlo study and a substantive empirical application to daily exchange rate volatility. Our empirical work produces sharp conclusions. In particular, the evidence points strongly to the inadequacy of one-factor volatility models, favoring instead two-factor models with one highly persistent factor and one quickly mean reverting factor.
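    As a rough illustration of the property the abstract relies on, the sketch below simulates intraday prices with constant volatility (an assumption for the demo, not the paper's data or estimator), forms the daily log range, and compares its shape with that of log squared returns.

```python
# Minimal sketch: the daily log range as a volatility proxy, versus log squared
# returns, on simulated intraday paths. Not the paper's estimation procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_days, steps = 2_000, 390           # e.g. one observation per minute
sigma_daily = 0.01

# Intraday log-price paths with constant volatility (a simplifying assumption).
increments = rng.normal(0.0, sigma_daily / np.sqrt(steps), size=(n_days, steps))
log_price = np.cumsum(increments, axis=1)

log_range = np.log(log_price.max(axis=1) - log_price.min(axis=1))
daily_ret = log_price[:, -1]
log_sq_ret = np.log(daily_ret ** 2)

for name, proxy in [("log range", log_range), ("log squared return", log_sq_ret)]:
    print(f"{name:20s} skewness {stats.skew(proxy):+.2f}  excess kurtosis {stats.kurtosis(proxy):+.2f}")
# The log range comes out close to symmetric with modest kurtosis, while the log
# squared return is strongly left-skewed -- the near-Gaussianity that makes
# Gaussian quasi-maximum likelihood estimation on the range attractive.
```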

    Stock Returns and Roughness Extreme Variations: A New Model for Monitoring 2008 Market Crash and 2015 Flash Crash

    We use Student’s t-copula to study the extreme variations in the bivariate kinematic time series of log-return and log-roughness of the S&P 500 index during two market crashes: the financial crisis of 2008 and the flash crash of Monday, August 24, 2015. The stable and small values of the tail dependence index observed for some months preceding the 2008 crash indicate that the joint distribution of daily return and roughness was close to normal. The volatility of the tail and degree-of-freedom indices determined by the Student’s t-copula falls substantially after the 2008 crash. The number of degrees of freedom in the empirically observed distributions falls while the tail coefficient of the copula increases, indicating the long-memory effect of the 2008 crash. A significant change in the tail and degree-of-freedom indices associated with the intraday price of the S&P 500 index is observed before, during, and after the flash crash of August 24, 2015. The long-memory effect of this flash crash is likewise indicated by a fall in the number of degrees of freedom in the empirically observed distributions and an increase in the tail coefficient of the joint distribution after the crash. The small and stable value of the degrees of freedom preceding the flash crash provides evidence that the joint distribution of intraday return and roughness is heavy-tailed. Time-varying long-range dependence in mean and volatility, as well as the Chow and Bai-Perron tests, indicates instability of the stock market in this period.
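    The two quantities tracked above, the copula correlation and the degrees of freedom, jointly determine the tail-dependence coefficient of a Student's t-copula through a standard closed form. The sketch below evaluates that coefficient for made-up parameter values; fitting the copula to S&P 500 return/roughness data is a separate step not shown here.

```python
# Sketch: tail-dependence coefficient implied by a Student's t-copula with
# correlation rho and nu degrees of freedom. Parameter values are illustrative.
import numpy as np
from scipy import stats

def t_copula_tail_dependence(rho: float, nu: float) -> float:
    """lambda = 2 * T_{nu+1}( -sqrt((nu+1)(1-rho)/(1+rho)) )."""
    arg = -np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * stats.t.cdf(arg, df=nu + 1.0)

# Fewer degrees of freedom (heavier tails) imply stronger joint extremes.
for nu in (3, 10, 30):
    for rho in (0.3, 0.7):
        print(f"nu={nu:2d}  rho={rho:.1f}  tail dependence={t_copula_tail_dependence(rho, nu):.3f}")
```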

    Forecasting Daily Volatility Using Range-Based Data

    Users of agricultural markets frequently need to establish accurate representations of expected future volatility. The fact that range-based volatility estimators are highly efficient has been acknowledged in the literature. However, it is not clear whether using range-based data leads to better risk management decisions. This paper compares the performance of GARCH models, range-based GARCH models, and log-range-based ARMA models in terms of their forecasting ability. Realized volatility is used as the forecast evaluation criterion. The conclusions help establish an efficient forecasting framework for volatility models.
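    As a rough sketch of this kind of horse race (not the paper's GARCH or range-based GARCH specifications), the example below fits a log-range AR(1) on simulated data, produces one-step-ahead volatility forecasts, and scores them against a crude realized-volatility proxy alongside a naive absolute-return benchmark. All series and parameter values are made up for the demo.

```python
# Sketch: log-range AR(1) volatility forecast vs a naive benchmark, evaluated
# against a realized-volatility proxy on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n_days, steps = 1_000, 100

# Persistent log-volatility, then intraday paths to obtain daily ranges and returns.
log_sigma = np.empty(n_days)
log_sigma[0] = np.log(0.01)
for t in range(1, n_days):
    log_sigma[t] = 0.98 * log_sigma[t - 1] + 0.02 * np.log(0.01) + 0.10 * rng.normal()
paths = np.cumsum(rng.normal(0.0, (np.exp(log_sigma) / np.sqrt(steps))[:, None], (n_days, steps)), axis=1)

hl_range = paths.max(axis=1) - paths.min(axis=1)   # daily high-low log range
ret = paths[:, -1]                                  # daily close-to-close log return
log_range = np.log(hl_range)

# AR(1) on the log range, estimated by OLS on the first 800 days.
y, x = log_range[1:800], log_range[:799]
b1, b0 = np.polyfit(x, y, 1)

# One-step-ahead forecasts on the evaluation split.
fc_range = np.exp(b0 + b1 * log_range[799:-1]) / np.sqrt(4.0 * np.log(2.0))  # Parkinson-style rescaling
fc_naive = np.abs(ret[799:-1])                                               # yesterday's |return|
realized = np.abs(ret[800:])                                                 # crude realized-vol proxy

for name, fc in [("log-range AR(1)", fc_range), ("naive |return|", fc_naive)]:
    print(f"{name:16s} RMSE vs realized proxy: {np.sqrt(np.mean((fc - realized) ** 2)):.5f}")
```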

    Non-Parametric Causality Detection: An Application to Social Media and Financial Data

    According to behavioral finance, stock market returns are influenced by emotional, social and psychological factors. Several recent works support this theory by providing evidence of correlation between stock market prices and collective sentiment indexes measured using social media data. However, a pure correlation analysis is not sufficient to prove that stock market returns are influenced by such emotional factors, since both stock market prices and collective sentiment may be driven by a third, unmeasured factor. Controlling for factors that could influence the study by applying multivariate regression models is challenging given the complexity of stock market data. False assumptions about the linearity or non-linearity of the model and inaccuracies in model specification may result in misleading conclusions. In this work, we propose a novel framework for causal inference that does not require any assumption about the statistical relationships among the variables of the study and can effectively control for a large number of factors. We apply our method to estimate the causal impact that information posted in social media may have on the stock market returns of four large companies. Our results indicate that social media data not only correlate with stock market returns but also influence them.
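    The abstract does not describe the framework itself. Purely as a generic illustration of non-parametric predictive testing, and explicitly not the authors' method, the sketch below asks whether a lagged "sentiment" series improves a leave-one-out k-nearest-neighbour forecast of returns beyond the returns' own history, with a circular-shift permutation test as the null. The sentiment series, the one-day lag and k are all assumptions made for the demo.

```python
# Generic illustration only -- NOT the paper's framework: permutation-style test
# of whether lagged sentiment improves a k-NN forecast of returns.
import numpy as np

rng = np.random.default_rng(3)
n, k = 600, 10
sentiment = rng.normal(size=n)
returns = 0.10 * np.roll(sentiment, 1) + rng.normal(scale=0.2, size=n)  # weak lagged effect, by construction

def knn_mse(X, y, k):
    """Leave-one-out k-NN prediction error (plain numpy, no functional-form assumptions)."""
    d = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)   # L1 distances between feature rows
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return np.mean((y - y[nbrs].mean(axis=1)) ** 2)

y = returns[1:]
own_lag = returns[:-1][:, None]
with_sent = np.column_stack([returns[:-1], sentiment[:-1]])

base = knn_mse(own_lag, y, k)
gain = base - knn_mse(with_sent, y, k)                      # error reduction from adding sentiment

# Null distribution: circularly shift the sentiment series to break any lead-lag link.
null = []
for shift in rng.integers(20, n - 20, size=100):
    shifted = np.column_stack([returns[:-1], np.roll(sentiment, shift)[:-1]])
    null.append(base - knn_mse(shifted, y, k))
p_value = np.mean(np.array(null) >= gain)
print(f"MSE reduction from adding sentiment: {gain:.5f}  permutation p-value: {p_value:.3f}")
```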

    Modeling time series with conditional heteroscedastic structure

    Models with a conditional heteroscedastic variance structure play a vital role in many applications, including modeling financial volatility. In this dissertation several existing formulations, motivated by the Generalized Autoregressive Conditional Heteroscedastic model, are further generalized to provide more effective modeling of price range data as well as count data. First, the Conditional Autoregressive Range (CARR) model is generalized by introducing a composite range-based multiplicative component formulation named the Composite CARR model. This formulation enables more effective modeling of the long- and short-term volatility components present in price range data. It treats the long-term volatility as a stochastic component that in itself exhibits conditional volatility. The Generalized Feedback Asymmetric CARR model presented in this dissertation is a generalization of the Feedback Asymmetric CARR model, with lagged cross-conditional range terms added to allow complete feedback across the two equations that model upward and downward price ranges. A regime-switching Threshold Asymmetric CARR model is also proposed. Its formulation captures both asymmetry and non-linearity, the two main characteristics present in price range data. Based on Akaike’s Information Criterion, this model handles asymmetry and non-linearity better than its range-based competitors. In addition to the above models, a Time Varying Zero Inflated Poisson Integer GARCH model is introduced. This model enables the modeling of time series of count data with an excess number of zeros, where this excess varies with time. In this model, the zero inflation component is modeled either as a deterministic function of time or as a vector of stochastic variables.
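    For orientation, the sketch below simulates and re-estimates the baseline CARR(1,1) recursion that the composite, feedback and threshold variants above extend: a conditional range lambda_t = omega + alpha*R_{t-1} + beta*lambda_{t-1} with unit-mean innovations, fitted by exponential quasi-maximum likelihood. It is illustrative only and does not reproduce any of the dissertation's models.

```python
# Sketch: baseline CARR(1,1) simulated and estimated by exponential QMLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2_000
omega_true, alpha_true, beta_true = 0.05, 0.15, 0.80

# Simulate ranges R_t = lambda_t * eps_t with unit-mean exponential innovations.
lam = np.empty(n)
R = np.empty(n)
lam[0] = omega_true / (1 - alpha_true - beta_true)
R[0] = lam[0] * rng.exponential()
for t in range(1, n):
    lam[t] = omega_true + alpha_true * R[t - 1] + beta_true * lam[t - 1]
    R[t] = lam[t] * rng.exponential()

def neg_qll(params, R):
    """Negative exponential quasi-log-likelihood of a CARR(1,1)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    lam = np.empty_like(R)
    lam[0] = R.mean()
    for t in range(1, len(R)):
        lam[t] = omega + alpha * R[t - 1] + beta * lam[t - 1]
    return np.sum(np.log(lam) + R / lam)

fit = minimize(neg_qll, x0=[0.1, 0.1, 0.7], args=(R,), method="Nelder-Mead")
print("estimated (omega, alpha, beta):", np.round(fit.x, 3))
```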

    The role of trading intensity in duration modelling and price discovery: evidence from the European carbon market

    In this study, trading intensity is employed to investigate the role of information and liquidity in duration modelling and price discovery in the two largest exchanges of the European carbon market, namely the European Climate Exchange (ECX) and Nord Pool (NP). First, duration modelling is examined for the first time in this market, and existing ACD models are empirically extended to explore the impact of stylized facts such as non-linear effects of trading intensity and OTC transactions. Second, the “time dimension” of information is investigated, focusing on the informational content of trading intensity. A Smooth-Transition-Mixture of Weibull Distributions ACD (STM-ACD) model that distinguishes between three types of trades is proposed; time, volume and OTC transactions measure how related a trade is to information. Third, the price impact of the “time dimension” of information is examined. A new dynamic-expectations structural pricing model is proposed in order to account for the learning process of traders and their expectations, with trading intensity used to measure the sensitivity of market participants to information and liquidity. The main findings indicate that the empirical adjustments significantly improve duration modelling. Consistent with Bauwens et al. (2004), the specification of the conditional mean contributes more to model performance. Trading intensity appears to create momentum, especially in ECX, whereas OTC transactions seem to slow down the trading process, probably due to information inflow, especially in NP. Furthermore, in line with Easley and O’Hara (1992), higher trading intensity is associated with an increased presence of information. Trading intensity is found to be able to distinguish among three different types of trades according to their informational content, and the timing of acquiring information can make it further exploitable. A significant proportion of uninformed traders in the carbon market is found to observe the market in an attempt to extract information not yet resolved in prices. Consequently, informed traders are found to act strategically, in the sense of Kyle (1985), but they are less efficient in covering their actions as the market gains complexity, mainly because of higher liquidity levels and an improved learning process. In addition, large transactions appear to increase the information component of price, while the liquidity component seems to decrease asymmetrically, probably due to economies of scale. Consequently, trading intensity appears to have a dual impact on price, spread and price-change volatility, which is determined by current market conditions and dealers’ exposure to risk. Finally, market making in this market seems to be profitable only when expected trading intensity is low.
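    As background for the duration-modelling step, the sketch below builds trade durations from hypothetical timestamps, applies a crude diurnal adjustment, and filters the conditional expected duration of a basic Engle-Russell ACD(1,1) with assumed parameter values. The STM-ACD mixture over trade types and the structural pricing model are not reproduced here.

```python
# Sketch: durations from timestamps, diurnal adjustment, and the ACD(1,1) filter
# psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}. Data and parameters are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical trade timestamps over one trading day (seconds after the open).
trade_times = np.sort(rng.uniform(0, 8 * 3600, size=3_000))
durations = np.diff(trade_times)                 # time between consecutive trades

# Crude diurnal adjustment: scale by the mean duration within each hour of the day.
hours = (trade_times[1:] // 3600).astype(int)
hour_means = pd.Series(durations).groupby(hours).transform("mean").to_numpy()
x = durations / hour_means                       # diurnally adjusted durations

# Filter the conditional expected duration with assumed ACD(1,1) parameters.
omega, alpha, beta = 0.05, 0.10, 0.85
psi = np.empty_like(x)
psi[0] = x.mean()
for t in range(1, len(x)):
    psi[t] = omega + alpha * x[t - 1] + beta * psi[t - 1]

# Spells where psi sits well below its mean correspond to high trading intensity.
print(f"mean adjusted duration: {x.mean():.3f}  mean conditional duration: {psi.mean():.3f}")
```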

    Modelling commodity value at risk with Psi Sigma neural networks using open–high–low–close data

    The motivation for this paper is to investigate the use of a promising class of neural network models, Psi Sigma (PSI), when applied to the task of forecasting the one-day-ahead value at risk (VaR) of the Brent oil and gold bullion series using open–high–low–close data. In order to benchmark our results, we also consider VaR forecasts from two different neural network designs, the multilayer perceptron and the recurrent neural network, a genetic programming algorithm, and an extreme value theory model, along with some traditional techniques such as an ARMA-Glosten, Jagannathan, and Runkle (1,1) model and the RiskMetrics volatility. The forecasting performance of all models for computing the VaR of Brent oil and gold bullion is examined over the period September 2001–August 2010, using the last year and a half of data for out-of-sample testing. The evaluation of our models is done using a series of backtesting algorithms such as the Christoffersen tests, the violation ratio and our proposed loss function, which considers not only the number of violations but also their magnitude. Our results show that the PSI outperforms all other models in forecasting the VaR of gold and oil at both the 5% and 1% confidence levels, providing an accurate number of independent violations with small magnitude.
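    To make the backtesting step concrete, the sketch below computes the violation ratio, Kupiec's unconditional-coverage test and Christoffersen's independence test for a 5% VaR series. The VaR forecasts are a toy rolling-quantile stand-in, not the Psi Sigma network, and the paper's magnitude-aware loss function is not reproduced.

```python
# Sketch: VaR backtests (violation ratio, Kupiec LR_uc, Christoffersen LR_ind)
# applied to a toy rolling-quantile VaR on simulated returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
returns = rng.standard_t(df=5, size=1_500) * 0.01
alpha = 0.05

# Toy VaR forecast: rolling empirical quantile of the previous 250 returns.
window = 250
var = np.array([np.quantile(returns[t - window:t], alpha) for t in range(window, len(returns))])
hits = (returns[window:] < var).astype(int)          # 1 = VaR violation
n, x = len(hits), hits.sum()

print("violation ratio:", x / (alpha * n))

# Kupiec unconditional coverage: LR test of observed vs nominal violation rate.
p_hat = x / n
lr_uc = -2 * (x * np.log(alpha) + (n - x) * np.log(1 - alpha)
              - x * np.log(p_hat) - (n - x) * np.log(1 - p_hat))
print("Kupiec LR_uc p-value:", 1 - stats.chi2.cdf(lr_uc, df=1))

# Christoffersen independence: do violations cluster? Compare transition probabilities.
n00 = np.sum((hits[:-1] == 0) & (hits[1:] == 0))
n01 = np.sum((hits[:-1] == 0) & (hits[1:] == 1))
n10 = np.sum((hits[:-1] == 1) & (hits[1:] == 0))
n11 = np.sum((hits[:-1] == 1) & (hits[1:] == 1))
pi01, pi11 = n01 / (n00 + n01), n11 / max(n10 + n11, 1)
pi = (n01 + n11) / (n00 + n01 + n10 + n11)

def ll(p, k, m):
    """Bernoulli log-likelihood with k successes and m failures."""
    return k * np.log(p) + m * np.log(1 - p) if 0 < p < 1 else 0.0

lr_ind = -2 * (ll(pi, n01 + n11, n00 + n10) - ll(pi01, n01, n00) - ll(pi11, n11, n10))
print("Christoffersen LR_ind p-value:", 1 - stats.chi2.cdf(lr_ind, df=1))
```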

    Volatility Forecasting in Emerging Markets

    This thesis examines the forecasting accuracy of implied volatility and GARCH(1,1) model volatility in the context of emerging equity markets. As a measure of risk, volatility is a key factor in risk management and investing. Financial markets have become more global, and the importance of volatility forecasting in emerging markets has increased. Emerging equity markets carry more, and different, risks than developed stock markets. As risk affects the potential return, it is important to test how well volatility models are able to forecast future volatility in emerging markets. The purpose of this thesis is to study the forecasting abilities and limitations of option-implied volatility and GARCH(1,1) volatility in the riskier emerging market environment. The majority of previous studies on volatility forecasting focus on developed markets. Previous results suggest that in developed equity markets implied volatility provides an accurate short-term volatility forecast, whereas GARCH models offer a better long-term forecast. Results in the emerging market context have been rather inconclusive, although there is more evidence of GARCH(1,1) volatility being the more accurate forecaster. The main motivation behind this thesis is to examine which model is best suited for volatility forecasting in emerging equity markets. The forecasting accuracy of option-implied volatility and GARCH(1,1) volatility is tested with an OLS regression model. The data consist of MSCI Emerging Markets Price index data and corresponding option data from 1.1.2015 to 31.12.2019. Daily closing prices of the index and options are used to compute daily and monthly implied volatility and GARCH(1,1) volatility forecasts, and loss functions are applied to test the fit of the models. The results suggest that both models contain information about future volatility, as the explanatory power of both is statistically significant for daily and monthly forecasts. At both horizons, GARCH(1,1) volatility is a more accurate estimate of future volatility than implied volatility, and the monthly forecasts are more accurate than the daily ones. The GARCH(1,1) monthly volatility offers the best fit, with the highest predictive power and the lowest error measures, suggesting that it is the most appropriate choice for volatility forecasting in emerging equity markets.
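    As a minimal sketch of the OLS evaluation regression described above (a Mincer-Zarnowitz-style regression of realized volatility on a forecast), the example below uses simulated series standing in for the GARCH(1,1) and implied-volatility forecasts; an unbiased, informative forecast should give an intercept near zero, a slope near one and a high R^2.

```python
# Sketch: forecast-evaluation regression of realized volatility on a volatility
# forecast. All series here are simulated stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1_250
true_vol = 0.01 * np.exp(0.3 * rng.normal(size=n))        # latent daily volatility
forecast = true_vol * np.exp(0.1 * rng.normal(size=n))    # noisy but roughly unbiased forecast
realized = np.abs(true_vol * rng.normal(size=n))          # crude realized-volatility proxy

X = sm.add_constant(forecast)
res = sm.OLS(realized, X).fit()
print("intercept, slope:", res.params)
print("R^2:", res.rsquared)
# Comparing R^2 and loss functions (e.g. MSE/MAE of forecast vs realized) across
# competing forecasts is the kind of comparison the thesis runs.
```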