
    Statistical Models for High Frequency Security Prices

    This article studies two extensions of the compound Poisson process with iid Gaussian innovations that are able to characterize important features of high frequency security prices. The first model explicitly accounts for the presence of the bid/ask spread encountered in price-driven markets. This model can be viewed as a mixture of the compound Poisson process model by Press and the bid/ask bounce model by Roll. The second model generalizes the compound Poisson process to allow for an arbitrary dependence structure in its innovations so as to account for more complicated types of market microstructure. Based on the characteristic function, we analyze the static and dynamic properties of the price process in detail. Comparison with actual high frequency data suggests that the proposed models are sufficiently flexible to capture a number of salient features of financial return data, including a skewed and fat-tailed marginal distribution, serial correlation at high frequency, and time variation in market activity at both high and low frequency. The current framework also allows for a detailed investigation of the "market-microstructure-induced bias" in the realized variance measure, and we find that, for realistic parameter values, this bias can be substantial. We analyze the impact of the sampling frequency on the bias and find that, for non-constant trade intensity, "business" time sampling maximizes the bias but achieves the lowest overall MSE.

    Keywords: Compound Poisson Process; High Frequency Data; Market Microstructure; Characteristic Function; OU Process; Realized Variance Bias; Optimal Sampling
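
    As a rough illustration of the bias mechanism described in this abstract, the sketch below simulates a compound Poisson efficient log price with iid Gaussian innovations, overlays a Roll-style bid/ask bounce, and compares the realized variance of observed and efficient prices. All parameter values (trade intensity, innovation volatility, half spread) are hypothetical and not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (not from the article): trades per second,
# per-trade return volatility, half bid/ask spread, one 6.5-hour session.
lam, sigma, half_spread, horizon = 1.0, 1e-4, 5e-5, 6.5 * 3600

# Compound Poisson efficient log price: a Gaussian innovation at each trade.
n_trades = rng.poisson(lam * horizon)
efficient = np.cumsum(rng.normal(0.0, sigma, n_trades))

# Roll-style bid/ask bounce: the observed price sits half a spread above or
# below the efficient price, independently across trades.
observed = efficient + half_spread * rng.choice([-1.0, 1.0], n_trades)

# Realized variance at the trade-by-trade frequency.
rv_obs = np.sum(np.diff(observed) ** 2)
rv_eff = np.sum(np.diff(efficient) ** 2)

# With independent bounce indicators, each squared return picks up roughly
# 2 * half_spread^2 of noise, so the bias grows with the sampling frequency.
print(f"RV (observed)      : {rv_obs:.6f}")
print(f"RV (efficient)     : {rv_eff:.6f}")
print(f"approx. bounce bias: {2 * (n_trades - 1) * half_spread**2:.6f}")
```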

    Modelling realized variance when returns are serially correlated

    This article examines the impact of serial correlation in high frequency returns on the realized variance measure. In particular, it is shown that the realized variance measure yields a biased estimate of the conditional return variance when returns are serially correlated. Using 10 years of FTSE-100 minute by minute data, we demonstrate that a careful choice of sampling frequency is crucial in avoiding substantial biases. Moreover, we find that the autocovariance structure (magnitude and rate of decay) of FTSE-100 returns at different sampling frequencies is consistent with that of an ARMA process under temporal aggregation. A simple autocovariance function based method is proposed for choosing the “optimal” sampling frequency, that is, the highest available frequency at which the serial correlation of returns has a negligible impact on the realized variance measure. We find that the logarithmic realized variance series of the FTSE-100 index, constructed using an optimal sampling frequency of 25 minutes, can be modelled as an ARFIMA process. Exogenous variables such as lagged returns and contemporaneous trading volume appear to be highly significant regressors and are able to explain a large portion of the variation in daily realized variance.

    Keywords: High frequency data, realized return variance, market microstructure, temporal aggregation, long memory, bootstrap
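
    A minimal sketch of the kind of autocovariance-based frequency selection the abstract describes: compute the first-order autocorrelation of returns at each candidate sampling interval and take the highest frequency at which it is negligible. Function names and the 5% tolerance are illustrative choices, not the article's specification.

```python
import numpy as np

def returns_at_step(log_prices: np.ndarray, step: int) -> np.ndarray:
    """Returns from log prices sampled every `step` observations."""
    return np.diff(log_prices[::step])

def realized_variance(log_prices: np.ndarray, step: int) -> float:
    """Sum of squared returns at the chosen sampling step."""
    return float(np.sum(returns_at_step(log_prices, step) ** 2))

def first_order_autocorr(log_prices: np.ndarray, step: int) -> float:
    """First-order autocorrelation of returns at the chosen sampling step."""
    r = returns_at_step(log_prices, step)
    r = r - r.mean()
    return float(np.sum(r[1:] * r[:-1]) / np.sum(r ** 2))

def optimal_step(log_prices: np.ndarray, steps, tol: float = 0.05) -> int:
    """Smallest step (highest frequency) with negligible serial correlation."""
    for step in sorted(steps):
        if abs(first_order_autocorr(log_prices, step)) < tol:
            return step
    return max(steps)
```

    For minute-by-minute data, `steps` would range over candidate intervals such as 1, 5, 10, or 25 minutes, mirroring the 25-minute choice reported for the FTSE-100.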

    Three essays on the econometric analysis of high frequency financial data.

    This thesis is motivated by the observation that the time series properties of financial security prices can vary fundamentally with their sampling frequency. Econometric models developed for low frequency data may thus be unsuitable for high frequency data and vice versa. For instance, while daily or weekly returns are generally well described by a martingale difference sequence, the dynamics of intra-daily, say, minute by minute, returns can be substantially more complex. Despite this apparent conflict between the behavior of high and low frequency data, it is clear that the two are intimately related and that high frequency data carries a wealth of information regarding the properties of the process, even at low frequency. The objective of this thesis is to deepen our understanding of the way in which high frequency data can be used in financial econometrics. In particular, we focus on (i) how to model high frequency security prices, and (ii) how to use high frequency data to estimate latent variables such as return volatility. One finding throughout the thesis is that the choice of sampling frequency is of fundamental importance as it determines both the dynamics and the information content of the data. A more detailed description of the chapters follows below.

    Subjects: Macroeconomics -- Models

    Price signatures

    Price signatures are statistical measurements that aim to detect systematic patterns in price dynamics localised around the point of trade execution. They are particularly useful in electronic trading because they uncover market dynamics, strategy characteristics, implicit execution costs, or counter-party trading behaviours that are often hard to identify, in part due to the vast amounts of data involved and the typically low signal-to-noise ratio. Because the signature summarises price dynamics over a specified time interval, it constitutes a curve (rather than a point estimate), and because of potential overlap in the price paths it has a non-trivial dependence structure which complicates statistical inference. In this paper, I show how recent advances in functional data analysis can be applied to study the properties of these signatures. To account for data dependence, I analyse and develop resampling-based bootstrap methodologies that enable reliable statistical inference and hypothesis testing. I illustrate the power of this approach using a number of case studies taken from a live trading environment in the over-the-counter currency market. I demonstrate that functional data analysis of price signatures can be used to distinguish between internalising and externalising liquidity providers in a highly effective data-driven manner. This in turn can help traders to selectively engage with liquidity providers whose risk management style best aligns with their execution objectives.
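
    The sketch below illustrates the basic object only: a price signature as the average mid-price path around execution times, with a pointwise bootstrap band. It resamples whole paths rather than implementing the paper's functional data analysis or its specific bootstrap, so treat it as a simplified stand-in with hypothetical function names.

```python
import numpy as np

def price_signature(mid: np.ndarray, exec_idx, horizon: int):
    """Average mid-price move over `horizon` ticks after each execution."""
    paths = np.array([mid[i:i + horizon + 1] - mid[i]
                      for i in exec_idx if i + horizon < len(mid)])
    return paths.mean(axis=0), paths

def bootstrap_band(paths: np.ndarray, n_boot: int = 999, alpha: float = 0.05,
                   seed: int = 0):
    """Pointwise confidence band obtained by resampling whole paths, a crude
    way to respect the dependence created by overlapping price paths."""
    rng = np.random.default_rng(seed)
    n = len(paths)
    boot_means = np.array([paths[rng.integers(0, n, n)].mean(axis=0)
                           for _ in range(n_boot)])
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2], axis=0)
```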

    Properties of realized variance for a pure jump process: calendar time sampling versus business time sampling

    In this paper we study the impact of market microstructure effects on the properties of realized variance using a pure jump process for high frequency security prices. Closed form expressions for the bias and mean squared error of realized variance are derived under alternative sampling schemes. Importantly, we show that business time sampling is generally superior to the common practice of calendar time sampling in that it leads to a reduction in mean squared error. Using IBM transaction data we estimate the model parameters and determine the optimal sampling frequency for each day in the data set. The empirical results reveal a downward trend in optimal sampling frequency over the last four years with considerable day-to-day variation that is closely related to changes in market liquidity.
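
    To make the two sampling schemes concrete, here is a hedged sketch of previous-tick calendar time sampling versus business (transaction) time sampling on an irregularly spaced price series; the closed-form bias and MSE results of the paper are not reproduced here.

```python
import numpy as np

def realized_variance(prices: np.ndarray) -> float:
    """Sum of squared log returns of the sampled price series."""
    return float(np.sum(np.diff(np.log(prices)) ** 2))

def calendar_time_sample(times: np.ndarray, prices: np.ndarray, delta: float):
    """Previous-tick sampling on a regular calendar-time grid (spacing `delta` seconds)."""
    grid = np.arange(times[0], times[-1], delta)
    idx = np.searchsorted(times, grid, side="right") - 1
    return prices[idx]

def business_time_sample(prices: np.ndarray, k: int):
    """Business-time sampling: keep every k-th transaction price."""
    return prices[::k]
```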

    A blocking and regularization approach to high dimensional realized covariance estimation

    We introduce a regularization and blocking estimator for well-conditioned high-dimensional daily covariances using high-frequency data. Using the Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008a) kernel estimator, we estimate the covariance matrix block-wise and regularize it. A data-driven grouping of assets of similar trading frequency ensures the reduction of data loss due to refresh time sampling. In an extensive simulation study mimicking the empirical features of the S&P 1500 universe, we show that the ‘RnB’ estimator yields efficiency gains and outperforms competing kernel estimators for varying liquidity settings, noise-to-signal ratios, and dimensions. An empirical application of forecasting daily covariances of the S&P 500 index confirms the simulation results.
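
    The following is only a generic illustration of the regularization idea: clipping eigenvalues so a block-assembled covariance estimate is positive definite and well conditioned. It does not reproduce the kernel estimation, refresh-time sampling, or grouping logic of the ‘RnB’ estimator.

```python
import numpy as np

def regularize(cov: np.ndarray, floor: float = 1e-8) -> np.ndarray:
    """Clip eigenvalues from below so a noisy or block-assembled covariance
    estimate becomes positive definite and well conditioned."""
    vals, vecs = np.linalg.eigh(cov)
    vals = np.maximum(vals, floor)
    return (vecs * vals) @ vecs.T
```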

    Properties of bias corrected realized variance under alternative sampling schemes

    In this paper I study the statistical properties of a bias corrected realized variance measure when high frequency asset prices are contaminated with market microstructure noise. The analysis is based on a pure jump process for asset prices and explicitly distinguishes among different sampling schemes, including calendar time, business time, and transaction time sampling. Two main findings emerge from the theoretical and empirical analysis. Firstly, based on the mean squared error criterion, a bias correction to realized variance allows for the more efficient use of higher frequency data than the conventional realized variance estimator. Secondly, sampling in business time or transaction time is generally superior to the common practice of calendar time sampling in that it leads to a further reduction in mean squared error. Using IBM transaction data, I estimate a 2.5 minute optimal sampling frequency for realized variance in calendar time, which drops to about 12 seconds when a first order bias correction is applied. This results in a more than 65% reduction in mean squared error. If in addition prices are sampled in transaction time, a further reduction of about 20% can be achieved.
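
    A first-order bias correction of the kind referred to above can be sketched as adding back twice the first-order return autocovariance, in the spirit of Zhou-type corrections for MA(1) microstructure noise; the paper's exact estimator and its treatment of the sampling schemes may differ in detail.

```python
import numpy as np

def realized_variance(returns: np.ndarray) -> float:
    """Conventional realized variance: sum of squared intraday returns."""
    return float(np.sum(returns ** 2))

def realized_variance_bc(returns: np.ndarray) -> float:
    """First-order bias corrected realized variance: adds twice the first-order
    autocovariance, offsetting the upward bias from MA(1)-type noise."""
    return float(np.sum(returns ** 2) + 2.0 * np.sum(returns[1:] * returns[:-1]))
```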

    Last look

    In over-the-counter markets, a trader typically sources indicative quotes from a number of competing liquidity providers, and then sends a deal request on the best available price for consideration by the originating liquidity provider. Due to the communication and processing latencies involved in this negotiation, and in a continuously evolving market, the price may have moved by the time the liquidity provider considers the trader’s request. At what point has the price moved too far away from the quote originally shown for the liquidity provider to reject the deal request? Or perhaps the request can still be accepted, but only at a revised rate? ‘Last look’ is the process that makes this decision, i.e. it determines whether to accept—and if so at what rate—or reject a trader’s deal request subject to the constraints of an agreed trading protocol. In this paper, I study how the execution risk and transaction costs faced by the trader are influenced by the last look logic and choice of trading protocol. I distinguish between various ‘symmetric’ and ‘asymmetric’ last look designs and consider trading protocols that differ on whether, and if so to what extent, price improvements and slippage can be passed on to the trader. All this is done within a unified framework that allows for a detailed comparative analysis. I present two main findings. Firstly, the choice of last look design and trading protocol determines the degree of execution risk inherent in the process, but the effective transaction costs borne by the trader need not be affected by it. Secondly, when a trader adversely selects the liquidity provider she chooses to deal with, the distinction between the different symmetric and asymmetric last look designs fades and the primary driver of execution risk is the choice of trading protocol.
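
    A stylized accept/reject rule helps fix ideas about the symmetric versus asymmetric designs discussed above. The threshold, sign conventions, and the absence of re-quoting at a revised rate are illustrative assumptions, not the paper's specification.

```python
def last_look(quoted: float, current: float, side: int, tolerance: float,
              symmetric: bool = True) -> bool:
    """Stylized last-look check on a deal request; returns True to accept.

    side: +1 if the trader buys (deals on the ask), -1 if the trader sells.
    A symmetric design rejects any price move beyond `tolerance`, in either
    direction; an asymmetric design rejects only moves against the liquidity
    provider while always accepting favourable moves."""
    move_against_lp = side * (current - quoted)
    if symmetric:
        return abs(current - quoted) <= tolerance
    return move_against_lp <= tolerance
```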

    Execution in an aggregator

    An aggregator is a technology that consolidates liquidity—in the form of bid and ask prices and amounts—from multiple sources into a single unified order book to facilitate ‘best-price’ execution. It is widely used by traders in financial markets, particularly those in the globally fragmented spot currency market. In this paper, I study the properties of execution in an aggregator where multiple liquidity providers (LPs) compete for a trader’s uninformed flow. There are two main contributions. Firstly, I formulate a model for the liquidity dynamics and contract formation process, and use this to characterize key trading metrics such as the observed inside spread in the aggregator, the reject rate due to the so-called ‘last-look’ trade acceptance process, the effective spread that the trader pays, as well as the market share and gross revenues of the LPs. An important observation here is that aggregation induces adverse selection: the LP that receives the trader’s deal request will suffer from the ‘winner’s curse’, and this effect grows stronger when the trader increases the number of participants in the aggregator. To defend against this, the model allows LPs to adjust the nominal spread they charge or alter the trade acceptance criteria. This interplay is a key determinant of transaction costs. Secondly, I analyse the properties of different execution styles. I show that when the trader splits her order across multiple LPs, a single provider that has quick market access and for whom it is relatively expensive to internalize risk can effectively force all other providers to join her in externalizing the trader’s flow, thereby maximizing the market impact and aggregate hedging costs. It is therefore not only the number, but also the type of LP and execution style adopted by the trader that determines transaction costs.
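
    As a minimal sketch of the consolidation step only, the function below merges indicative quotes from several hypothetical liquidity providers into a best bid/offer and inside spread; amounts, the last-look trade acceptance process, and the strategic behaviour modelled in the paper are all left out.

```python
from typing import List, Tuple

def aggregate(quotes: List[Tuple[str, float, float]]):
    """Consolidate (lp_name, bid, ask) quotes into a top-of-book view."""
    best_bid = max(quotes, key=lambda q: q[1])   # highest bid across LPs
    best_ask = min(quotes, key=lambda q: q[2])   # lowest ask across LPs
    inside_spread = best_ask[2] - best_bid[1]
    return best_bid, best_ask, inside_spread

# Example: three hypothetical LP quotes on EURUSD.
book = [("LP_A", 1.09998, 1.10004),
        ("LP_B", 1.10000, 1.10005),
        ("LP_C", 1.09997, 1.10003)]
bid, ask, spread = aggregate(book)
print(bid[0], ask[0], f"{spread:.5f}")  # LP_B LP_C 0.00003
```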